
LPNC

Laboratoire de Psychologie et NeuroCognition
34 Projects, page 1 of 7
  • Funder: French National Research Agency (ANR) Project Code: ANR-14-CE26-0034
    Funder Contribution: 360,086 EUR

    The goal of the LACIS project is to demonstrate the validity of a new approach to color and spectral imaging sensors and camera systems. The demonstration will consist of building one or two prototypes that show the functionality of the novel approach and of measuring the improvement over the state of the art. The approach rests on two principles inspired by the human visual system. First, the human retina consists of a mosaic of cone photoreceptors (LMS), but the arrangement of cones in this mosaic varies from individual to individual without impairing the individual's color vision. Generalizing this principle, a color sensor can be built with any arrangement of color samples in the color filter array that covers the camera. This flexibility in sensor colorization makes it possible to optimize the sensor for many types of applications, particularly those that require multispectral encoding. Our prototypes will therefore be equipped with different color filter arrays, and the performance of these different sensors will be tested. Second, instead of being perfectly linear with light intensity, the response of the human retina is non-linear and adaptive. Adaptation to light allows the human visual system to remain sensitive over a large range of light levels despite the noisy nature of retinal cells. We will implement this property on the prototypes in the analog domain, before the analog-to-digital converter, to prevent noise amplification during digitization. A previous prototype has already been built and tested favorably by two members of the project. A new implementation has been proposed for a patent and will be realized in the project. The general goal of the project is to build a demonstrator composed of (1) new filters, either pseudo-random 6x6 RGB or multispectral based on COLOR SHADE technology, (2) a locally adaptive color CMOS sensor, and (3) a motherboard with embedded processing for color or spectral image reconstruction optimized for spatio-spectral information. The demonstrator will be a functioning prototype that delivers images of size 256x256 and exhibits the properties of the new approach to color and spectral sensing. The consortium is composed of three entities: two laboratories (LPNC, TIMA) and a company (SILIOS Technologies). The two laboratories have already worked together on a first prototype of a light-adaptive sensor. TIMA is well recognized in microelectronics and has a long track record in sensor design. LPNC has developed several models of spatio-spectral representation and demosaicing methods, as well as high-dynamic-range and tone-mapping methods inspired by human vision. SILIOS is an SME that develops technology and know-how in micro-optics, more specifically multispectral filters for spectrometry and multispectral imaging. The project will open up new products and skills for the company and new intellectual property for the consortium.
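    As a rough illustration of the two bio-inspired principles (not the LACIS hardware, its patented adaptation circuit, or its reconstruction pipeline), the Python sketch below tiles a hypothetical pseudo-random 6x6 RGB filter arrangement over a 256x256 sensor and applies a simple Naka-Rushton-style local adaptation to the sampled intensities. The function names, the tile contents, the neighborhood size, and the non-linearity are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_random_cfa(height, width, tile=6):
    """Tile a pseudo-random 6x6 arrangement of R/G/B filters over the sensor.
    The arrangement is arbitrary, echoing the variable cone mosaic of the
    human retina (0 = R, 1 = G, 2 = B)."""
    base = rng.integers(0, 3, size=(tile, tile))
    reps = (height // tile + 1, width // tile + 1)
    return np.tile(base, reps)[:height, :width]

def adaptive_response(irradiance, kernel=9):
    """Naka-Rushton-style compression: response = I / (I + local mean).
    The local mean stands in for the analog adaptation stage applied
    before analog-to-digital conversion."""
    pad = kernel // 2
    padded = np.pad(irradiance, pad, mode="reflect")
    local = np.zeros_like(irradiance, dtype=float)
    h, w = irradiance.shape
    for dy in range(kernel):
        for dx in range(kernel):
            local += padded[dy:dy + h, dx:dx + w]
    local /= kernel * kernel
    return irradiance / (irradiance + local + 1e-9)

# A synthetic 256x256 RGB scene with a strong intensity gradient.
gradient = np.outer(np.linspace(0.01, 10.0, 256), np.linspace(0.01, 10.0, 256))
scene = np.stack([gradient, 0.8 * gradient, 0.5 * gradient], axis=-1)

cfa = pseudo_random_cfa(256, 256)
rows, cols = np.indices(cfa.shape)
samples = scene[rows, cols, cfa]        # one color sample per photosite
raw = adaptive_response(samples)        # locally adapted, compressed output
```

    A demosaicing step that knows the mosaic layout would then reconstruct the full color or spectral image from the raw samples; in the project, that reconstruction is assigned to the embedded processing on the motherboard.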

  • Funder: French National Research Agency (ANR) Project Code: ANR-22-FRAL-0012
    Funder Contribution: 312,607 EUR

    This project proposes an innovative approach to studying the role of gesture in social evaluations and learning across development. Social judgements shape our social world and can lead to discrimination or conflict. Despite ample evidence that the language someone speaks and their accent drive social preferences, research to date has not addressed how the gestures that routinely accompany speech influence social evaluation. Gestures, however, are universal, and they show cross-cultural variation. In the proposed project, we will study gesture along with language to uncover the social preferences that result from the integration of multiple communicative cues. We will develop a unique, high-quality set of videos that manipulates the cultural background of the gestures (native vs foreign) and of the language (native vs foreign). We will use this set of videos to test the role of gesture in social preferences (WP1) and social learning (WP2) in 5-year-old children and 12-14-month-old infants. More specifically, we will test how different combinations of gesture and language (both native, both foreign, or mismatched) affect social preferences and learning across development. This project will provide the first evidence of the link between gestural communication and intergroup cognition and of how it unfolds across development. This could lead to new research and breakthroughs in our understanding of gestural communication and its connection to other cognitive processes. The project brings together two experienced developmental scientists with expertise in cultural learning, gesture research and nonverbal communication: Dr Cristina Galusca, a Postdoctoral Researcher at the Neurocognition Laboratory at the Centre National de la Recherche Scientifique in Grenoble, France, and Prof Gerlind Grosse, from the University of Applied Sciences in Potsdam, Germany.

  • Funder: French National Research Agency (ANR) Project Code: ANR-17-CE28-0006
    Funder Contribution: 340,867 EUR

    The cochlear implant (CI) is now widely considered a highly efficient means of restoring auditory function in congenitally deaf children. However, after several decades of retrospective analysis, it is clear that recovery levels vary widely, and in extreme cases some CI recipients never develop adequate oral language skills. The major goal of HearCog, with a view to improving rehabilitation strategies in CI children, is to better understand and circumscribe the origins of this variability in CI outcomes. The originality of the HearCog project is to consider CI outcomes across a broad range of interdependent aspects, from speech perception to speech production and the associated cognitive mechanisms embedded in executive functions. The novelty of the proposal is both theoretical and methodological. The first goal will be to evaluate the capacity of the visual and auditory systems to respond to natural environmental stimuli and to analyze the neuronal mechanisms induced by sensory loss and recovery through the CI, using brain imaging techniques (Functional Near-Infrared Spectroscopy, fNIRS). In view of the co-structuring of speech perception and production during development, we will assess how deafness and CI recovery can alter speech production. Congenital deafness, however, has deleterious impacts that extend beyond auditory functions and encompass cognitive systems, including higher-order executive processes. Based on the disconnection model (Kral et al., 2016), our objective will be to relate neuronal assessments of executive functions, obtained with fNIRS, to auditory restoration in CI children. HearCog is based on a longitudinal assessment of CI infants and age-matched controls, in order to search for prognostic factors of auditory restoration. We will also compare these measurements with data acquired in older CI children implanted for several years, and in controls. Ultimately, our goal is to acquire objective measures of brain reorganization that could be linked to variability in CI outcomes and would therefore constitute a predictive factor. HearCog lies at the crossroads of cognitive neuropsychology and clinical research, with a strong opening toward education. It is consequently translational and multidisciplinary, with the single objective of understanding the compensatory mechanisms induced by congenital hearing loss in order to support both the social integration and the schooling of cochlear-implanted deaf children.

  • Funder: French National Research Agency (ANR) Project Code: ANR-12-CORP-0001
    Funder Contribution: 199,067 EUR

    For more than a century, researchers in psycholinguistics, cognitive psychology, and cognitive science have tried to understand the mental processes underlying visual and spoken word recognition (see e.g., Adelman, 2011; Balota, Yap & Cortese, 2006; Ferrand, 2007; Ferrand, New, Brysbaert, Keuleers, Bonin, Méot, Augustinova & Pallier, 2010; Grainger & Holcomb, 2009; Grainger & Ziegler, 2011; Spinelli & Ferrand, 2005; Dahan & Magnuson, 2006; Pisoni & Levi, 2007). To date, nearly all research has been based on small studies involving a limited set of monosyllabic words selected according to factorial designs, with a limited number of independent variables matched on a series of control variables. The present project aims to supplement previous studies with a new approach, the "megastudy approach", by (1) using multiple regression designs involving very large-scale stimulus sets; (2) investigating the cognitive processes underlying the visual and spoken recognition of more complex words, i.e. polysyllabic and polymorphemic words; and (3) using the psychophysical approach (with a repeated-measures design) developed recently by Keuleers, Lacey, Rastle, and Brysbaert (2011). The project has two main phases. Phase 1 will collect reaction times and percent errors in the visual lexical decision task on about 28,000 French words and 28,000 pseudowords with a small group of participants (n = 100). The 28,000 words (mainly polysyllabic and polymorphemic words of different lengths and frequencies) will be selected from the 130,000 distinct lexical entries available in Lexique (www.lexique.org; New, Pallier, Brysbaert, & Ferrand, 2004). We will also include inflected forms (such as feminine, plural, and verbal forms). Thanks to this mega-corpus, we will provide answers to some important unresolved theoretical issues in the field of visual word recognition. Collected reaction times will be submitted to multiple regression analyses (linear mixed effects: Baayen, Davidson, & Bates, 2008) in order to study the influence of continuous lexical variables that have traditionally been treated as categorical in factorial designs. Phase 2 will collect reaction times and percent errors on the same number of words and pseudowords in a modality never tested before at such a large scale, namely the auditory modality. Megastudies are virtually nonexistent in auditory word recognition research, and the literature on auditory word recognition has been dominated by experimental studies. It is therefore crucial to provide and explore an auditory analogue of what has already been done in visual word recognition. Presenting auditory stimuli requires more effort than presenting visual stimuli, but the effort is worthwhile because factors specific to the auditory modality (e.g., phonological neighborhood density, stimulus duration, uniqueness point) influence auditory word recognition in addition to the usual factors found in visual word recognition (e.g., word frequency, length in letters and syllables, semantic neighbors). To carry out this ambitious project, we have put together a dynamic and interdisciplinary team (whose members have already worked and published together) that is highly competent in psycholinguistics and data mining.
    The collected reaction times and the sophisticated analyses (mixed models) we will conduct will allow us to (1) understand more precisely the functional architecture of the different levels of processing involved in both visual and spoken word recognition, (2) detail the nature of the representations on which these processes operate, and (3) study the type of coding (orthographic, phonological, morphological, semantic) used by these different levels of processing. These results will be crucial for models of reading and spoken word recognition. Overall, this work will lead to a better understanding of the factors at play in visual and spoken word recognition.
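    For concreteness, the sketch below shows the kind of linear mixed-effects regression of reaction times on continuous lexical predictors that the project describes, written in Python with statsmodels on simulated data. The predictors (log_freq, length), the effect sizes, and the random-intercept structure (participants only) are illustrative assumptions; a fully crossed subjects-by-items model in the spirit of Baayen et al. (2008) would typically be fitted with lme4 in R.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated lexical-decision data (illustrative only, not project results).
rng = np.random.default_rng(1)
n_subj, n_word = 20, 200
words = pd.DataFrame({
    "item": np.arange(n_word),
    "log_freq": rng.normal(2.0, 1.0, n_word),   # log word frequency
    "length": rng.integers(3, 12, n_word),      # length in letters
})

# Every participant responds to every word (repeated-measures design).
trials = words.loc[words.index.repeat(n_subj)].reset_index(drop=True)
trials["subject"] = np.tile(np.arange(n_subj), n_word)

# Hypothetical generative model: frequency speeds responses, length slows them.
trials["rt"] = (700
                - 30 * trials["log_freq"]
                + 10 * trials["length"]
                + rng.normal(0, 50, len(trials)))

# Continuous predictors enter the regression directly instead of being
# dichotomized as in factorial designs; random intercepts for participants.
model = smf.mixedlm("rt ~ log_freq + length", trials, groups=trials["subject"])
print(model.fit().summary())
```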

  • Funder: French National Research Agency (ANR) Project Code: ANR-21-CE37-0017
    Funder Contribution: 299,512 EUR

    During speech, singing, or music playing, the auditory feedback involves both an aerial component received by the external ear and an internal vibration: the ‘bone conduction’ component. While the speaker or musician hears both components, a listener hears only the aerial part. A person, child or adult, must therefore learn to control oral sound production on the basis of information that differs from the information communicated. Since von Bekesy (1949), studies have consistently found that about half of the cochlear signal comes from bone conduction, but the information it conveys, and how it affects oral motor control, remains unclear. Previous studies have highlighted important differences in spectral balance between the aerial and bone-conducted signals during speech, but these studies have not led to an understanding of their possible difference in informational content. Besides, nearly nothing is known of the bone-conducted feedback of other oral audiomotor behaviors such as singing or playing a wind instrument through a mouthpiece (although the modulation of auditory feedback by ear protectors is obvious, and some studies have noted its behavioral consequences). Recent preliminary findings of our consortium suggest that specific information exists in the bone-conducted signal of speech, in particular information related to articulator (tongue) position. This intriguing observation warrants further examination and raises several questions. How does the bone-conducted component differ from the aerial component in general, during oral audiomotor tasks (speech, singing, playing a wind instrument), and can we explain these differences, e.g. link them to articulator motion? Are these differences typical, or does bone-conducted auditory feedback vary significantly among individuals, which could explain behavioral idiosyncrasies? Can we recover the complete auditory signal that subjects receive during oral audiomotor tasks, that is, including a faithful rendering of its bone-conducted part? How does bone conduction affect the perception of one's own production in speech and music; in particular, does it lead to biases in auditory perception? Finally, does bone-conducted sound guide audiomotor behavior, or, in other words, is sound production guided by sounds that cannot be perceived by the interlocutor or the audience? The aim of the present project is to tackle these questions by combining (1) careful experimental extraction of the bone-conducted component through deep in-ear recording during speech and music production, using a specially developed experimental apparatus; (2) a modeling approach, using signal processing, statistical, and information-theoretic tools; (3) experimental psychoacoustics to analyze auditory perception; and (4) a sensory modification method, for which a novel technique based on sound cancellation will be developed, in order to demonstrate the behavioral consequences of a perturbation of bone conduction. Answers to these questions should help assess the role of the invisible part of the auditory iceberg, clarify how the central nervous system uses auditory feedback even when the acoustic communicative goal differs, and pave the way for further research on audiomotor control, in particular its short-term flexibility and longer-term plasticity.
    Our consortium brings together specialists in sensorimotor control, acoustics, phonetics, psychoacoustics, music, and modeling around this undertaking, which should contribute to behavioral/cognitive neuroscience, phonetics, and artistic practice, and could translate down the road into improvements in speech therapy, speech communication systems, and ear protection devices.
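    As a toy illustration of the signal-analysis side of the project (not its recording apparatus, transfer-function estimates, or cancellation technique), the Python sketch below compares the spectral balance of a synthetic ‘aerial’ signal with a crudely low-pass-filtered stand-in for its bone-conducted counterpart using Welch spectra. The signals, the filter, and the frequency bands are all invented assumptions.

```python
import numpy as np
from scipy import signal

fs = 16000                                  # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(2)

# Crude stand-in for a voiced speech segment: harmonics of a 120 Hz
# fundamental plus a little noise (illustrative, not a real recording).
aerial = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 30))
aerial = aerial + 0.05 * rng.standard_normal(len(t))

# Assume the bone-conducted path attenuates high frequencies (a crude
# assumption standing in for the unknown bone-conduction transfer function).
b, a = signal.butter(4, 1000, btype="low", fs=fs)
bone = signal.lfilter(b, a, aerial)

# Compare the spectral balance of the two feedback components.
f, p_air = signal.welch(aerial, fs=fs, nperseg=2048)
_, p_bone = signal.welch(bone, fs=fs, nperseg=2048)
ratio_db = 10 * np.log10((p_bone + 1e-12) / (p_air + 1e-12))

for lo, hi in [(0, 500), (500, 2000), (2000, 8000)]:
    band = (f >= lo) & (f < hi)
    print(f"{lo:>5}-{hi:<5} Hz: bone/air level difference "
          f"{ratio_db[band].mean():+.1f} dB")
```

    In the project itself, the synthetic signals would be replaced by paired external-ear and deep in-ear recordings, and the comparison would draw on richer statistical and information-theoretic measures.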
