
Institut de Recherche et de Coordination Acoustique-Musique (IRCAM)

Country: France


8 Projects, page 1 of 2
  • Funder: French National Research Agency (ANR) Project Code: ANR-19-CE37-0022
    Funder Contribution: 360,300 EUR

    Autism spectrum disorder (ASD) diagnosis requires both symptoms of socio-emotional impairment (e.g. not responding to smiles) and repetitive behaviors/restricted interests (e.g. not reacting in a typical way to changes or regularities in the environment). While the two aspects clearly interact (how could one react to smiles without attending to changing facial expressions?), little research has attempted to integrate them: whether difficulties in ASD are related to specifically emotional deficits, to sensory particularities that are even more marked for social stimuli, or to both, remains to be established. Using smiling voice as an experimental model, project SEPIA (Sensory and Emotional Processing In Autism spectrum disorders) proposes to explore all three major steps of the emotional perception-representation-action loop (sensory processing, perceptual representation, motor resonance) in the same child and adult participants.

    At the sensory level (WP1), the project will study brain correlates of auditory regularity encoding and change detection in ASD, and how these depend on the emotional nature of the information. To do so, we will use a new electrophysiological (EEG) ‘roving’ paradigm in which participants are presented with trains of repeated smiling or neutral voices while Repetition-Suppression (RS) and mismatch negativity (MMN) responses are measured.

    At the perceptual representation level (WP2), the project will provide a full characterization of how ASD participants internally represent the spectral signature of smiled speech, and how these representations depart from those of controls. To do so, we will use a recent ‘reverse-correlation’ paradigm in which participants are asked to evaluate the ‘smiliness’ of voice recordings algorithmically manipulated to have random spectral content, and their responses are reverse-engineered to find which spectral information correlates with their judgements.

    Finally, at the motor resonance level (WP3), the project will study the mechanisms by which vocal smiles evoke automatic facial and autonomic reactions, and how these may be affected in ASD. To do so, we will use a novel facial electromyography (EMG) paradigm, combined with pupillometry, in which participants listen to vocal expressions experimentally manipulated to be smiling or non-smiling while their own zygomatic and pupil reactions are monitored.

    Most importantly, by assessing all three steps of the perception-representation-action chain in the same participants, project SEPIA offers the unique potential to identify their respective interactions in ASD sensory and socio-emotional difficulties, e.g. whether potential deficits in regularity processing (WP1) correlate with atypical perceptual representations (WP2), or whether facial mimicry (WP3) is facilitated for stimuli that are prototypical of a participant’s perceptual representation. This integrative approach, made possible by the combined expertise of INSERM UMR1253 in clinical and EEG research and of CNRS-IRCAM in innovative psychoacoustical techniques, provides a rare opportunity to disentangle sensory and emotion-related processes in ASD, to determine at which level pathophysiological processes operate, and to gain novel mechanistic insights into the socio-emotional difficulties at play in autism. These outcomes will be key to defining precise targets for behavioral and cognitive therapies and for educational interventions tailored to each subgroup of patients.
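
    The ‘reverse-correlation’ analysis outlined for WP2 essentially relates random spectral manipulations to listeners’ judgements. As a purely illustrative sketch, not the SEPIA analysis pipeline, the snippet below simulates such an analysis; the trial count, number of spectral bands and the simulated listener are hypothetical.

```python
# Hypothetical sketch of a first-order reverse-correlation analysis (not the
# SEPIA pipeline): estimate which spectral bands drive "smiling" judgements
# from random spectral filters and binary responses.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bands = 500, 25                      # assumed trial and band counts

# Random per-trial spectral gain profiles (dB) applied to the voice stimuli
filters = rng.normal(0.0, 3.0, size=(n_trials, n_bands))

# Simulated listener whose internal "smile" template boosts a mid-frequency region
template = np.zeros(n_bands)
template[10:15] = 1.0
responses = (filters @ template + rng.normal(0.0, 2.0, n_trials)) > 0  # True = "smiling"

# Classification kernel: mean filter of "smiling" trials minus mean filter of
# "non-smiling" trials, an estimate of the listener's internal representation
kernel = filters[responses].mean(axis=0) - filters[~responses].mean(axis=0)
print(np.round(kernel, 2))                       # peaks roughly where the template is non-zero
```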

  • Funder: French National Research Agency (ANR) Project Code: ANR-19-CE23-0023
    Funder Contribution: 630,547 EUR

    Audition is a key modality for understanding and interacting with our spatial environment, and it plays a major role in Augmented Reality (AR) applications. The HAIKUS project investigates the use of Artificial Intelligence (AI) for synthesising augmented acoustic scenes. Embedding computer-generated or pre-recorded auditory content into a user's real acoustic environment creates an engaging and interactive experience that can be applied to video games, museum guides or radio plays. Audio-signal processing tools for real-time 3D sound spatialisation and artificial reverberation are now mature and can handle both multichannel loudspeaker systems and binaural rendering over headphones. However, the seamless and congruent integration of computer-generated and pre-recorded objects within a live context remains challenging: it requires automatically adapting the rendering of virtual objects to the acoustic properties of the user's real environment. Among the subcategories of AI, machine learning (ML) is well suited to audio processing in virtual and augmented reality applications, and has shown strong potential for solving complex acoustic problems such as sound source localisation or source separation. In the HAIKUS project, ML is applied to the identification and manipulation of the acoustic channels between the sources and the listener.

    The three main objectives of the project are (a) the blind estimation of room acoustic parameters and/or the room geometry from observed reverberant audio signals originating from live sounds occurring in the room, (b) the inference of plausible rules for modifying the spatialisation parameters, and of methods for interpolating between room impulse responses, according to the movement of the listener, and (c) the blind estimation of the listener's HRTFs from binaural signals captured in a real environment with in-ear microphones. All three objectives benefit from the mobility of the listener, which allows knowledge about the acoustic environment to be accumulated gradually. The HAIKUS project brings together three research teams with complementary expertise in signal processing, machine learning, acoustics and audio technology. The general methodology combines statistical methods, acoustic modelling and machine learning, and the scientific programme is structured around the three main objectives. Each objective requires the development of statistical deep regression methods that map audio features extracted from the observed signals to the acoustic parameters to be estimated; each tackles the problem from a different perspective, i.e. with different input and output features and different assumptions about the known and unknown variables. Learning the mapping between the observed audio features and the target acoustic parameters requires the creation of dedicated audio datasets, built either from numerical modelling or from real-world recordings.

    The scientific results will be disseminated in publications and conferences representative of signal processing, acoustics and audio. Besides the theoretical results, practical outcomes will include the development of a high-order spherical microphone array. In the spirit of open research, the generated or collected audio datasets will be made publicly available to serve the scientific community. Given the increasing interest in applications of machine learning and auditory scene analysis, two workshops will be organised during the project. The workshops will address both the scientific community and companies involved in audio augmented reality (AAR) research and development, as well as other potential application domains such as audio/video gaming, cultural heritage, professional audio production and broadcasting. The work dedicated to the personalisation of HRTFs from binaural recordings should lead to an original web-based solution for personalised binaural rendering accessible to any consumer.
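
    As a purely illustrative sketch of the deep regression mapping mentioned above, the snippet below runs one training step of a small feed-forward network that maps per-utterance audio features to room-acoustic parameters. The feature dimensionality, network size, and the choice of reverberation time (RT60) and direct-to-reverberant ratio (DRR) as targets are assumptions, not the HAIKUS design.

```python
# Hypothetical sketch (PyTorch): regress room-acoustic parameters from audio features.
# Feature extraction, targets and architecture are illustrative assumptions only.
import torch
import torch.nn as nn

class RoomParamRegressor(nn.Module):
    def __init__(self, n_features=64, n_targets=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_targets),            # e.g. [RT60 in seconds, DRR in dB]
        )

    def forward(self, x):
        return self.net(x)

model = RoomParamRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a stand-in mini-batch of (features, annotated parameters)
features = torch.randn(32, 64)                    # e.g. features from simulated reverberant speech
targets = torch.rand(32, 2)                       # ground-truth acoustic parameters of each room
loss = nn.functional.mse_loss(model(features), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

    In practice such a model would be trained on dedicated datasets of the kind the project plans to build, whether simulated or recorded.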

  • Funder: French National Research Agency (ANR) Project Code: ANR-19-CE38-0001
    Funder Contribution: 774,198 EUR

    The study of singing style in popular music is an emerging branch of musicology, while singing-style effects have become a central part of most popular music productions, which rely on the few effect plug-ins available today. ARS aims to establish a mutually beneficial collaboration between musicologists working on singing performance and specialists in signal processing, with the following objectives: 1) to exploit advances in voice signal processing and deep learning for musicological research on singing style, and 2) to develop new algorithms for high-quality expressive singing voice transformation that diversify and enrich the palette of artistic expression in popular music. Musicologists will contribute to the development of singing effects with their expertise in musically and artistically relevant singing-style features, while signal-processing specialists will establish robust analysis algorithms for musicologists to study singing style in real music performances, as well as innovative singing voice transformation algorithms that allow singing style to be modified in music productions.

  • Funder: French National Research Agency (ANR) Project Code: ANR-09-SSOC-0068
    Funder Contribution: 260,000 EUR
  • Funder: French National Research Agency (ANR) Project Code: ANR-19-CE33-0010
    Funder Contribution: 689,031 EUR

    Improvisation can be seen as a major driving force in human interactions, strategic in every aspect of communication and action. In its highest form, musical improvisation is a mixture of structured, planned, directed action and of hardly predictable local decisions and deviations that optimize adaptation to the context, express the creative self in a unique way, and stimulate coordination and cooperation between agents. Setting up powerful and realistic human-machine environments for improvisation requires going beyond the software engineering of creative agents with audio-signal listening and generation capabilities. This project proposes to drastically renew the paradigm of human-machine improvised interaction by establishing a continuum from the logics of co-creative improvising agents to a form of “physical interreality” (a mixed-reality scheme in which the physical world is actively modified) embedded in acoustic instruments and involving full embodiment for musicians.

