In everyday life, we are immersed in a multitude of sounds. Sound waves convey language, emotion, and other vital information about events in the environment. To recognize sounds, we effortlessly analyze and combine their elementary features. For example, we recognize a high-pitched, rapidly fluctuating sound as birdsong, whereas a low, more slowly fluctuating sound is recognized as the voice of a colleague. Previous research in animals has suggested several neuronal mechanisms for the analysis and combination of sound features. However, the insufficient spatial resolution of non-invasive methods has precluded investigating whether these mechanisms are present and relevant for human listening in natural environments. In this project, I will use 7 Tesla functional magnetic resonance imaging (fMRI) and a novel analysis method to study the neural mechanisms underlying feature processing of natural sounds in the human brain. My results will provide a detailed view of the neural basis of human audition, bridging the gap with findings from animal research. Furthermore, they may provide the methodological basis for similar investigations throughout the brain. I will perform this research at the Center for Magnetic Resonance Research (CMRR) in Minneapolis (USA), which provides unique facilities and immense technical expertise in MRI at ultra-high magnetic fields.