Daily language use contains many (implicit) expressions about social categories, such as minority, age, and gender groups. These expressions construct and maintain stereotypes and, often unknowingly, feed prejudice and discrimination. In this project we develop a thorough understanding and awareness of stereotype communication by studying when and how stereotypes are reflected in, and inferred from, spontaneously produced natural language. In addition, we develop a software toolset that automatically detects stereotypes and prejudice in texts. This toolset can uncover implicit biases in a variety of real-life contexts and thereby facilitates further (applied) research, content monitoring, correction, prevention, and education about stereotyping.