The HYBRINFOX project aims to contribute to the fight against online misinformation by studying and developing possible synergies between symbolic AI and deep learning approaches for the detection of fake news (known in French as "infox"). Its main lever is the identification of vague information, which is likely to introduce or promote bias (subjectivity, evaluativity).

The project continues a RAPID program entitled DIEKB ('Disinformation Identification in Evolving Knowledge Bases', 2019-2022) between the CoLoR team at INSTITUT JEAN-NICOD (Paul Egré, Benjamin Icard, Thomas Souverain), MONDECA (Ghislain Atemezing), and AIRBUS (Sylvain Gatepaille, Guillaume Gadek, Souhir Gabiche, Paul Guélorget). That research produced promising results whose success calls for new resources and for scaling up (funding of two postdocs, integration of the current prototypes). This development justifies in particular the association of a new partner, the LinkMedia team at IRISA (represented by Vincent Claveau), which specializes in deep learning and natural language processing for the identification of fake news.

The leading hypothesis behind the project is that certain lexical markers of semantic vagueness, in particular evaluative adjectives, which favor subjective interpretations, provide a relevant cue that a text may be false, biased, or unreliable. This hypothesis was tested at the end of 2021 through the development of a symbolic AI algorithm, the VAGO tool, and its comparison with a deep learning-based algorithm, the FAKE-CLF classifier. VAGO measures both the vagueness versus precision of a text and its subjectivity (opinion) versus objectivity (factual character). Comparison with the results of FAKE-CLF shows a positive correlation between the subjectivity scores measured by VAGO and the falsity scores predicted by FAKE-CLF.
This result opens up several avenues of hybridization between the two methods, which the HYBRINFOX program proposes to develop. The project's ambition is both scientific and industrial. First, we aim to make deep learning methods exemplified by classifiers like FAKE-CLF explainable through symbolic AI and the use of explicit semantic rules. Second, we aim to leverage the symbolic AI method developed with VAGO to improve the performance of the deep learning models and, conversely, to enrich VAGO's lexicon and underlying typology in order to refine the identification of textual falsity cues. Finally, we aim to better delineate the boundary between truthful and misleading uses of linguistic vagueness in discourse, by training and testing deep learning-based algorithms on more or less vague or precise corpora. By bringing together research partners (IJN, IRISA) and industrial partners (Mondeca, Airbus), the project will test the developed tools on novel use cases, including defense applications.
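To illustrate the kind of lexical-marker scoring the hypothesis rests on, here is a minimal, hypothetical sketch of a lexicon-based vagueness and subjectivity scorer. It is not the actual VAGO implementation: the toy marker lists and the simple token-ratio scores are assumptions for illustration only, whereas VAGO relies on a much richer lexicon and typology of vague expressions.

```python
# Hypothetical sketch of lexicon-based vagueness/subjectivity scoring,
# in the spirit of VAGO (NOT the actual VAGO tool or lexicon).
import re

# Toy marker lists (assumptions for illustration; the real typology is richer).
EVALUATIVE_MARKERS = {"good", "bad", "terrible", "beautiful", "important", "huge"}
OTHER_VAGUE_MARKERS = {"tall", "many", "few", "recently", "somewhat", "around"}

def vagueness_scores(text: str) -> dict:
    """Return simple per-text ratios of vague and evaluative markers."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return {"vagueness": 0.0, "subjectivity": 0.0}
    n = len(tokens)
    evaluative = sum(1 for t in tokens if t in EVALUATIVE_MARKERS)
    vague = sum(1 for t in tokens if t in OTHER_VAGUE_MARKERS)
    return {
        # vagueness counts vague markers of any kind, evaluative ones included
        "vagueness": (vague + evaluative) / n,
        # subjectivity counts only evaluative (opinion-inducing) markers
        "subjectivity": evaluative / n,
    }

print(vagueness_scores("This terrible law will hurt many good people"))
# → {'vagueness': 0.375, 'subjectivity': 0.25}
```

Under the project's hypothesis, texts scoring high on such a subjectivity measure would be flagged as candidates for falsity or bias, to be cross-checked against a deep learning classifier's predictions.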