
WEBLYZARD TECHNOLOGY GMBH (WLT)
Country: Austria
13 Projects, page 1 of 3
  • Funder: European Commission; Project Code: 687786 (InVID)
    Overall Budget: 3,765,710 EUR; Funder Contribution: 3,115,740 EUR

    In video veritas, to adapt the old Latin saying: in video, there is truth! The digital media revolution and the convergence of social media with broadband wired and wireless connectivity are bringing breaking news to online video platforms; news organisations delivering information via Web streams and TV broadcast often rely on user-generated recordings of breaking and developing news events shared on social media to illustrate the story. However, in video there is also deception. Access to increasingly sophisticated editing and content management tools, and the ease with which fake information spreads in electronic networks, require reputable news outlets to carefully verify third-party content before publishing it, reducing their ability to break news quickly while increasing costs in times of tight budgets. InVID will build a platform providing services to detect, authenticate and check the reliability and accuracy of newsworthy video files and video content spread via social media. This platform will enable novel newsroom applications for broadcasters, news agencies, web pure players, newspapers and publishers to integrate social media content into their news output without struggling to know whether they can trust the material or how to reach the user to ask permission for re-use. It will ensure that verified and rights-cleared video content is readily available for integration into breaking and developing news reports. Validated by real customer pilots, InVID will help protect the news industry from distributing fakes and falsehoods, and from the resulting loss of reputation and ... lawsuits. The InVID platform and applications will be validated and qualified through several development and validation cycles. They will be pilot-tested by three leading institutions in the European news industry ecosystem: AFP (the French News Agency), DW (Deutsche Welle), and APA (the Austria Press Agency), and will create new exploitation possibilities for all consortium members.

  • Funder: CHIST-ERA; Project Code: CHIST-ERA-19-XAI-003 (CIMPLE)

    Explainability is of significant importance in the move towards trusted, responsible and ethical AI, yet remains in its infancy. Most relevant efforts focus on the increased transparency of AI model design and training data, and on statistics-based interpretations of resulting decisions. The understandability of such explanations and their suitability to particular users and application domains have received very little attention so far. Hence there is a need for an interdisciplinary and drastic evolution in XAI methods, to design more understandable, reconfigurable and personalisable explanations. Knowledge Graphs offer significant potential to better structure the core of AI models, and to use semantic representations when producing explanations for their decisions. By capturing the context and application domain in a granular manner, such graphs offer a much-needed semantic layer that is currently missing from typical brute-force machine learning approaches. Human factors are key determinants of the success of relevant AI models. In some contexts, such as misinformation detection, existing XAI technical explainability methods do not suffice, as the complexity of the domain and the variety of relevant social and psychological factors can heavily influence users’ trust in derived explanations. Past research has shown that presenting users with true/false credibility decisions is inadequate and ineffective, particularly when a black-box algorithm is used. To this end, CIMPLE aims to experiment with innovative social and knowledge-driven AI explanations, and to use computational creativity techniques to generate powerful, engaging, and easily and quickly understandable explanations of rather complex AI decisions and behaviour. These explanations will be tested in the domain of detection and tracking of manipulated information, taking into account social, psychological and technical explainability needs and requirements.

  • Funder: European Commission; Project Code: 619706
  • Funder: European Commission; Project Code: 101070305 (ENEXA)
    Overall Budget: 3,991,270 EUR; Funder Contribution: 3,991,270 EUR

    Explainable Artificial Intelligence (AI) is key to achieving a human-centred and ethical development of digital and industrial solutions. ENEXA builds upon novel and promising results in knowledge representation and machine learning to develop scalable, transparent and explainable machine learning algorithms for knowledge graphs. The project focuses on knowledge graphs because of their critical role as an enabler of new solutions across domains and industries in Europe. Some of the existing machine learning approaches for knowledge graphs are already known to provide guarantees with respect to their completeness and correctness. However, they remain impossible or impractical to deploy on real-world data due to the scale, incompleteness and inconsistency of knowledge graphs in the wild. We devise approaches that maintain formal guarantees pertaining to completeness and correctness while being able to exploit different representations of knowledge graphs in a concurrent fashion. With our new methods, we plan to achieve significant advances in the efficiency and scalability of machine learning, especially on knowledge graphs. A supplementary innovation of ENEXA lies in its approach to explainability. Here, we focus on devising human-centred explainability techniques based on the concept of co-construction, where human and machine enter a conversation to jointly produce human-understandable explanations. Three use cases on business software services, geospatial intelligence and data-driven brand communication have been chosen to apply and validate this new approach. Given their expected growth rates, these sectors will play a major role in future European data value chains.

  • Funder: French National Research Agency (ANR); Project Code: ANR-21-CHR4-0005 (CIMPLE)
    Funder Contribution: 296,842 EUR

    Explainability is of significant importance in the move towards trusted, responsible and ethical AI, yet remains in its infancy. Most relevant efforts focus on the increased transparency of AI model design and training data, and on statistics-based interpretations of resulting decisions (interpretability). Explainability considers how AI can be understood by human users. The understandability of such explanations and their suitability to particular users and application domains have received very little attention so far. Hence there is a need for an interdisciplinary and drastic evolution in XAI methods. CIMPLE will draw on models of human creativity, both in manipulating and understanding information, to design more understandable, reconfigurable and personalisable explanations. Human factors are key determinants of the success of relevant AI models. In some contexts, such as misinformation detection, existing XAI technical explainability methods do not suffice, as the complexity of the domain and the variety of relevant social and psychological factors can heavily influence users’ trust in derived explanations. Past research has shown that presenting users with true/false credibility decisions is inadequate and ineffective, particularly when a black-box algorithm is used. Knowledge Graphs offer significant potential to better structure the core of AI models, and to use semantic representations when producing explanations for their decisions. By capturing the context and application domain in a granular manner, such graphs offer a much-needed semantic layer that is currently missing from typical brute-force machine learning approaches. To this end, CIMPLE aims to experiment with innovative social and knowledge-driven AI explanations, and to use computational creativity techniques to generate powerful, engaging, and easily and quickly understandable explanations of rather complex AI decisions and behaviour. These explanations will be tested in the domain of detection and tracking of manipulated information, taking into account social, psychological and technical explainability needs and requirements.

