EXIDA DEV
3 Projects
Project: SHASAI (2025 - 2029)
Open Access Mandate for Publications and Research data
Partners: SPRITZ MATTER, BEE MOBILITY SOLUTIONS OTOMOTIV SANAYI VE TICARET AS, NEOTERA S.R.L., Ikerlan, IRISBOND, MGEP, NXP (Germany), KEYSIGHT RISCURE, EXIDA DEV, STICHTING RADBOUD UNIVERSITEIT, EUROPEAN SCIENCE COMMUNICATION INSTITUTE (ESCI) GGMBH, NXP (Netherlands), BSC, LKS S COOP, SMART CONTROL SYSTEMS AND SOFTWARE JOINT STOCK COMPANY, Infineon Technologies (Germany), NEC LABORATORIES EUROPE GMBH
Funder: European Commission
Project Code: 101225866
Funder Contribution: 5,999,510 EUR

SHASAI targets the intersection of HW/SW security and AI-based high-risk systems, aiming to enhance the security, resilience, automated testing, and continuous assessment of AI systems. The rising interest in these systems makes them attractive targets for threat actors because of their complexity and valuable data. Securing an AI system means safeguarding its models, datasets, and dependencies as well as the underlying HW/SW infrastructure. SHASAI takes a holistic approach to AI system security across the lifecycle stages. At requirement definition, SHASAI provides an enhanced risk assessment methodology for secure and safe AI. At design, SHASAI will propose secure and safe design patterns at the SW and HW level to achieve trustworthy AI systems. During implementation, SHASAI provides tooling for a secure supply chain by analyzing vulnerabilities in SW/HW dependencies, detecting poisoned data and backdoors in pretrained models, scanning for software vulnerabilities, hardening hardware platforms, and safeguarding intellectual property. At evaluation, SHASAI offers a virtual testing platform with automated attack and defense test suites to assess security against AI-specific and infrastructure-specific threats. In operation, AI-enhanced security services continuously monitor the system, detect anomalies, and mitigate attacks using AI firewalls and attestation methods, ensuring availability and integrity. The feasibility of the SHASAI methods and tools will be demonstrated in three real scenarios:
1. Agrifood industry: cutting machines.
2. Health: eye-tracking systems in augmentative and alternative communication.
3. Automotive: tele-operated last-mile delivery vehicle.
Their heterogeneity and complementarity maximize the transferability of the solutions. SHASAI will contribute to scientific, techno-economic, and societal impacts, as it aligns with the CRA, the EU AI Act, NIS2 and the CSA, sharing and commercializing methods and tools to ensure trustworthy AI components.
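The abstract mentions detecting poisoned data in training sets and pretrained models. Purely as an illustration of what such a check can look like (this is not SHASAI tooling; the function, thresholds, and data are assumptions), a minimal sketch below flags training samples whose label disagrees with most of their nearest neighbours in feature space, a common label-poisoning heuristic.

# Illustrative sketch only: flag potentially poisoned training samples by
# checking label agreement with their k nearest neighbours in feature space.
# Feature extractor, thresholds, and data are assumptions, not SHASAI tooling.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def flag_suspicious_labels(features: np.ndarray, labels: np.ndarray,
                           k: int = 10, agreement: float = 0.3) -> np.ndarray:
    """Return indices of samples whose label is shared by fewer than
    `agreement` of their k nearest neighbours."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)          # idx[:, 0] is the sample itself
    neighbour_labels = labels[idx[:, 1:]]     # shape (n_samples, k)
    match = (neighbour_labels == labels[:, None]).mean(axis=1)
    return np.where(match < agreement)[0]

# Toy example with random embeddings and labels (replace with real data).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))
y = rng.integers(0, 5, size=500)
print(flag_suspicious_labels(X, y)[:10])

In practice such a filter would run over embeddings from a trusted feature extractor and would be only one signal among several in a supply-chain scan.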
Project: EdgeAI-trust (2024 - 2027)
Open Access Mandate for Publications and Research data
Partners: NVIDIA DENMARK APS, FENTISS, TTTechAuto Spain, TEGnology ApS, TECHNEXT, IOTAM INTERNET OF THINGS APPLICATIONS AND MULTI LAYER DEVELOPMENT LTD, DLR, IFEVS, Solver IA, SMART CONTROL SYSTEMS AND SOFTWARE JOINT STOCK COMPANY, UPV, ZETTASCALE TECHNOLOGY SARL, JSI, ASVIN GMBH, Infineon Technologies (Austria), SMARTSOL SIA, CLUE, VIF, TTTech Computertechnik (Austria), TTTech Germany GmbH, RAPITA SYSTEMS SL, SIRRIS, URCA, MEDISYS MONITORATE IKE INSTITUTE OF COMPUTING SYSTEMS AND ART INTELLIGENCE, AVIMECC SPA, BEE MOBILITY SOLUTIONS OTOMOTIV SANAYI VE TICARET AS, VELTI PLATFORMS AND SERVICES LIMITED, TECHNICA ELECTRONICS BARCELONA SL, EXIDA DEV, AVL, IVEX NV, BUYUTECH TEKNOLOJI SANAYI VE TICARET ANONIM SIRKETI, NEOTERA S.R.L., ZF FRIEDRICHSHAFEN AG, TTTECH AUTO AG, TOFAS, ST, ΕΛΜΕΠΑ, VRANKEN-POMMERY PRODUCTION, University of Siegen, Mulytic, STMicroelectronics (Switzerland), AVL TURKIYE, Infineon Technologies (Germany), Offenburg University of Applied Sciences, STGNB 2 SAS, Hamm-Lippstadt University of Applied Sciences, BSC
Funder: European Commission
Project Code: 101139892
Overall Budget: 38,208,300 EUR
Funder Contribution: 11,006,200 EUR

EdgeAI-trust aims to develop a domain-independent architecture for decentralized edge AI, along with HW/SW edge AI solutions and tools that enable fully collaborative AI and learning at the edge. The edge AI technologies address key challenges faced by Europe's industrial and societal sectors, such as energy efficiency, system complexity and sustainability. EdgeAI-trust will deliver large-scale edge AI solutions that provide interoperability, upgradeability, reliability, safety, security and societal acceptance, with a focus on explainability and robustness. Toolchains will provide standardized interfaces for developing, optimizing and validating edge AI solutions in heterogeneous systems. The generic results will be instantiated for automated vehicles, production and agriculture, thus offering innovation potential not only in the generic HW/SW technologies and tools but also in the three target domains. These technological innovations are complemented by business strategies and community building, ensuring widespread uptake of the innovations in Europe. EdgeAI-trust will establish sustainable impact by building open edge AI platforms and ecosystems, with a focus on standardization, supply chain integrity, environmental impact, benchmarking frameworks, and support for open-source solutions. The consortium consists of major suppliers and OEMs covering a broad range of application domains, supported by leading research and academic organizations. By embracing the opportunity to specialize in edge AI, Europe can maintain its position in the global context, especially as this aligns with decentralized and privacy-driven European policy. Furthermore, as AI is closely connected with the Green Deal, the project can provide solutions for environmental issues. Ultimately, the project will enable AI to be connected with other strong sectors and industries, improving the innovation process and decision-making in Europe.
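The abstract describes fully collaborative AI and learning at the edge. A common building block for this kind of decentralized learning is federated averaging, sketched below only as an illustration of the general idea (the functions, toy linear model, and data are assumptions, not EdgeAI-trust deliverables).

# Illustrative federated-averaging (FedAvg) sketch: edge nodes train locally
# and share only model weights, which a coordinator averages. All names and
# the toy linear model are assumptions, not part of the EdgeAI-trust toolchain.
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One edge node: a few gradient-descent epochs on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w: np.ndarray, node_data) -> np.ndarray:
    """Average locally updated weights, weighted by each node's dataset size."""
    updates, sizes = [], []
    for X, y in node_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy example with three simulated edge nodes; raw data never leaves a node.
rng = np.random.default_rng(1)
true_w = rng.normal(size=8)
nodes = []
for _ in range(3):
    X = rng.normal(size=(100, 8))
    nodes.append((X, X @ true_w + 0.01 * rng.normal(size=100)))
w = np.zeros(8)
for _ in range(20):
    w = federated_round(w, nodes)
print(np.round(w - true_w, 3))  # residual error; should be close to zero

The design choice that matters here is that only weights cross the network, which is why this pattern fits the decentralized, privacy-driven setting the abstract emphasizes.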
Project: SAFEXPLAIN (2022 - 2025)
Open Access Mandate for Publications and Research data
Partners: AIKO, NAVINFO, Ikerlan, RISE, EXIDA DEV, BSC
Funder: European Commission
Project Code: 101069595
Overall Budget: 3,891,880 EUR
Funder Contribution: 3,891,880 EUR

Deep Learning (DL) techniques are key for most future advanced software functions in Critical Autonomous AI-based Systems (CAIS) in cars, trains and satellites. Hence, those CAIS industries depend on their ability to design, implement, qualify, and certify DL-based software products under bounded effort and cost. There is a fundamental gap between the Functional Safety (FUSA) requirements of CAIS and the nature of the DL solutions needed to satisfy those requirements. The lack of transparency (mainly explainability and traceability) and the data-dependent, stochastic nature of DL software clash with the need for deterministic, verifiable, pass/fail test-based software solutions for CAIS. SAFEXPLAIN tackles this challenge by providing a novel and flexible approach that allows the certification, and hence adoption, of DL-based solutions in CAIS by (1) architecting transparent DL solutions that make it possible to explain why they satisfy FUSA requirements, with end-to-end traceability, with specific approaches to explain whether predictions can be trusted, and with strategies to reach (and prove) correct operation, in accordance with certification standards. SAFEXPLAIN will also (2) devise alternative and increasingly complex FUSA design safety patterns for different DL usage levels (i.e. with varying safety requirements) that will allow using DL in any CAIS functionality, for varying levels of criticality and fault tolerance. SAFEXPLAIN brings together a highly skilled and complementary consortium to tackle this endeavor, including three research centers, RISE (AI expertise), IKR (FUSA expertise), and BSC (platform expertise), and three CAIS case studies: automotive (NAV), space (AIKO), and railway (IKR). SAFEXPLAIN DL-based solutions are assessed in an industrial toolset (EXI). Finally, to prove that the transparency levels are fully compliant with FUSA, the solutions are reviewed by internal certification experts (EXI) and by external experts subcontracted for an independent assessment.
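The abstract refers to approaches that explain whether a prediction can be trusted. One simple, widely used runtime proxy, shown here only as an illustrative sketch and not as SAFEXPLAIN's method, is to gate predictions on the normalized entropy of the softmax distribution; the threshold and example logits below are assumptions.

# Illustrative sketch: a runtime "can this prediction be trusted?" gate based
# on softmax entropy. Threshold and inputs are assumptions, not SAFEXPLAIN's
# certified approach.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def trusted_prediction(logits: np.ndarray, max_entropy: float = 0.5):
    """Return (predicted class, trusted flag); the flag is False when the
    normalized entropy of the softmax distribution exceeds max_entropy."""
    p = softmax(logits)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1)
    normalized = entropy / np.log(p.shape[-1])
    return p.argmax(axis=-1), bool(normalized <= max_entropy)

print(trusted_prediction(np.array([8.0, 0.1, 0.2])))   # confident: trusted
print(trusted_prediction(np.array([1.0, 0.9, 1.1])))   # ambiguous: not trusted

A real safety case would combine such a confidence gate with the traceability and fault-tolerance patterns the abstract describes, rather than relying on a single score.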
