Powered by OpenAIRE graph

EXIDA DEVELOPMENT SRL
Country: Italy
3 Projects
  • Funder: European Commission
    Project Code: 101225866
    Funder Contribution: 5,999,510 EUR

    SHASAI targets the intersection of HW/SW security and AI-based high-risk systems, aiming to enhance the security, resilience, automated testing, and continuous assessment of AI systems. The rising interest in these systems makes them attractive targets for threat actors because of their complexity and valuable data. Securing AI systems means safeguarding AI models, datasets, and dependencies, as well as securing the underlying HW/SW infrastructure. SHASAI takes a holistic approach to AI system security across all lifecycle stages. At requirement definition, SHASAI provides an enhanced risk-assessment methodology for secure and safe AI. At design, SHASAI will propose secure and safe design patterns at the SW and HW levels to achieve trustworthy AI systems. During implementation, SHASAI provides tooling for a secure supply chain by analyzing vulnerabilities in SW/HW dependencies, detecting poisoned data and backdoors in pretrained models, scanning for software vulnerabilities, hardening hardware platforms, and safeguarding intellectual property. At evaluation, SHASAI offers a virtual testing platform with automated attack and defense test suites to assess security against AI- and infrastructure-specific threats. In operation, AI-enhanced security services continuously monitor the system, detect anomalies, and mitigate attacks using AI firewalls and attestation methods, ensuring availability and integrity. The feasibility of SHASAI's methods and tools will be demonstrated in three real scenarios: (1) agrifood industry: cutting machines; (2) health: eye-tracking systems in augmentative and alternative communication; (3) automotive: tele-operated last-mile delivery vehicles. Their heterogeneity and complementarity maximize the transferability of the solutions.
    SHASAI will contribute to scientific, techno-economic, and societal impacts, as it aligns with the CRA, the EU AI Act, NIS2, and the CSA, sharing and commercializing methods and tools to ensure trustworthy AI components.
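    The abstract mentions detecting poisoned data in training sets. SHASAI's actual tooling is not described here; as a hypothetical illustration only, one common family of screening techniques flags samples that lie far from the rest of their class in feature space. A minimal toy sketch (real detectors such as activation clustering or spectral signatures operate on model activations instead):

    ```python
    # Hypothetical sketch of centroid-distance screening for poisoned
    # training samples (illustrative only; not SHASAI's tooling).
    from math import dist

    def flag_outliers(samples, threshold=2.0):
        """Return indices of samples whose distance from the class
        centroid exceeds `threshold` times the mean distance."""
        n = len(samples)
        dims = len(samples[0])
        # Per-dimension mean gives the class centroid.
        centroid = [sum(s[d] for s in samples) / n for d in range(dims)]
        dists = [dist(s, centroid) for s in samples]
        mean_d = sum(dists) / n
        return [i for i, d in enumerate(dists) if d > threshold * mean_d]

    # A tight cluster near the origin plus one far-away (suspect) point:
    data = [[0, 0], [0.1, 0.1], [-0.1, 0], [0, -0.1], [0.1, -0.1], [10, 10]]
    print(flag_outliers(data))  # flags only the last sample: [5]
    ```

    In practice the threshold and feature space would be tuned per model and dataset; this sketch only conveys the outlier-scoring idea.
    
    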

  • Funder: European Commission
    Project Code: 101139892
    Overall Budget: 38,208,300 EUR
    Funder Contribution: 11,006,200 EUR

    EdgeAI-trust aims to develop a domain-independent architecture for decentralized edge AI, along with HW/SW edge AI solutions and tools that enable fully collaborative AI and learning at the edge. These edge AI technologies address key challenges faced by Europe's industrial and societal sectors, such as energy efficiency, system complexity, and sustainability. EdgeAI-trust will deliver large-scale edge AI solutions that support interoperability, upgradeability, reliability, safety, security, and societal acceptance, with a focus on explainability and robustness. Toolchains will provide standardized interfaces for developing, optimizing, and validating edge AI solutions in heterogeneous systems. The generic results will be instantiated for automated vehicles, production, and agriculture, thus offering innovation potential not only in the generic HW/SW technologies and tools, but also in the three target domains. These technological innovations are complemented by business strategies and community building, ensuring widespread uptake of the innovations in Europe. EdgeAI-trust will establish sustainable impact by building open edge AI platforms and ecosystems, with a focus on standardization, supply chain integrity, environmental impact, benchmarking frameworks, and support for open-source solutions. The consortium consists of major suppliers and OEMs encompassing a broad range of application domains, supported by leading research and academic organizations. By embracing the opportunity to specialize in edge AI, Europe can maintain its position in the global context, especially as the technology aligns with decentralized and privacy-driven European policy. Furthermore, as AI is closely connected with the Green Deal, the project can provide solutions for environmental issues. Ultimately, the project will enable AI to be connected with other strong sectors and industries, improving the innovation process and decision-making in Europe.

  • Funder: European Commission
    Project Code: 101069595
    Overall Budget: 3,891,880 EUR
    Funder Contribution: 3,891,880 EUR

    Deep Learning (DL) techniques are key to most future advanced software functions in Critical Autonomous AI-based Systems (CAIS) in cars, trains, and satellites. Hence, those CAIS industries depend on their ability to design, implement, qualify, and certify DL-based software products under bounded effort and cost. There is a fundamental gap between the Functional Safety (FUSA) requirements of CAIS and the nature of the DL solutions needed to satisfy those requirements. The lack of transparency (mainly explainability and traceability) and the data-dependent, stochastic nature of DL software clash with the need for deterministic, verifiable, pass/fail test-based software solutions for CAIS. SAFEXPLAIN tackles this challenge by providing a novel and flexible approach to allow the certification, and hence adoption, of DL-based solutions in CAIS by (1) architecting transparent DL solutions that allow explaining why they satisfy FUSA requirements, with end-to-end traceability, with specific approaches to explain whether predictions can be trusted, and with strategies to reach (and prove) correct operation, in accordance with certification standards. SAFEXPLAIN will also (2) devise alternative and increasingly complex FUSA design safety patterns for different DL usage levels (i.e. with varying safety requirements) that will allow using DL in any CAIS functionality, for varying levels of criticality and fault tolerance. SAFEXPLAIN brings together a highly skilled and complementary consortium to tackle this endeavor, including three research centers, RISE (AI expertise), IKR (FUSA expertise), and BSC (platform expertise), and three CAIS case studies: automotive (NAV), space (AIKO), and railway (IKR). SAFEXPLAIN's DL-based solutions are assessed in an industrial toolset (EXI). Finally, to prove that the transparency levels are fully compliant with FUSA, the solutions are reviewed by internal certification experts (EXI) and by external experts subcontracted for an independent assessment.
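    The abstract refers to approaches that "explain whether predictions can be trusted". SAFEXPLAIN's actual strategies are not specified here; purely as a hypothetical illustration, a common baseline in selective prediction is a confidence gate that abstains when the model's top softmax probability falls below a threshold, letting a safety monitor trigger a fallback behaviour instead:

    ```python
    # Hypothetical confidence-gating sketch (illustrative only; not
    # SAFEXPLAIN's method). The gate abstains on low-confidence
    # predictions so a deterministic fallback can take over.
    from math import exp

    def softmax(logits):
        # Subtract the max for numerical stability before exponentiating.
        m = max(logits)
        exps = [exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def gated_prediction(logits, threshold=0.9):
        """Return (class_index, confidence), or (None, confidence)
        to signal abstention when confidence is below `threshold`."""
        probs = softmax(logits)
        conf = max(probs)
        label = probs.index(conf)
        return (label, conf) if conf >= threshold else (None, conf)

    print(gated_prediction([5.0, 0.0, 0.0]))  # confident: class 0
    print(gated_prediction([1.0, 0.9, 0.8]))  # abstains: (None, ...)
    ```

    Softmax confidence alone is known to be poorly calibrated on out-of-distribution inputs, which is one reason certification-oriented work pairs it with traceability and proof obligations rather than relying on it directly.
    
    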

