Powered by OpenAIRE graph

CISPA - Helmholtz-Zentrum für Informationssicherheit gGmbH

Country: Germany


20 Projects, page 1 of 4
  • Funder: European Commission; Project Code: 101041207
    Overall Budget: 1,482,690 EUR; Funder Contribution: 1,482,690 EUR

    Communication efficiency is one of the central challenges for cryptography. Modern distributed computing techniques work on large quantities of data, and critically depend on keeping the amount of information exchanged between parties as low as possible. However, classical cryptographic protocols for secure distributed computation cause a prohibitive blow-up of communication in this setting. Laconic cryptography is an emerging paradigm in cryptography aiming to realize protocols for complex tasks with a minimal amount of interaction and a sub-linear overall communication complexity. If we manage to construct truly efficient laconic protocols, we could add a cryptographic layer of protection to modern data-driven techniques in computing. My initial results in laconic cryptography did not just demonstrate the potential of this paradigm, but proved to be a game-changer in solving several long-standing open problems in cryptography, e.g., enabling me to construct identity-based encryption from weak assumptions. However, the field faces two major challenges: (a) Current constructions employ techniques that are inherently inefficient. (b) The most advanced notions in laconic cryptography are only known from very specific combinations of assumptions, and are therefore just one cryptanalytic breakthrough away from becoming void. This project will make a leap forward in both challenges. I will systematically address these challenges in a work program which pursues the following objectives: (i) Develop new tools and mechanisms to realize crucial cryptographic primitives in a compact way. (ii) Design efficient protocols for advanced laconic functionalities which sidestep the need for inherently inefficient low-level techniques and widen the foundation of underlying assumptions. (iii) Strengthen the conceptual bridge between laconic cryptography and cryptographically secure obfuscation, transferring new techniques and ideas between these domains.

  • Funder: European Commission; Project Code: 101123525
    Funder Contribution: 150,000 EUR

    In large distributed computing systems there is a significant performance advantage when all communication can be carried out synchronously. Current synchronization techniques result in long communication latencies as systems scale up in size and operating frequency. We identify two key application areas in which this is an immediate and pressing challenge:

    1. Large Networks-on-Chip (NoCs) do not operate synchronously, despite the relative ease of design and low-latency communication this would offer.
    2. Despite issues of security and availability, current cellphone networks rely on Global Navigation Satellite Systems (GNSS) such as GPS to obtain tightly synchronized time.

    We propose the application of Gradient Clock Synchronization (GCS) as a novel clock synchronization method for these applications. GCS minimizes the time offset between close-by parts of the system. This results in much smaller offsets between such parts than standard techniques, which aim only at minimizing the maximum global offset. Since in the above application settings it is the offset between close-by parts that matters, this enables large improvements in performance. In particular, we can eliminate the issues faced by NoC designs and cellphone networks pointed out above. The main objectives of the proposed PoC project are:

    - Development, fabrication, and evaluation of an ASIC demonstrator for SoC and NoC clocking.
    - Development and evaluation of a secure wireless implementation of the GCS algorithm.
    - Patent protection of the generated intellectual property.
    - Finding industrial pilot partners for development of products in follow-up projects.
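The gradient property, keeping close-by nodes tightly aligned rather than only bounding the global offset, can be illustrated with a toy diffusive-averaging simulation on a ring of nodes. This is a simplified stand-in, not the GCS algorithm itself; the step size, topology, and round count are arbitrary illustrative choices:

```python
import random

def sync_step(clocks: list[float]) -> list[float]:
    """One round: each node nudges its clock toward its neighbours' average.
    Toy model of the gradient idea -- only local offsets are acted on."""
    n = len(clocks)
    new = clocks[:]
    for i in range(n):
        left, right = clocks[(i - 1) % n], clocks[(i + 1) % n]
        new[i] += 0.5 * ((left + right) / 2 - clocks[i])
    return new

def max_local_offset(clocks: list[float]) -> float:
    """Largest clock difference between any two adjacent nodes."""
    n = len(clocks)
    return max(abs(clocks[i] - clocks[(i + 1) % n]) for i in range(n))

random.seed(0)
clocks = [random.uniform(0.0, 10.0) for _ in range(16)]
before = max_local_offset(clocks)
for _ in range(50):
    clocks = sync_step(clocks)
assert max_local_offset(clocks) < before   # neighbour offsets shrink
```

The quantity driven down is the offset between adjacent nodes, which is exactly what matters for NoC clocking, rather than the worst-case offset across the whole system.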

  • Funder: European Commission; Project Code: 101045669
    Overall Budget: 1,998,850 EUR; Funder Contribution: 1,998,850 EUR

    In parallel with the ongoing digitization, computer security has become an increasingly important and urgent challenge. In particular, the sound and robust implementation of complex software systems is still not well understood in practice, as evidenced by the steady stream of successful attacks observed in the wild. The current state of the art in software security consists of solutions that are often technically sound, but do not provide operational security in practice. With the Resilient and Sustainable Software Security (RS³) project, we propose a compelling research agenda to fundamentally change this situation by developing novel countermeasures at different system levels that improve security. On the one hand, the system must be "resilient" against entire classes of attack vectors. On the other hand, the system must be "sustainable", i.e., it must be able to maintain its security at least over its design lifetime and possibly even adapt over time. Our work plan addresses the problem from four different angles by (i) developing novel software testing strategies that enable accurate and efficient vulnerability discovery, (ii) designing secure compiler chains that embed security properties during the compilation phase that can be enforced at runtime, (iii) devising robust mechanisms that mitigate and patch advanced attacks, and (iv) investigating how hardware changes for open-source hardware (e.g., RISC-V processors) can improve the efficiency and accuracy of all of these goals. We expect to develop innovative methods and fundamental principles to build, test, and patch complex systems securely and efficiently. This holistic approach covers multiple layers of the computing stack, and each aspect has the potential to improve security significantly. The main success criterion will be our ability to perform a security analysis of a complex system an order of magnitude more effectively and efficiently than with current state-of-the-art methods.

  • Funder: European Commission; Project Code: 101116395
    Overall Budget: 1,499,280 EUR; Funder Contribution: 1,499,280 EUR

    Deep learning continues to achieve impressive breakthroughs across disciplines and is a major driving force behind a multitude of industry innovations. Most of its successes are achieved by increasingly large neural networks that are trained on massive data sets. Their development inflicts costs that are only affordable by a few labs and prevent global participation in the creation of related technologies. The huge model sizes also pose computational challenges for algorithms that aim to address issues with features that are critical in real-world applications like fairness, adversarial robustness, and interpretability. The high demand of neural networks for vast amounts of data further limits their utility for solving highly relevant tasks in biomedicine, economics, or natural sciences. To democratize deep learning and to broaden its applicability, we have to find ways to learn small-scale models. To this end, we will promote sparsity at multiple stages of the machine learning pipeline and identify models that are scalable, resource- and data-efficient, robust to noise, and provide insights into problems. To achieve this, we need to overcome two challenges: the identification of trainable sparse network structures and the de novo optimization of small-scale models. The solutions that we propose combine ideas from statistical physics, complex network science, and machine learning. Our fundamental innovations rely on the insight that neural networks belong to a class of cascade models that we have made analytically tractable on random graphs. Advancing our derivations will enable us to develop novel parameter initialization, regularization, and reparameterization methods that will compensate for the missing implicit benefits of overparameterization for learning. The significant reduction in model size achieved by our methods will help unlock the full potential of deep learning to serve society as a whole.
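A standard baseline for exposing sparse network structures, related in spirit to but not necessarily the project's own methods, is magnitude pruning: zeroing out the smallest-magnitude weights so that only a small trainable skeleton remains. A minimal NumPy sketch:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries, keeping a (1 - sparsity)
    fraction of the weights. Returns a new array; input is unchanged."""
    k = int(weights.size * sparsity)          # number of entries to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold        # keep strictly larger entries
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
sparse_w = magnitude_prune(w, 0.9)
# Roughly 90% of the entries are now exactly zero; what survives is the
# kind of sparse structure whose trainability the project investigates.
```

In practice such masks are applied before or during training (as in lottery-ticket-style experiments), not only after it; this snippet shows the masking step in isolation.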

  • Funder: European Commission; Project Code: 101170430
    Overall Budget: 2,000,000 EUR; Funder Contribution: 2,000,000 EUR

    Machine learning models are growing larger and more complex, making training increasingly resource-demanding. Concurrently, our world, and hence the training data, is perpetually evolving. This requires continual model updating or retraining to address changing training data. Presently, the most reliable way to handle such distribution shifts is to retrain models from scratch on new training data. This results in substantial resource usage, increased CO2 footprint, elevated energy consumption, and limits decisive ML progress to large-scale industry players. Imagine a world in which models help each other learn. When the data distribution changes, a complete retraining of models could be avoided if the new model could learn from the outdated one by using reliable and provably effective methods. Furthermore, the convention of relying on large, versatile monolithic models could then give way to a consortium of smaller specialized models, with each contributing its specific domain knowledge when needed. By encouraging this form of decentralization, we could reduce resource consumption as the individual components can be updated independently of each other. Drawing on groundbreaking research in distributed ML model training, CollectiveMinds aspires to design adaptable ML models. These models can effectively manage updates in training data and task modifications, while also enabling efficient knowledge exchange across various models, thereby fostering widescale collaborative learning and constructing a sustainable framework for collaborative machine intelligence. This initiative could revolutionize sectors like healthcare, where there is limited training data, and trustworthy AI that demands guarantees on data ownership and control. Furthermore, it could foster improved collaborative research within the realm of science. CollectiveMinds embodies a significant paradigm shift towards democratizing ML, focusing on cooperative intellectual efforts.
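Model-to-model knowledge transfer of the kind sketched here is commonly illustrated with knowledge distillation: a new model is trained against the softened output distribution of an existing one. The loss and temperature below are standard textbook choices, not the project's methods:

```python
import numpy as np

def softmax(z: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Numerically stable softmax with a temperature parameter."""
    z = z / temperature
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits: np.ndarray,
                      teacher_logits: np.ndarray,
                      temperature: float = 2.0) -> float:
    """Cross-entropy of the student against the teacher's softened
    outputs -- a toy form of learning from an outdated model."""
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature))
    return float(-np.sum(p_teacher * log_p_student, axis=-1).mean())

teacher = np.array([[4.0, 1.0, 0.5]])
aligned = np.array([[3.8, 1.1, 0.4]])     # student agreeing with teacher
opposed = np.array([[0.5, 1.0, 4.0]])     # student disagreeing
assert distillation_loss(aligned, teacher) < distillation_loss(opposed, teacher)
```

Minimizing this loss pulls the new model toward the old one's behavior without access to the original training data, which is the resource-saving step the project's vision builds on.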


