Powered by OpenAIRE graph

EURECOM

Country: France
70 Projects, page 1 of 14
  • Funder: French National Research Agency (ANR) Project Code: ANR-18-CE46-0002
    Funder Contribution: 284,286 EUR

    Information and Communication Technologies (ICT) are constantly producing advancements that translate into a variety of societal changes, including improvements to the economy, better living conditions, access to education, well-being, and entertainment. The widespread use and growth of ICT, however, pose a huge threat to the sustainability of this development, given that the energy consumption of current computing devices is growing at an uncontrolled pace. Within ICT, machine learning is currently one of the fastest-growing fields, given its pervasive adoption in smart cities, recommendation systems, finance, social media analysis, communication systems, and transportation. Apart from isolated application-specific attempts, the only general solution to the sustainability of computations in machine learning is Google's Tensor Processing Unit (TPU), which was opened to general use through a cloud service in mid-February. This is an interesting and effective way to push transistor-based technology toward sustainable computing for machine learning, and it is inspiring other companies and start-ups to follow this trend.

    ECO-ML's ambition is to radically change this picture and to propose a novel angle of attack on the sustainability of computations in machine learning. The starting point of ECO-ML is the realization that current approaches to inference and prediction with Gaussian Processes (GPs) and Deep Gaussian Processes (DGPs) are competitive with popular Deep Neural Networks (DNNs), while offering attractive flexibility and quantification of uncertainty. In the last year, we have come across the work of the French company LightOn on novel Optical Processing Units (OPUs). OPUs perform a specific matrix operation in hardware by exploiting the scattering properties of light, so that in practice this operation happens at the speed of light. Not only is this the case, but OPUs also consume far less power than current computing devices, while operating on large Gaussian random matrices, orders of magnitude larger than those current devices can handle. GP and DGP models are perfect candidates to benefit from the principles behind OPUs, but advances in the design and inference of these models are needed for this to become a reality.

    We expect to produce and release the first implementation of GPs and DGPs using OPUs, and to demonstrate that this leads to considerable acceleration of model training and prediction while reducing power consumption with respect to the state of the art. We expect to advance the state of the art in GP and DGP modeling and inference by developing novel model approximations and inference schemes tailored to OPU computing, which will also trigger advances in the theory of approximation of GPs and DGPs. Finally, we expect to showcase a variety of modeling applications in the environmental and life sciences, demonstrating that our approach achieves performance competitive with the state of the art, while providing sound quantification of uncertainty and fast model training and prediction in a sustainable way. Just as Graphics Processing Units (GPUs) enabled the deep learning revolution, we envisage that OPUs will be a key element in making GPs the preferred choice for future large-scale modeling and accurate quantification of uncertainty.
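    The random matrix product that an OPU computes optically is the same primitive behind random-feature approximations of GP kernels. The following is a minimal illustrative sketch, with plain NumPy standing in for the optical hardware; nothing here is taken from the ECO-ML implementation or any LightOn API:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_features(X, W, b):
    # Random Fourier features: cos(X @ W + b) approximates an RBF kernel.
    # The product X @ W is the operation an OPU performs in optics.
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

# Toy 1-D regression data
X = np.linspace(-3, 3, 50)[:, None]
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(50)

D = 200                                  # number of random features
W = rng.standard_normal((1, D))          # Gaussian random matrix (the OPU's role)
b = rng.uniform(0, 2 * np.pi, D)

Phi = random_features(X, W, b)           # (50, D) feature matrix
noise = 0.1 ** 2

# Bayesian linear regression in feature space ~ approximate GP regression
A = Phi.T @ Phi + noise * np.eye(D)
w_mean = np.linalg.solve(A, Phi.T @ y)
y_pred = Phi @ w_mean

rmse = float(np.sqrt(np.mean((y_pred - y) ** 2)))
print("train RMSE:", rmse)
```

    The cost of the exact GP solve scales cubically in the number of data points; with random features it scales cubically only in D, and the dominant matrix product is exactly the step an OPU can offload.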

  • Funder: French National Research Agency (ANR) Project Code: ANR-23-CE39-0005
    Funder Contribution: 245,880 EUR

    Our speech is collected by various devices with voice-driven interactive services and transmitted over unsecured public networks to be stored and processed on vulnerable cloud-based infrastructure. With always-listening functionality, these devices raise ongoing energy-consumption challenges and significant privacy concerns, given the potential for data interception by malicious actors. This scenario is particularly concerning since speech data is inherently personal: it contains far more information than most people realise and can be misused for nefarious purposes. In light of the above, ensuring that voice data remain private and minimising energy consumption are critical and urgent issues that require immediate action. Ultimately, P-SPIKE will realise this vision in the context of speaker verification in realistic conditions, maintaining individual privacy by harnessing the potential of energy-efficient spiking neural networks for processing speech signals, a largely unexplored research domain with immense potential.
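    The energy argument for spiking neural networks rests on event-driven computation: a neuron only emits (and downstream units only process) discrete spikes. A minimal leaky integrate-and-fire neuron illustrates the idea; this is a generic textbook model, not P-SPIKE's actual architecture, and all parameter values are illustrative:

```python
import numpy as np

def lif_spikes(current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: integrate input, spike at threshold.
    Spikes are sparse events, which is the source of SNN energy savings."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(current):
        v += dt * (-v / tau + i_t)   # leaky integration of input current
        if v >= v_thresh:
            spikes.append(t)         # emit a spike event
            v = v_reset              # reset membrane potential
    return spikes

# A constant drive above threshold produces a regular, sparse spike train.
times = lif_spikes(np.full(100, 0.1))
print(len(times), "spikes, first at t =", times[0])
```

    In a speaker-verification pipeline, features extracted from speech would drive layers of such neurons, and energy is spent only when spikes occur, unlike the dense matrix multiplications of conventional networks.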

  • Funder: French National Research Agency (ANR) Project Code: ANR-21-CE23-0028
    Funder Contribution: 200,524 EUR

    Human history is composed of a continuous flow of events. Each event can influence subsequent ones and contribute to the evolution of human knowledge. Knowledge Graphs try to encode information about facts and events, but often fall short when representing the temporal evolution of this knowledge and tracking cause-effect flows. kFLOW aims to propose strategies for representing, extracting, predicting, and using information about event relationships and knowledge evolution. To achieve these goals, a Knowledge Graph of interconnected events and facts will be built. This graph will be populated and exploited through specialised strategies for data modelling, information extraction, link prediction, incorrect-triple detection, and automatic fact-checking.
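    A knowledge graph of interconnected events can be pictured as subject-predicate-object triples in which some predicates encode cause-effect links. The sketch below is purely illustrative; the entity names and the `ex:causes` predicate are invented for the example and are not kFLOW's actual schema:

```python
from collections import defaultdict

# Toy event knowledge graph: (subject, predicate, object) triples,
# with "ex:causes" edges encoding cause-effect flow between events.
triples = [
    ("ev:StormX", "rdf:type",  "ex:Storm"),
    ("ev:FloodY", "rdf:type",  "ex:Flood"),
    ("ev:StormX", "ex:causes", "ev:FloodY"),
    ("ev:FloodY", "ex:causes", "ev:EvacZ"),
]

graph = defaultdict(list)
for s, p, o in triples:
    graph[(s, p)].append(o)

def downstream(event, depth=10):
    """Follow ex:causes edges to list events affected by `event`."""
    out = []
    for o in graph[(event, "ex:causes")]:
        out.append(o)
        if depth:
            out += downstream(o, depth - 1)
    return out

print(downstream("ev:StormX"))   # cause-effect flow from the storm
```

    Link prediction and incorrect-triple detection then operate over exactly this kind of structure: proposing missing `ex:causes` edges or flagging triples inconsistent with the rest of the graph.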

  • Funder: French National Research Agency (ANR) Project Code: ANR-22-CE45-0015
    Funder Contribution: 232,668 EUR

    I-VESSEG aims to close the gap that hinders the use of 3D vessel segmentation tools to assist clinicians in angiographic clinical routines. The project will build on learning-based techniques and address their main limitations: the need for large, fully annotated training sets and poor generalization. I-VESSEG will use interactive learning to allow continual training from weak annotations provided by the user as data become available. To facilitate access to training data, I-VESSEG will be formulated in a collaborative federated learning paradigm that enables learning without sensitive data sharing or centralized storage. Finally, by relying on domain adaptation and generalization techniques, I-VESSEG will apply transparently to any cerebrovascular imaging modality. Through a unique collaboration with a network of international excellence partners in neuroimaging, the translational value of the project will be demonstrated on two use cases of primary societal impact: 1) the diagnosis of multiple sclerosis; and 2) the detection of intracranial stenosis, a risk factor for stroke.
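    The federated learning paradigm mentioned above keeps patient images at each site and shares only model parameters. A minimal FedAvg-style aggregation step illustrates the principle; this is a generic sketch under simplifying assumptions (models reduced to weight vectors), not the I-VESSEG training protocol:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine locally trained model weights,
    weighted by each site's dataset size, without moving the data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hospitals train locally; only weight vectors leave each site.
w1, w2, w3 = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
global_w = fedavg([w1, w2, w3], [100, 100, 200])
print(global_w)   # size-weighted average of the three local models
```

    In a full round, the server broadcasts `global_w` back to the sites, each site resumes training on its private annotations, and the cycle repeats.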

  • Funder: French National Research Agency (ANR) Project Code: ANR-15-CE23-0004
    Funder Contribution: 306,653 EUR

    Wireless communications are currently exhibiting a giant leap in volume and societal impact, but they also face a massive environmental challenge: a carbon footprint that matches that of global aviation and that will triple by 2020. This challenge has spurred worldwide research to produce radically new, power-efficient, high-performance, environmentally friendly communication technologies. However, these efforts have encountered two seemingly insurmountable bottlenecks: the bottleneck of computational complexity, corresponding to the need for algorithms that require extreme computing resources, and the bottleneck of feedback, corresponding to the need for equally idealistic feedback mechanisms that must disseminate massive amounts of overhead information about the fluctuating state of each link in the network.

    These bottlenecks drive our theoretical vision: we will provide a never-before-attempted exploration of the crucial interdependencies between computational complexity, feedback, and performance in wireless communications. They also drive our technological vision: we will develop algorithms for a new class of mobile-user devices that can participate in properly gathering and disseminating feedback (at the right place and time) as well as in computing solutions to outsourced algorithmic tasks across the network, an effort we term "outsourcing the surgical insertion of bidirectional bits and flops across the network", which aims to reduce computational complexity and improve performance.

    We will take a novel approach, which drives our vision. A recent result of ours revealed the surprising fact that, in a simple point-to-point setting, a single bit of feedback from the receiver back to the transmitter (properly placed in time, and properly representing the predicted flop count) massively reduced the computational complexity of transceiver algorithms. This surprising reduction was traced back to the newly found ability of feedback to "skew" the statistics of the accumulation of computational load without negatively skewing the statistics defining performance. We will expand this idea to networks with more than one pair of nodes, with different topologies and different users that assist in the computations of the network. In the process we will explore uncharted territory by addressing questions such as: How should this feedback (timing, location, message) change in interference channels with interesting topologies, or even in larger (or massive) MIMO broadcast channels? What happens if feedback is abstracted to involve an interactive back-and-forth between "transmitters" and "receivers"? What is the best way to surgically distribute feedback bits across the network users in order to reduce computational cost while maintaining or even improving performance? What if we then turned this idea around, so that the roles of feedback and complexity are reversed, and instead of surgically inserting feedback bits to reduce complexity, we carefully inserted flops (computational capabilities across the users) to reduce the need for feedback?

    Naturally these key ideas, just like computational complexity, feedback, and performance, are intertwined and will be explored jointly. Uncovering the crucial and largely unexplored complexity-feedback-performance interdependencies will offer guiding principles for merging fog (decentralized) and cloud (centralized) ideas, towards hybrid solutions that better traverse the complexity-feedback-performance triangle by surgically inserting bidirectional bits and flops across the network nodes.

