Nokia Bell Labs

11 Projects, page 1 of 3
  • Funder: UK Research and Innovation
    Project Code: EP/Y035925/1
    Funder Contribution: 127,041 GBP

    The project will demonstrate the robustness of earable audio-based detection of physiological signals. Project ERC EAR was the first to demonstrate that in-ear microphones can be used to detect activity and heart rate, yielding promising precision under intense motion and superior performance with respect to photoplethysmography (PPG) or inertial measurement units (IMUs). Microphones are already embedded in earables for audio-related purposes, unlike PPG sensors or IMUs, which fulfil no other function in these devices, and they are especially inexpensive both in hardware and in sensing computation and energy needs. There are currently no solutions on the market which use audio for physiological signal detection, and the PI is the leading expert in the use of audio for physiology and diagnostics. Specifically, the project aims at i) improving and demonstrating the in-the-wild feasibility of in-ear audio sensing for detecting human activity, heart rate and respiration; ii) integrating the technology into a commercial-grade prototype; iii) performing a robustness validation of the technology (and prototype); iv) innovating on the algorithms to improve robustness and system performance; and v) pursuing exploitation and dissemination of the prototype. The high payoff of the project is reaching the two-billion-device earables market, bringing physiological monitoring into the hands of the world's population affordably and accurately. There are risks to overcome in achieving these objectives, but they are mitigated by the strong track record of the PI and the team in the area and by the industrial support already established.

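    To illustrate the kind of signal processing such a system involves, the sketch below estimates heart rate from a window of in-ear audio by band-pass filtering to a plausible cardiac range and picking the dominant periodicity with an autocorrelation. It is a minimal, hypothetical example: the function names, the filter band, and the assumption that the audio has already been decimated to a low sample rate are ours, not the project's published method.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def estimate_heart_rate_bpm(audio, fs, low_hz=0.8, high_hz=3.3):
        """Estimate heart rate from a window of in-ear audio.

        Assumes `audio` has been decimated to a modest rate (e.g. fs = 100 Hz);
        the 0.8-3.3 Hz band corresponds to roughly 48-198 beats per minute.
        """
        # Band-pass filter to the assumed cardiac band.
        b, a = butter(2, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
        pulse = filtfilt(b, a, audio)

        # Autocorrelation of the zero-mean signal; the strongest peak within
        # the admissible lag range gives the beat-to-beat period.
        pulse = pulse - pulse.mean()
        ac = np.correlate(pulse, pulse, mode="full")[len(pulse) - 1:]
        min_lag, max_lag = int(fs / high_hz), int(fs / low_hz)
        lag = min_lag + int(np.argmax(ac[min_lag:max_lag]))
        return 60.0 * fs / lag
    ```
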
  • Funder: UK Research and Innovation
    Project Code: EP/Z53447X/1
    Funder Contribution: 2,058,430 GBP

    Wearable devices have become pervasive and generate a wealth of data indicative of our behaviour and physiology. This offers an unprecedented and detailed window onto human wellbeing and fitness, as well as the potential for scalable public health and clinical monitoring tools. Recently, hearables have started to be used for a variety of activities, ranging from traditional music listening to more advanced fitness activities (such as running). During and following the pandemic, individuals have commonly used them for virtual meetings too, and emerging companies are starting to market the devices as comfortable sleeping aids. Placed on a person's head, hearable devices also have higher potential for detecting stable physiological signals than watches, as arm movements are very pronounced and often affect wrist-worn sensors, especially during full-body movement, and hearables offer two channels (left and right).

    Yet, while hearable devices are on the market in some form, their functions are generally still restricted to the transmission of audio and speech. Their ability to detect physiology, especially under motion and considering macro and micro movements of the head and face, is unproven. Additionally, they are not treated as standalone devices: they usually depend on smartphones for further computation and communication. Finally, like for many other wearables, the precious data they generate usually flows to commercial servers for analysis, potentially exposing users to privacy invasion. In general, there have been questions about the precision of wearable data concerning our wellbeing and health: the sensors on these devices are often imprecise, and various factors make inference over the data hard (movement, variety of use, heterogeneity of human characteristics, etc.).

    In this proposal I plan to advance research on hearable sensing in fundamental ways, to enable these devices to become truly reliable, trustworthy and privacy-aware means of detecting our activity, fitness and health. The potential of such technology is immense: hearables are small, and some versions are already very affordable, certainly more affordable than clinical diagnostics or fitness monitoring equipment. They are also more portable, and people tend to wear them throughout the day (and sometimes at night, in the case of sleep hearables): this means they can sense users continuously, generating precious longitudinal data that would change the way we study personalized fitness as well as clinical disease progression, onset and recovery. The scalability enabled by such technology means that large populations can be reached without compromising the temporal granularity of the data (i.e., the almost continuous monitoring of individuals), enabling public health and epidemiological studies to scale. Some of the findings of this work will impact research in wearables and wearable data analysis in general, opening the door to a wide range of applications. More precisely, the programme will innovate on the types of sensors which can be used to sense activity and health, the machine learning methods applied to the data, and the related systems aspects, including the ability to run models on device and the trade-offs of local versus remote computation. HearFit will also conduct extensive user studies in the context of fitness and health through collaborations with sport scientists and clinicians.

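    One concrete advantage noted above, the two channels of a hearable, could be exploited by weighting each earbud's estimate by a signal-quality score so that the channel less corrupted by motion dominates. The sketch below is purely illustrative; the quality heuristic and function names are our assumptions, not HearFit's design.

    ```python
    import numpy as np

    def spectral_quality(x):
        """Crude quality score: a clean periodic signal has a peaky spectrum,
        so score the fraction of power in the strongest spectral component."""
        spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
        return float(spec.max() / (spec.sum() + 1e-12))

    def fuse_estimates(bpm_left, sig_left, bpm_right, sig_right):
        """Quality-weighted average of the left and right channel estimates."""
        wl, wr = spectral_quality(sig_left), spectral_quality(sig_right)
        return (wl * bpm_left + wr * bpm_right) / (wl + wr)
    ```
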
  • Funder: UK Research and Innovation
    Project Code: EP/X035085/1
    Funder Contribution: 522,780 GBP

    AI/ML systems are becoming an integral part of user products and applications, as well as the main revenue driver for many organizations. This has shifted the focus toward the edge AI paradigm, since edge devices possess the data necessary for training models. The main edge AI approaches either coordinate the training rounds and exchange model updates via a central server (i.e., federated learning), split the model training task between edge devices and a server (i.e., split learning), or coordinate the model exchange among the edge devices via gossip protocols (i.e., decentralized training). Due to highly heterogeneous learners, configurations and environments, as well as significant synchronization challenges, these approaches are ill-suited for distributed edge learning at scale: they fail to scale to large numbers of learners and produce low-quality models after prolonged training times. It is imperative for modern applications to rely on a system providing timely and accurate models. This project addresses this gap by proposing a ground-up transformation of decentralized learning methods. Similar to Uber's delivery services, the goal of KUber is to build a novel distributed architecture to facilitate the exchange and delivery of acquired knowledge among the learning entities. In particular, we seize an opportunity to decouple the training of a common model from the sharing of learned knowledge. This is made possible by advances in the AI/ML accelerators embedded in edge devices and by high-throughput, low-latency 5G/6G technologies. KUber will revolutionize the use of AI/ML methods in daily-life applications and open the door to flexible, scalable and efficient collaborative learning between users, organizations and governments.

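    To make the decentralized-training baseline concrete, the toy sketch below shows gossip-style learning of the kind KUber sets out to improve upon: each node takes local gradient steps on its own objective, and random peer pairs periodically average their parameters so that knowledge spreads without a central server. Everything here (the Node class, the quadratic objective, the schedule) is an illustrative assumption, not the project's architecture.

    ```python
    import random
    import numpy as np

    class Node:
        """One edge device holding its own copy of the model parameters."""
        def __init__(self, dim, rng):
            self.w = rng.normal(size=dim)

        def local_step(self, grad_fn, lr=0.1):
            # One step of purely local training on this node's own objective.
            self.w -= lr * grad_fn(self.w)

        def gossip(self, peer):
            # Pairwise averaging: both peers move to the midpoint of their
            # parameters, spreading learned knowledge peer-to-peer.
            avg = (self.w + peer.w) / 2.0
            self.w, peer.w = avg.copy(), avg

    rng = np.random.default_rng(0)
    nodes = [Node(dim=4, rng=rng) for _ in range(8)]
    target = rng.normal(size=4)             # shared optimum for the toy task
    grad = lambda w: 2.0 * (w - target)     # gradient of ||w - target||^2

    for step in range(50):
        for node in nodes:
            node.local_step(grad)
        a, b = random.sample(nodes, 2)      # a random peer pair exchanges models
        a.gossip(b)
    ```
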
  • Funder: UK Research and Innovation
    Project Code: EP/Y016378/1
    Funder Contribution: 608,960 GBP

    Respiratory tract infections (RTIs) are the most common cause of illness; this was true even before the COVID-19 pandemic, and they are the most frequent reason patients consult a GP. The illness they cause is usually mild, but in some cases it can become severe and occasionally lead to death. Around half of all antibiotic prescriptions are for RTIs. Most people with an RTI get better without needing treatment, but we need to notice quickly when people are getting seriously ill; if we do not, the effect on them and on healthcare services can be large. Doctors have rules and tests that help them identify patients who are more likely to need treatment, but these do not work well for every patient, and they are not useful for helping patients manage their own illness.

    Using machine learning (AI systems) to analyse breathing and speech sounds automatically could be a game-changer. Firstly, it could reassure many patients that they do not need to see a doctor. Secondly, it could reduce prescriptions for antibiotics by identifying patients who will get better on their own. Identifying patients at higher risk could also reduce hospital admissions, cases of severe illness and the number of deaths. All these effects would reduce pressure on the NHS.

    We already know that some signs, such as breathing faster, can tell us whether an RTI is getting worse, and we know we can measure these signs by recording the sound of the breath. We know that RTIs also affect breathing patterns, the sound of speech and the effort of breathing while speaking. We believe that other breathing sounds and patterns are also likely to change when you get an RTI, and this is something we want to explore in this project. We aim to find information in sound recordings of breathing, cough and speech which changes in a predictable way as a person gets sicker or recovers. We will need to research which sounds we should record and how we should analyse them to get the most useful information. A study into how these sounds change over time will give us added information not previously explored in any great depth. We have already worked with sounds from people with COVID-19, so we know that many people will volunteer to take part and give us their sound data if we give them an app, and that this is a very cost-effective way to study how symptoms of a disease change over time.

    To be confident about using a machine learning system to treat patients, doctors need to know whether it is giving good advice. If they know a sound recording or a prediction is not very dependable, they can make sure they do extra checks or ask the patient to re-record their sounds. We plan to develop a machine learning system that can rate how reliable its own advice is each time, which will help doctors know when to trust the system. Designing machine learning systems that can tell us about the quality of their advice is something new we will be exploring in this study.

    Our project will ask volunteers to use an app to collect speech and breathing sound data. They will be asked to make a recording when they are healthy and then another one every day if they get an RTI. The app will also collect other health information, such as any medication they take and any other illness they may have. The machine learning system will process the data to predict whether they are getting better or worse and rate its own confidence in its prediction. GPs will use patients' medical records to tell us which of the volunteers came to see their doctor for treatment and whether anyone had to go to hospital. This will allow us to assess the quality of the advice from the machine learning system. Our aim is to develop a machine learning system that can assess whether someone with an RTI should see their doctor for advice or can expect to get better without treatment.

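    The proposal does not specify how the system will rate its own advice, but one standard way is to use an ensemble and derive a confidence score from the entropy of its averaged prediction, as in this hypothetical sketch (feature extraction from the sound recordings is omitted and the data here is synthetic):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def predict_with_confidence(model, features):
        """Return (predicted class, self-rated confidence in [0, 1]).

        The forest's averaged class probabilities give a predictive
        distribution; low entropy means the ensemble members agree, so
        the advice is flagged as more dependable.
        """
        probs = model.predict_proba(features.reshape(1, -1))[0]
        entropy = -np.sum(probs * np.log(probs + 1e-12))
        confidence = 1.0 - entropy / np.log(len(probs))   # 1 = certain
        return int(np.argmax(probs)), float(confidence)

    # Toy usage on synthetic "acoustic features" (0 = recovering, 1 = worsening).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    label, confidence = predict_with_confidence(model, X[0])
    ```
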
  • Funder: UK Research and Innovation
    Project Code: AH/Z505651/1
    Funder Contribution: 268,770 GBP

    Over the last decade, the street has emerged as one of the primary sites where everyday publics encounter AI. Industry and public sector organisations have deployed a variety of AI-based technologies in UK streets, from autonomous vehicles (AVs) to navigation apps, data-driven modelling in smart city projects and facial recognition technologies (FRT). These deployments have been accompanied by significant policy initiatives defining the societal benefits of AI-driven innovation (safety, levelling up, sustainability, inclusion), as well as institutional engagements with affected communities through policy exhibitions, user-centred workshops and citizen cafés. However, from the perspective of the street, AI innovation often manifests as a messy social reality, provoking frictions that exceed existing frameworks for responsible innovation: in Cambridge, firefighters battling a fire had to move a delivery robot that was in their way, while in Australia suburbs were left without electricity after a food delivery drone made an emergency landing on top of a set of power lines. There remain, then, significant divergences between general frameworks for responsible AI and the particular lived realities of AI in the street.

    To build capacity among everyday publics and AI innovation consortia to engage across such divides, this 6-month project will develop a situated, creative approach to public engagement with AI: street-level observatories of everyday AI. The aim of these observatories is to explore how everyday publics perceive and engage with AI at a primary site, city streets, where specific transformations, benefits, harms and (ir)responsibilities of AI in society can be made visible, and thus legible, for both publics and stakeholders. To realise this, we will collaborate with local partners and the arts to trial creative interventions that invite people on the street to observe the effects of AI in the lived environment.

    Our scoping project will 1) build partnerships across the humanities, arts and social sciences and with organisations and groups committed to situated forms of public engagement with AI-based science and innovation in connected and automated cities. In partnership with local government, we will 2) trial street-level AI observatories in four diverse UK cities (Cambridge, Coventry, London and Edinburgh) and one international location, Logan (Australia). The observatories will combine digital, place-based and/or embodied approaches, such as data walks and sensor media (apps), and will be designed to support shared learning across the project teams and partners. Trialling AI observatories in city streets will enable us to undertake 3) a joint process of evaluating and prototyping an everyday AI observatory. This will make visible the entanglement of everyday social life with AI, showing people and technologies in complex real-world settings where sectoral, disciplinary and specialist interests intersect. It will be a space of interest to partners in local and national government, public policy innovation, and AI scientists and industry representatives, and will create opportunities for developing shared understandings of societal responses and priorities between industry, policymakers, researchers and everyday publics.
