
Canon Medical Research Europe Ltd

9 Projects, page 1 of 2
  • Funder: UK Research and Innovation | Project Code: EP/X017680/1
    Funder Contribution: 202,351 GBP

    The prospect of an AI-based revolution and its socio-economic benefits is tantalising. We want to live in a world where AI learns effectively, with high performance and minimal risk; such a world is extremely exciting. We tend to believe that AI learns higher-level concepts from data, but this is not what happens. Particularly with data such as images, AI extracts rather trivial (low-level) notions, even when provided with millions of examples. We often hear that providing more, and more diverse, data should improve the information that AI can extract. Amassing data in this way, however, has privacy and cost implications; considerable cost also comes from the need to pre-process and sanitise data (i.e. remove unwanted information). More critically, in several key applications (e.g. healthcare) some events (e.g. disease) can be rare or truly unique, and collecting more and more data will not change the relative frequency of such rare data. It appears that current AI is not data efficient: it poorly leverages the goldmine of information present in unique and rare data.

    This project aims to answer a key research question: **Why does AI struggle with concepts, and what is the role of unique data?** We suspect there are several reasons: A) The mechanisms we use to extract information from data (known as representation learning) rely on very simple assumptions that do not reflect how real data exist in the world. For example, we know that real data exhibit correlations, yet current methods assume there are none at all. We propose to introduce stronger assumptions of causal relationships between the concepts we want to extract, which should in turn help us extract better information. B) To learn any model, we must use optimisation processes to find its parameters. We find a weakness in these processes: data that are unique and rare receive little attention, and any attention they do receive happens by chance. This leads to considerable inconsistency in the extraction of information. In addition, wrong information is sometimes extracted, either because we found suboptimal representations or because the model latched onto data that escaped the sanitisation process, since no such process can be guaranteed to be perfect. We want to understand why this inconsistency exists and propose to devise methods that ensure that, when we train models, we can consistently extract information even from rare data. There is a tight connection between B and A: without new methods that better optimise learning functions we cannot extract representations reliably from rare data, and hence we cannot impose the causal relationships we need. An additional element of this work helps answer the second part of the question: rare and unique data may actually reveal unique causal relationships, a tantalising prospect that the proposed work aims to investigate.

    The rewards of the proposed work are considerable and broad. We lay here the underpinnings of an AI that, because it is data efficient, should not require the blind amassing of data, with all the privacy fears this engenders for the general public. Because it learns high-level concepts, it will be better suited to powering decision tools that can explain how decisions have been reached. And because we introduce strong causal priors when extracting these concepts, we reduce the risk of learning trivial data associations. Overall, a major goal of the AI research community is to create AI that can generalise to new, unseen data beyond what was available at training time. We hope that our AI will bring us closer to this goal, further paving the way for broader deployment of AI in the real world.
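
    The weakness described in point B, that rare examples are drowned out during optimisation, can be made concrete with a much simpler and well-known remedy than anything the project itself proposes: reweighting the training loss by inverse class frequency. The sketch below is a minimal, generic PyTorch illustration on synthetic data; it is not the project's method, only an example of the kind of imbalance the proposal targets.

```python
# Illustrative sketch only: inverse-frequency loss weighting on a synthetic,
# highly imbalanced dataset, so the rare class is not ignored by the optimiser.
import torch
import torch.nn as nn

torch.manual_seed(0)

# 980 "common" examples vs 20 "rare" ones.
x = torch.randn(1000, 16)
y = torch.cat([torch.zeros(980, dtype=torch.long), torch.ones(20, dtype=torch.long)])

# Inverse-frequency class weights: each class contributes comparably to the loss.
counts = torch.bincount(y).float()
weights = counts.sum() / (len(counts) * counts)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss(weight=weights)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print("final weighted training loss:", loss.item())
```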

  • Funder: UK Research and Innovation | Project Code: EP/T026111/1
    Funder Contribution: 254,575 GBP

    There is a silent but steady revolution happening in all sectors of the economy, from agriculture through manufacturing to services. In virtually all activities in these sectors, processes are being constantly monitored and improved via data collection and analysis. While there has been tremendous progress in data collection through a panoply of new sensor technologies, data analysis has proved to be a much more challenging task. Indeed, in many situations the data generated by sensors come in quantities so large that most of them end up being discarded. Also, sensors often collect different types of data about the same phenomenon, the so-called multimodal data, and it is hard to determine how the different types of data relate to each other or, in particular, what one sensing modality tells us about another.

    In this project, we address the challenge of making sense of multimodal data, that is, data that refer to the same phenomenon but reveal different aspects of it and are usually presented in different formats. For example, several modalities can be used to diagnose cancer, including blood tests, imaging technologies like magnetic resonance (MR) and computed tomography (CT), genetic data, and family history information. Each of these modalities is typically insufficient to perform an accurate diagnosis but, when considered together, they usually lead to an undeniable conclusion. Our starting point is the realization that different sensing modalities have different costs, where "cost" can be financial, refer to safety or societal issues, or both. For instance, in the above example of cancer diagnosis, CT imaging involves exposing patients to X-ray radiation which, ironically, can provoke cancer. MR imaging, on the other hand, exposes patients to strong magnetic fields, a procedure that is generally safe. A pertinent question is then whether we can perform both MR and CT imaging, but use a lower dose of radiation in CT (obtaining a poor-resolution CT) and, afterwards, improve the resolution of the CT by leveraging information from the MR. This, of course, requires learning what type of information can be transferred between different modalities. Another example scenario is autonomous driving, in which sensors like radar, LiDAR, or infrared cameras, although much more expensive than conventional cameras, collect information that is critical to driving safely. In this case, is it possible to use cheaper, lower-resolution sensors and enhance them with information from conventional cameras? These examples also demonstrate that many of the scenarios in which we collect multimodal data have robustness requirements, namely precision of diagnosis in cancer detection and safety in autonomous driving.

    Our goal is therefore to develop data processing algorithms that effectively capture common information across multimodal data, leverage this shared structure to improve reconstruction, prediction, or classification of the costlier (or all) modalities, and are verifiable and robust. We do this by combining learning-based approaches with model-based approaches. In recent years, learning-based approaches, namely deep learning methods, have reached unprecedented performance; they work by extracting information from large datasets. Unfortunately, they are vulnerable to so-called generalization errors, which occur when the data to which they are applied differ significantly from the data used in the learning process. Model-based methods, on the other hand, tend to be more robust but generally have poorer performance. The approaches we propose to explore use learning-based techniques to determine correspondences across modalities and extract relevant common information, and then integrate that common information into model-based schemes. Their ultimate goal is to compensate for cost and quality imbalances across the modalities while, at the same time, providing robustness and verifiability.
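
    To make the MR-guided low-dose CT idea above more tangible, the following is a minimal, self-contained sketch of one classical model-based formulation: a noisy low-dose image is restored with a data-fidelity term plus a smoothness penalty whose local strength is set by edges detected in a co-registered guide image. It uses only synthetic arrays and illustrates the general principle of transferring structural information between modalities; it is not the algorithm proposed in this project.

```python
# Illustrative sketch only: guided, model-based denoising of a synthetic
# "low-dose CT" image using edge weights derived from a synthetic "MR" guide.
# Minimises  0.5*||x - y||^2 + 0.5*lam*sum(w |grad x|^2)  by gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground truth with one shared anatomical edge.
truth = np.zeros((64, 64))
truth[:, 32:] = 1.0
mr_guide = 0.5 * truth + 0.05 * rng.standard_normal(truth.shape)  # clean-ish guide
ct_noisy = truth + 0.4 * rng.standard_normal(truth.shape)         # low-dose proxy

def forward_diff(img):
    """Forward differences along x and y (last column/row padded)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

# Edge weights from the guide: smooth strongly where the MR is flat,
# weakly across MR edges, so shared anatomy is preserved in the CT.
ggx, ggy = forward_diff(mr_guide)
wx = np.exp(-(ggx / 0.2) ** 2)
wy = np.exp(-(ggy / 0.2) ** 2)

x, lam, step = ct_noisy.copy(), 2.0, 0.05
for _ in range(400):
    gx, gy = forward_diff(x)
    # Divergence of the weighted gradient (negative gradient of the penalty).
    div = (np.diff(wx * gx, axis=1, prepend=(wx * gx)[:, :1])
           + np.diff(wy * gy, axis=0, prepend=(wy * gy)[:1, :]))
    x -= step * ((x - ct_noisy) - lam * div)

print(f"noisy MSE:  {np.mean((ct_noisy - truth) ** 2):.4f}")
print(f"guided MSE: {np.mean((x - truth) ** 2):.4f}")
```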

  • Funder: UK Research and Innovation | Project Code: MR/X03349X/1
    Funder Contribution: 1,554,200 GBP

    Childhood obesity is a worldwide epidemic. This is of concern because childhood obesity is associated with an increased risk of premature death and disability in adulthood, due to higher rates of noncommunicable diseases at a younger age. It is recognised that, whilst the causes of childhood obesity are multifaceted, accurate measurement is essential. As direct measurement of adipose tissue (body fat) and health is resource intensive, we rely on measurement of external body size to infer obesity, through an understanding of the association between external body size and adipose tissue, and between adipose tissue and health. Body Mass Index (BMI) is the most widely used measure of external body size for estimating adipose tissue to infer obesity. Although manually acquirable, simple to calculate and accompanied by standardised risk thresholds, BMI is fundamentally flawed: it cannot accurately estimate or detect change in adipose tissue, or infer health, particularly for children and some ethnicities (namely Asian and Black ethnic groups). Furthermore, because BMI is created from multiple highly related measures, it is impossible to untangle the causal role of each component measure, leading to incorrect interpretations. Complete, revolutionary reform is needed, focusing on the causal relationship between adiposity and health, and on the ability and suitability of directly measured external body size measures (lengths, breadths, girths, volumes and areas) to predict adiposity, and so infer health, in children across body sizes, ages, genders, social backgrounds and ethnicities.

    I am a Sports Engineer specialising in the body measurement of elite athletes, exploring the impact and value of direct external body size measures on performance and training. Using this expertise, I will undertake disruptive interdisciplinary research, translating my expertise from elite sport to childhood obesity, aiming to revolutionise and reform external body size measures and methods for children living with obesity. To achieve this, I will:

    1. Identify the most accurate external body size measures to assess factors (adiposity and health) causally associated with obesity in children.
    2. Identify the most suitable external body size measures and body measurement methods for children.
    3. Create Child Anthrobank, the world's first data repository dedicated to child body measurement.
    4. Develop myself as an independent global leader in child body measurement.

    In doing so, the fellowship will:

    1. IMPROVE CHILD BODY MEASUREMENT METHODS, through identification of the most accurate and suitable external body size measures to assess factors (adiposity and health) causally associated with obesity in children. This will allow childhood obesity and child health to be measured more accurately, ensuring accurate diagnosis and monitoring to underpin treatment for individuals, and accurate interpretation of the epidemiology of these conditions to underpin the planning of appropriate services. The fundamental aim is to reduce child obesity rates and improve child health and wellbeing in the UK and globally.
    2. SUPPORT BUSINESS AND MANUFACTURING. By establishing Child Anthrobank, we will provide access to up-to-date, representative child body measurement data and methods, facilitating and minimising the barriers to research, design and innovation of theories, standards, guidelines, methods, services and products on/for children across disciplines and applications, and in doing so underpin and accelerate improvements in child health and wellbeing that impact society and the economy, in the UK and globally.
    3. CONDUCT AND FACILITATE WORLD-LEADING RESEARCH, within and beyond this fellowship, through the creation of a global leader in the field and of a dedicated data repository.

    Thus, this fellowship will benefit the public, health care practitioners, researchers, businesses and policy makers, meaningfully impacting society and the economy, in the UK and globally.
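
    For readers unfamiliar with the index criticised above, the snippet below shows the BMI calculation (weight in kilograms divided by height in metres squared) and, using purely hypothetical numbers rather than project data, how two children of quite different size can share the same BMI value, which is part of why the abstract argues its component measures cannot be untangled.

```python
# BMI = weight (kg) / height (m)^2. The numbers below are hypothetical and
# only illustrate that very different bodies can map to the same BMI.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

child_a = {"weight_kg": 32.0, "height_m": 1.30}    # hypothetical child
child_b = {"weight_kg": 45.6, "height_m": 1.552}   # hypothetical taller, heavier child

print(f"Child A BMI: {bmi(**child_a):.1f}")   # ~18.9
print(f"Child B BMI: {bmi(**child_b):.1f}")   # ~18.9
```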

  • Funder: UK Research and Innovation | Project Code: EP/X03075X/1
    Funder Contribution: 3,211,470 GBP

    Developing new health technologies is complicated and often fails to lead to improved patient care. Successfully taking an idea through the necessary research studies and developing it to the point of use in the NHS requires many different areas of expertise. These include understanding patients' and health professionals' needs, medical and healthcare environments, engineering and digital technologies, design, manufacturing, legal and ethical regulation, business development, how to obtain funding, and many other topics (in our application we refer to these areas as the "Innovation Curriculum"). Our Hub covers a region of 1.4 million people that is affected by high levels of disease and health inequalities. Our team includes all regional NHS organisations, including GPs, adult and children's hospitals, mental health services and the recently introduced South Yorkshire "Integrated Care System", hundreds of researchers from the University of Sheffield and Sheffield Hallam University, many large and small companies, and patient and public groups. Between them, these partners have all the necessary expertise and experience to develop new Digital Health technologies to the point of use in the NHS. We will help researchers develop Digital Health technologies by training them in all aspects of the Innovation Curriculum, and by supporting them to work together with the NHS and patients on real ideas and projects. We will hold Citizens' Juries to understand the public's and patients' views of Digital Health and to help design our research. We will produce sixty hours of training in Digital Health for researchers, clinicians, patients and the public, freely available and accredited through our partnership with YouTube's authoritative health content programme. We will hold regular "Calls for Ideas", where we support project teams and train them in Digital Health, providing the most promising ideas with initial project funding to help take them towards potential commercialisation.

  • Funder: UK Research and Innovation | Project Code: EP/T017961/1
    Funder Contribution: 1,295,780 GBP

    In the current edition of the CMIH we have built up a strong pool of researchers and collaborations spanning mathematics, statistics, engineering, medical physics and clinical practice. Our work has confirmed that imaging data is a very important diagnostic biomarker, but also that non-imaging data in the form of health records, memory tests and genomics are precious predictive resources, and that, when combined in appropriate ways, these data should be the source for AI-based healthcare of the future. Following this philosophy, the new CMIH brings together researchers from mathematics, statistics, computer science and medicine, with clinicians and relevant industrial stakeholders, to develop rigorous and clinically practical algorithms for analysing healthcare data in an integrated fashion, for personalised diagnosis and treatment as well as target identification and validation at a population level. We will focus on three medical streams: Cancer, Cardiovascular disease and Dementia, which remain the top three causes of death and disability in the UK. Whilst applied mathematics and mathematical statistics are still commonly regarded as separate disciplines, there is an increasing understanding that a combined approach, removing historic disciplinary boundaries, is the only way forward. This is especially the case when addressing methodological challenges in data science using multi-modal data streams, such as the research we will undertake at the Hub. This holistic approach will support the Hub's aim of bringing AI for healthcare decision-making to clinical end users.
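
    As a purely generic illustration of the kind of integrated, multi-modal analysis described above (and not the Hub's actual pipeline), the sketch below concatenates imaging-derived features with non-imaging clinical variables and fits a single predictive model on synthetic data.

```python
# Generic sketch with synthetic data: combine imaging-derived features with
# non-imaging clinical variables in one predictive model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
imaging_features = rng.standard_normal((n, 20))   # e.g. embeddings from scans
clinical_features = rng.standard_normal((n, 5))   # e.g. age, memory test scores
X = np.hstack([imaging_features, clinical_features])

# Synthetic outcome depending on both modalities.
y = (X[:, 0] + 0.5 * X[:, 20] + 0.3 * rng.standard_normal(n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```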
