Powered by OpenAIRE graph

Amazon Web Services (UK)

9 Projects, page 1 of 2
  • Funder: UK Research and Innovation Project Code: MR/Y034120/1
    Funder Contribution: 468,439 GBP

    The proposed fellowship uses multimodal cancer data to help predict treatment response, personalise patient care, and improve survival and quality of life after treatment. By harnessing diverse datasets, AI models are trained to identify subtle patterns and indicators. This aims not only to customise treatment plans to individual patient needs but also to mitigate the risks associated with radiotherapy, potentially reducing the occurrence of debilitating side effects and improving overall treatment efficacy. The project will focus on clinical, anatomical, and biological patient data. First, the project employs AI to analyse complex pathology slide images. This research is set to transform tumour diagnostics by providing unprecedented insights into the microenvironment of cancers, with the aim of uncovering new diagnostic markers and patterns and enhancing the accuracy of tumour classification and staging. Fusing these AI-generated insights with the prognostic features recognised by pathologists will lead to more precise results. Furthermore, recognising the gap in the application of AI in clinical settings, part of the research is focused on developing a web-based platform that makes AI tools readily available and user-friendly for clinicians. The platform is envisioned to provide non-specialist healthcare professionals with access to state-of-the-art AI analysis for radiology and histopathology, so that clinicians can benefit from AI-powered insights in real time, enhancing their decision-making in patient care. This service aims to democratise the use of AI in healthcare, making it a standard part of clinical practice and thus accelerating the adoption of AI in medical diagnostics and treatment planning. Collectively, these components of my research represent a significant leap forward in the application of AI to cancer care.
    The project is not just about technological innovation; it is about fundamentally transforming the approach to cancer diagnosis and treatment.

  • Funder: UK Research and Innovation Project Code: EP/Y017749/1
    Funder Contribution: 574,025 GBP

    Clinicians, patients and policy makers lack access to accurate, real-time information on new treatments for cancer, because such a large amount of information is continuously generated that it cannot be manually analysed in a timely fashion. This is sometimes referred to as a health 'infodemic'. Information analysed to create clinical evidence (known as systematic reviews) quickly goes out of date, and national bodies responsible for appraising new treatments, such as the National Institute for Health and Care Excellence (NICE), are unable to keep up. It is increasingly hard to detect misinformation published within the medical literature, and an increasing number of papers have to be withdrawn after publication. INDICATE is a deep learning tool for the autonomous generation of systematic reports and the analysis of both structured and unstructured data from the published literature on cancer. It has been developed through a collaboration between Imperial College London, Amazon Web Services, NICE and the British Medical Journal (BMJ). The aim is to develop a methodology for the real-time analysis of healthcare infodemics that can be used to autonomously create clinical guidance and identify misinformation. This project will build on previous work developing AI methodologies that automate how we search the medical literature, and it will intelligently support peer reviewers as they appraise and assess the quality of research papers. This work has three main goals: 1. To develop a tool for detecting research fraud. 2. To assess whether our AI tools can speed up the creation of NICE guidance. 3. To develop autonomous summary reports of clinical evidence on breast cancer treatment that could be used by medical publishers. The study group will work with clinicians, researchers and NICE to define and prioritise the critical questions that require answering and to refine the user interface for the system.
    Moreover, we will prospectively validate the performance of the system to determine the accuracy of its reporting mechanism. The validated data generated by this study will form the basis of a phase II study that scales the number of cancer types and trials the technology in a real-world clinical environment.

  • Funder: UK Research and Innovation Project Code: EP/W004755/1
    Funder Contribution: 301,430 GBP

    This project is about devising and implementing a smart operating room environment, powered by trustworthy, human-understanding artificial intelligence, able to continually adapt and learn the best way to optimise safety, efficacy, teamwork, economy, and clinical outcomes. We call this concept MAESTRO. A fitting analogy for MAESTRO is that of an orchestra conductor, a 'maestro', who oversees, overhears and directs a group of people on a common task, towards a common goal: a masterful musical performance. Although the music score is identical for all orchestras, there is no doubt that they all perform it in different ways, and some significantly better than others. Although the quality and personality of orchestra musicians is very important, it is widely accepted that the role of the maestro is crucial, and extends beyond the duration of the musical performance to rehearsals and an understanding of the context behind the music score. Thus, while it is possible for orchestras to perform without conductors, most cannot function without one. Our proposed MAESTRO AI-powered operating room of the future revolves around four key elements: (a) The holistic sensing of patient, staff, operating room environment and equipment through an array of diverse sensor devices. (b) Human-centric artificial intelligence, able to continually understand situations and actions developing in the operating room, and to intervene when necessary. (c) The use of advanced human-machine user interfaces for augmenting task performance. (d) A secure device interconnectivity platform, allowing the full integration of all the above key elements.
    As in our orchestra analogy, our envisioned MAESTRO directs the OR staff and surgical devices before, during and after a surgical procedure by: (1) Sensing surgical procedures in all their aspects, including those which are currently neglected, such as the physiological responses of staff (e.g., heart rate, blood pressure, sweating, pupil dilation), focus of attention, brain activity, as well as harmful events that may escape the attention of the clinical team. (2) Overseeing individual and team performance in real time, throughout the operation and across different types of surgeries and different teams. (3) Guiding and assisting the surgical team via automated checkpoints, virtual and augmented visualisations, warnings, individualised and broadcast alerts, automation, semi-automation, robotics, and other aids and factors that can affect performance in the operating room. (4) Augmenting and optimising individual and collective operational capabilities, skills, and task ergonomics, through novel human-machine interaction and interfacing modalities. The project is designed to have a significant societal, economic and technological impact, and to establish the NHS as a leading healthcare paradigm worldwide. MAESTRO leverages the expertise of top researchers in the areas of robotics, sensing, artificial intelligence, human factors, health policies and patient safety. It is co-designed in collaboration with top clinicians, one of the largest NHS Trusts in England, patient groups, performing artists, and several small and medium-sized enterprises and large multinational industries operating in the areas of artificial intelligence, medical devices, digital health, large networks, cloud services and cyber security.

  • Funder: UK Research and Innovation Project Code: EP/X031276/1
    Funder Contribution: 4,195,580 GBP

    The Digital Health Hub for Antimicrobial Resistance (AMR) aims to harness innovative digital technologies to ultimately transform antimicrobial one-health surveillance and antimicrobial stewardship, recognising the interconnectedness of AMR between humans, animals and the environment. The World Health Organization declared AMR, also known as the 'silent pandemic', a top 10 global public health threat facing humanity. AMR also ranks on the UK Cabinet Office Risk Register, and yet despite this recognition, there remain alarmingly low levels of attention and funding for AMR prevention. The 2016 O'Neill Review on Antimicrobial Resistance highlights that by 2050, 10 million lives a year and a cumulative US$100 trillion of economic output are at risk unless action is taken to reduce AMR. Resistant pathogens from animals, humans and food can be cross-transmitted, and environmental reservoirs are a potentially important domain in which the mobilisation and transfer of resistant genes occur. Thus, an integrated One Health approach to AMR surveillance and public health action is needed. Moreover, there is growing concern that climate change could increase the risk of emerging and re-emerging infectious diseases. There is growing recognition of the importance of data science and digital health technologies in the fight against AMR, though the field remains in its infancy. The COVID-19 pandemic has dramatically accelerated advances in digital health technologies, driven by unprecedented need, and there is a huge opportunity to leverage these advances for AMR.
    However, many challenges remain: poor understanding of one-health needs; data linkage, silos and gaps that hinder surveillance; the lack of rapid tests; the lack of public awareness of AMR; digital interventions that often do not prioritise user-led design and are not grounded in behaviour change; data privacy, security and ethical issues in bringing together large datasets; health inequalities and the digital divide; the disconnect between early-stage research and AMR needs; and a lack of understanding of how digital technologies can be commercialised, regulated and integrated into health systems and patient pathways. The Digital Health Hub for AMR brings together a critical mass of Co-Is working across traditional disciplines for AMR, including computer science, biomedical engineering, behavioural social science, environmental science, data visualisation, and clinical and public health research, from five universities, the NHS, the UK Health Security Agency, the Centre for Ecology and Hydrology, charities and industry partners. Our hub vision will be achieved through five objectives: 1. Systems-level needs: To nurture a new culture of cross-sector engagement to accelerate the creation and adoption of digital health innovations for AMR one-health surveillance and antimicrobial stewardship. 2. Skills and Capacity: To grow the interdisciplinary skills, capacity, knowledge sharing and leadership needed to deliver a world-leading digital health strategy for combating AMR. 3. Grand Challenges: To co-create digital health solutions for two AMR grand challenges: i) digital one-health surveillance of antibiotic use and AMR, linking human, animal and environmental data; and ii) digital antimicrobial stewardship via decision support algorithms, digital diagnostics, wearables and sensors. 4. Partnership Fund: To grow critical mass and a hub of innovation by seeding interdisciplinary pilot studies between industry, academia, health and social care. 5. Impact and Engagement: To maximise hub impact and EPSRC's investment through our communications strategy, patient and public engagement, and biannual conferences and events.

  • Funder: UK Research and Innovation Project Code: EP/R006865/1
    Funder Contribution: 6,146,080 GBP

    The smooth functioning of society is critically dependent not only on the correctness of programs, particularly of programs controlling critical and high-sensitivity core components of individual systems, but also upon correct and robust interaction between diverse information-processing ecosystems of large, complex, dynamic, highly distributed systems. Failures are common, unpredictable, highly disruptive, and span multiple organizations. The scale of systems' interdependence will increase by orders of magnitude in the next few years. Indeed, by 2020, with developments in Cloud, the Internet of Things, and Big Data, we may be faced with a world of 25 million apps, 31 billion connected devices, 1.3 trillion tags/sensors, and a data store of 50 trillion gigabytes (data: IDC, ICT Outlook: Recovering Into a New World, #DR2010_GS2_JG, March 2010). Robust interaction between systems will be critical to everyone and every aspect of society. Although the correctness and security of complete systems in this world cannot be verified, we can hope to be able to ensure that specific systems, such as verified safety-, security-, or identity-critical modules, are correctly interfaced. The recent success of program verification notwithstanding, there remains little prospect of verifying such ecosystems in their entirety: the scale and complexity are just too great, as are the social and managerial coordination challenges. Even being able to define what it means to verify something that is going to have an undetermined role in a larger system presents a serious challenge. It is perhaps evident that the most critical aspect of the operation of these information-processing ecosystems lies in their interaction: even perfectly specified and implemented individual systems may be used in contexts for which they were not intended, leading to unreliable, insecure communications between them.
We contend that the interfaces supporting such interactions are therefore the critical mechanism for ensuring systems behave as intended. However, the verification and modelling techniques that have been so effective in ensuring the reliability of low-level features of programs, protocols, and policies (and so the reliability of the software that drives large systems) are, essentially, not applied to reasoning about such large-scale systems and their interfaces. We intend to address this deficiency by researching the technical, organizational, and social challenges of specifying and verifying interfaces in system ecosystems. In so doing, we will drive the use of verification techniques and improve the reliability of large systems. Complex systems ecosystems and their interfaces are some of the most intricate and critical information ecosystems in existence today, and are highly dynamic and constantly evolving. We aim to understand how the interfaces between the components constituting these ecosystems work, and to verify them against their intended use. This research will be undertaken through a collection of themes covering systems topics where the interface is crucially important, including critical code, communications and security protocols, distributed systems and networks, security policies, business ecosystems, and even extending to the physical architecture of buildings and networks. These themes are representative of the problem of specifying and reasoning about the correctness of interfaces at different levels of abstraction and criticality. Interfaces at each degree of abstraction and criticality can be studied independently, but we believe that it will be possible to develop a quite general, uniform account of specifying and reasoning about them. It is unlikely that any one level of abstraction will suggest all of the answers: we expect that the work of the themes will evolve and interact in complex ways.
