Amazon Web Services (Not UK)
13 Projects, page 1 of 3
Project 2020 - 2021
Partners: Western Cape Government, Biotip Ltd, SETPOM, Peculiar Grace Youth Empower Initiative, ScienceScope, Lagos State Government, Amazon Web Services (Not UK), Earthwatch Europe, CENTER FOR HUMAN DEVELOPMENT, INC., Made Culture Lagos, University of Bath, Conserv Educ & Res Trust UK (Earthwatch), City of Cape Town, Department of Water Affairs, TecLab, Abington Partners, Mayden, Department of Water & Sanitation, Amazon Web Services, Inc.
Funder: UK Research and Innovation. Project Code: EP/T029986/1. Funder Contribution: 126,722 GBP
An unprecedented rate of urbanization poses substantial risks to the resilience of cities, with public health and welfare being the most critical concerns. These include the emergence of communicable and non-communicable disease epidemics due to environmental contamination and lifestyle factors. To increase the sustainability of cities, there is a critical need for an early warning system (EWS) for public and environmental health diagnostics that operates at large scale and in real time. Rapid urbanisation and the young, growing population of Africa are also linked with rapid digitisation and an unprecedented uptake of new technology. This presents a unique opportunity to develop a comprehensive, real-time, digital-technology-based EWS attuned to public and environmental health risks in rapidly changing Africa. We propose to build a network aiming to develop a public and environmental health diagnostics and hazard forecasting platform in Africa via urban environment fingerprinting underpinned by digital innovation. EDGE-I will develop a conceptual model (and a prototype in EDGE-II) of an environment fingerprinting platform for hazard forecasting and EWS using digital innovation and state-of-the-art bioanalytical, socioeconomic, statistical and modelling tools. The digital innovation will focus on Internet of Things (IoT) enabled sensors and cloud computing as a platform for capturing, storing, processing, and presenting a wide range of environmental measures to a broad group of stakeholders (a minimal illustrative sketch of such a sensor-to-cloud pipeline follows this summary). EDGE will focus on two key thematic areas of critical importance to rapidly growing and urbanising Africa: (1) water, sanitation and public health, as a vector for infectious disease spread and environmental AMR; (2) urbanization and pollution, as a vector for environmental degradation and non-communicable disease. EDGE postulates that the measurement of endogenous and exogenous, environment- and human-derived residues, continuously and anonymously pooled by the receiving environment (sewage, rivers, soils and air), can provide near real-time dynamic information about the quantity and type of physical, biological or chemical stressor to which the surveyed system is exposed, and can profile the effects of this exposure. It can therefore provide anonymised, comprehensive and objective information on the health status of urban dwellers and surrounding environments in real time, as the urban environment continuously pools anonymous urine, wastewater and runoff samples from thousands of urban dwellings. EDGE-I will focus on building the concept for a prototype EWS in two geographically and socioeconomically contrasting areas in Africa: Lagos (Nigeria) and Cape Town (South Africa).
The young and growing population of Africa, which is rapidly taking up digital innovation, provides a unique opportunity to build a system underpinned by digital channels with long and lasting impacts. To achieve the above, EDGE-I will: (1) develop a transdisciplinary and cross-sectoral network focussed on building an EWS in Africa; (2) develop a conceptual model of an EWS in Africa underpinned by digital innovation in technological solutions and Citizen Science; (3) engage with stakeholders, from citizens, through government, to the digital tech industry. EDGE-I will catalyse the development of a large-scale research programme (EDGE-II).
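The project summary describes the sensor-to-cloud platform only at a high level. The following is a minimal illustrative sketch of that general pattern, not the project's actual system: the ingestion URL, site identifier, payload fields and readings are all hypothetical placeholders.

```python
# Purely illustrative sketch of an IoT sensor-to-cloud uploader.
# The ingestion URL, site identifier, payload schema and readings below are
# hypothetical placeholders, not part of the EDGE project description.
import json
import time
import urllib.request

INGEST_URL = "https://ingest.example.invalid/v1/readings"  # placeholder endpoint


def read_sensors() -> dict:
    """Stand-in for real sensor drivers (e.g. water-quality probes)."""
    return {
        "site_id": "demo-site-01",        # hypothetical site identifier
        "timestamp": time.time(),
        "turbidity_ntu": 4.2,             # dummy readings for illustration
        "ph": 7.1,
        "conductivity_us_cm": 310.0,
    }


def upload(reading: dict) -> int:
    """POST one JSON reading to the cloud ingestion endpoint."""
    request = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status


if __name__ == "__main__":
    print("ingest returned HTTP", upload(read_sensors()))
```

In a real deployment the gateway would batch readings, queue them while offline, and authenticate to the cloud service; those concerns are omitted here to keep the sketch short.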
Project 2024 - 2029
Partners: Chief Scientist Office (CSO), Scotland, Endeavour Health Charitable Trust, Zeit Medical, Scotland 5G Centre, Gendius Limited, Research Data Scotland, CANCER RESEARCH UK, Health Data Research UK (HDR UK), Nat Inst for Health & Care Excel (NICE), NHS Lothian, Manchester Cancer Research Centre, Hurdle, Amazon Web Services (Not UK), Sibel Health, Canon Medical Research Europe Ltd, The MathWorks Inc, Queen Mary University of London, UCB Pharma UK, Evergreen Life, Scottish AI Alliance, Spectra Analytics, ELLIS, Scottish Ambulance Service, Institute of Cancer Research, Univ Coll London Hospital (replace), Willows Health, Life Sciences Scotland, PrecisionLife Ltd, Healthcare Improvement Scotland, NHS NATIONAL SERVICES SCOTLAND, Data Science for Health Equity, Kheiron Medical Technologies, Indiana University, McGill University, University of Dundee, NHS GREATER GLASGOW AND CLYDE, The Data Lab, Mayo Clinic and Foundation (Rochester), Microsoft Research Ltd, Samsung AI Centre (SAIC), ARCHIMEDES, University of Edinburgh, Bering Limited, University of California Berkeley, Huawei Technologies R&D (UK) Ltd, British Standards Institution BSI, Digital Health & Care Innovation Centre, CausaLens, Meta (Previously Facebook)
Funder: UK Research and Innovation. Project Code: EP/Y028856/1. Funder Contribution: 10,288,800 GBP
The current AI paradigm at best reveals correlations between model input and output variables. This falls short of addressing health and healthcare challenges where knowing the causal relationship between interventions and outcomes is necessary and desirable. In addition, biases and vulnerabilities arise in AI systems as models may pick up unwanted, spurious correlations from historic data, widening already existing health inequalities. Causal AI is the key to unlocking robust, responsible and trustworthy AI and transforming challenging tasks such as early prediction, diagnosis and prevention of disease. The Causality in Healthcare AI with Real Data (CHAI) Hub will bring together academia, industry, healthcare, and policy stakeholders to co-create the next generation of world-leading artificial intelligence solutions that can predict outcomes of interventions and help choose personalised treatments, thus transforming health and healthcare. The CHAI Hub will develop novel methods to identify and account for causal relationships in complex data. The Hub will be built by the community for the community, amassing experts and stakeholders from across the UK to (1) push the boundaries of AI innovation; (2) develop cutting-edge solutions that drive desperately needed efficiency in resource-constrained healthcare systems; and (3) cement the UK's standing as a next-gen AI superpower. The data complexity in heterogeneous and distributed environments such as healthcare exacerbates the risks of bias and vulnerability and introduces additional challenges that must be addressed. Modern clinical investigations need to mix structured and unstructured data sources (e.g. patient health records and medical imaging exams), which current AI cannot integrate effectively. These gaps in current AI technology must be addressed in order to develop algorithms that can help to better understand disease mechanisms, predict outcomes and estimate the effects of treatments.
This is important if we want to ensure the safe and responsible use of AI in personalised decision making. Causal AI has the potential to unearth novel insights from observational data, formalise treatment effects, assess outcome likelihoods, and estimate 'what-if' scenarios (a toy illustration follows this summary). Incorporating causal principles is critical for delivering on the National AI Strategy to ensure that AI is technically and clinically safe, transparent, fair and explainable. The CHAI Hub will be formed by a founding consortium of powerhouses in AI, healthcare, and data science throughout the UK in a hub-and-spoke model with geographic reach and diversity. The hub will be based in Edinburgh's Bayes Centre (leveraging world-class expertise in AI, data-driven innovation in health applications, a robust health data ecosystem, entrepreneurship, and translation). Regional spokes will be in Manchester (expertise in both methods and translation of AI through the Institute for Data Science and AI, and the Pankhurst Institute), London (hosted at KCL, also representing UCL and Imperial, and leveraging London's rapidly growing AI ecosystem) and Exeter (leveraging strengths in the philosophy of causal inference and the ethics of AI). The hub will develop a UK-wide multidisciplinary network for causal AI. Through extended collaborations with industry, policymakers and other stakeholders, we will expand the hub to deliver next-gen causal AI where it is needed most. We will work together to co-create, moving beyond co-ideation and co-design to co-implementation and co-evaluation where appropriate, to ensure fit-for-purpose solutions. Our programme will be flexible, will embed trusted, responsible innovation and environmental sustainability considerations, will ensure that equality, diversity and inclusion principles are reflected through all activities, and will ensure that knowledge generated through CHAI continues to have real-world impact beyond the initial 60 months.
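The abstract does not fix any particular estimator, but the contrast with correlation-based prediction can be made concrete with the simplest form of confounder adjustment (the backdoor formula) for an average treatment effect. The toy records, variable names and numbers below are invented purely for illustration and are not CHAI methods or outputs.

```python
# Toy illustration of backdoor adjustment: estimate the average treatment
# effect E[Y | do(T=1)] - E[Y | do(T=0)] by stratifying on a confounder Z.
# Records are (z, t, y) tuples; all numbers here are made up.
from collections import defaultdict

records = [
    # z=0: mild cases, rarely treated
    (0, 0, 0.70), (0, 0, 0.72), (0, 1, 0.80), (0, 0, 0.68),
    # z=1: severe cases, usually treated
    (1, 1, 0.50), (1, 1, 0.55), (1, 0, 0.30), (1, 1, 0.52),
]


def average_treatment_effect(data):
    """Sum over strata of P(Z=z) * (E[Y|T=1,Z=z] - E[Y|T=0,Z=z])."""
    by_stratum = defaultdict(lambda: {0: [], 1: []})
    for z, t, y in data:
        by_stratum[z][t].append(y)
    n = len(data)
    ate = 0.0
    for z, arms in by_stratum.items():
        p_z = sum(len(v) for v in arms.values()) / n      # P(Z=z)
        mean_treated = sum(arms[1]) / len(arms[1])         # E[Y | T=1, Z=z]
        mean_control = sum(arms[0]) / len(arms[0])         # E[Y | T=0, Z=z]
        ate += p_z * (mean_treated - mean_control)
    return ate


print(f"adjusted ATE: {average_treatment_effect(records):+.3f}")
```

Without the stratification on Z, a naive comparison of treated versus untreated outcomes in this toy data would be biased, because treatment assignment depends on severity; that confound is exactly what correlation-only models absorb.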
Project 2019 - 2028
Partners: The Tor Project, Hatdex Community Foundation, Microsoft Research Ltd, Privitar, Amazon Web Services, Inc., Lloyd's Register EMEA, Amazon Web Services (Not UK), DeepMind, Cisco Systems UK, Ripple, BARCLAYS BANK PLC, UCL, Creditmint, Veganetwork.io, National Police Chief's Council, CISCO, Spherical Defence, National Cyber Security Centre, Kryptic PBC, MICROSOFT RESEARCH LIMITED, CYBERNETICA AS, Google Deep Mind UK, Barclays Bank plc, Lloyd's Register Foundation, CISCO Systems Ltd, Cisco Systems (United Kingdom), Association of Chief Police Officers, Cybernetica AS (Norway)
Funder: UK Research and Innovation. Project Code: EP/S022503/1. Funder Contribution: 6,096,750 GBP
Recent reports from the Royal Society, the government's cybersecurity strategy, and the National Cyber Security Centre highlight the importance of cybersecurity in ensuring a safe information society. They highlight the challenges faced by the UK in this domain, and in particular the challenges the field poses: from the need for multidisciplinary expertise to address complex problems that span high-level policy to detailed engineering, to the need for an integrated approach between government initiatives, private industry initiatives and wider civil society to tackle both cybercrime and nation-state interference in national infrastructures, from power grids to election systems. They conclude that expertise is lacking, particularly when it comes to multidisciplinary experts with a good understanding of effective work in both government and industry. The EPSRC Doctoral Training Centre in Cybersecurity addresses this challenge and aims to train multidisciplinary experts in engineering secure IT systems, tackling and interdicting cybercrime, and formulating effective public policy interventions in this domain. The training provides expertise in all those areas through a combination of taught modules and training in conducting original world-class research in those fields. Graduates will be domain experts in more than one of the subfields of cybersecurity, namely Human, Organizational and Regulatory Aspects; Attacks, Defences and Cybercrime; Systems Security and Cryptography; Program, Software and Platform Security; and Infrastructure Security. They will receive training in using techniques from computing, social sciences, crime science and public policy to find appropriate solutions to problems within those domains. Further, they will be trained in responsible research and innovation to ensure that research, technology transfer and policy interventions are protective of people's rights, compatible with democratic institutions, and improve the welfare of the public. Through a program of industrial internships, all doctoral students will familiarize themselves with the technologies, policies and challenges faced by real-world organizations, large and small, trying to tackle cybersecurity challenges. They will therefore be equipped to assume leadership positions to solve those problems upon graduation.
Project 2018 - 2023
Partners: Skyscanner, The Data Lab, Skyscanner Ltd, University of Glasgow, Amazon Web Services (Not UK), Amazon Web Services, Inc., Widex A/S (International), J.P. Morgan, Widex (Denmark), JP Morgan Chase
Funder: UK Research and Innovation. Project Code: EP/R018634/1. Funder Contribution: 3,078,240 GBP
Progress in sensing, computational power, storage and analytic tools has given us access to enormous amounts of complex data, which can inform us of better ways to manage our cities, run our companies or develop new medicines. However, the 'elephant in the room' is that when we act on that data we change the world, potentially invalidating the older data. Similarly, when monitoring living cities or companies, we are not able to run clean experiments on them: we get data which is affected by the way they are run today, which limits our ability to model these complex systems. We need ways to run ongoing experiments on such complex systems. We also need to support human interactions with large and complex data sets. In this project we will look at the overlap between the challenge someone faces when coping with all the choices associated with booking a flight for a weekend away, and an expert running complex experiments in a laboratory. The project will test the core ideas in a number of areas, including personalisation of hearing aids, analysis of cancer data, and adapting the computing resources for a major bank.
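The summary does not name specific algorithms, but one common way to "run ongoing experiments" on a live system while still acting on what has already been learned is a Thompson-sampling bandit. The sketch below, with invented options and success rates, only illustrates that general idea and is not the project's method.

```python
# Illustrative Thompson-sampling bandit: keep experimenting on a live system
# while exploiting what has been learned so far. Arm names and success
# probabilities are invented for this sketch.
import random

TRUE_RATES = {"option_a": 0.05, "option_b": 0.08, "option_c": 0.04}   # hidden from the learner
beliefs = {arm: {"alpha": 1, "beta": 1} for arm in TRUE_RATES}         # Beta(1, 1) priors


def choose_arm():
    """Sample a plausible success rate for each arm and pick the best sample."""
    samples = {arm: random.betavariate(b["alpha"], b["beta"]) for arm, b in beliefs.items()}
    return max(samples, key=samples.get)


def update(arm, success):
    """Update the chosen arm's Beta posterior with the observed outcome."""
    beliefs[arm]["alpha" if success else "beta"] += 1


random.seed(0)
for _ in range(5000):
    arm = choose_arm()
    update(arm, random.random() < TRUE_RATES[arm])

for arm, b in beliefs.items():
    pulls = b["alpha"] + b["beta"] - 2
    print(f"{arm}: pulled {pulls} times, posterior mean {b['alpha'] / (pulls + 2):.3f}")
```

The point of the sketch is the feedback loop the abstract describes: every decision changes the data the system subsequently collects, so exploration has to be built into the decision rule rather than bolted on as a separate, one-off experiment.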
Project 2021 - 2024
Partners: Cyberselves Universal Limited, Connected Places Catapult, Amazon Web Services, Inc., Amazon Web Services (Not UK), Guidance Automation Ltd, BAE Systems (United Kingdom), Consequential Robots, Scoutek Ltd, Shadow Robot Company Ltd, Sheffield Childrens NHS Foundation Trust, KCL, Bloc Digital, The Shadow Robot Company, D-RisQ Ltd, ClearSy, BT Group (United Kingdom), Consequential Robotics Ltd, BAE Systems (Sweden), British Telecommunications plc, BAE Systems (UK)
Funder: UK Research and Innovation. Project Code: EP/V026801/2. Funder Contribution: 2,621,150 GBP
Autonomous systems promise to improve our lives; driverless trains and robotic cleaners are examples of autonomous systems that are already among us and work well within confined environments. It is time we work to ensure developers can design trustworthy autonomous systems for dynamic environments and provide evidence of their trustworthiness. Due to the complexity of autonomous systems, which typically involve AI components, low-level hardware control, and sophisticated interactions with humans and an uncertain environment, evidence of any nature requires effort from a variety of disciplines. To tackle this challenge, we have gathered a consortium of experts on AI, robotics, human-computer interaction, systems and software engineering, and testing. Together, we will establish the foundations and techniques for verification of properties of autonomous systems to inform designs, provide evidence of key properties, and guide monitoring after deployment (a toy sketch of such a monitor follows this summary). Currently, verifiability is hampered by several issues: difficulties in understanding how evidence provided by techniques that focus on individual aspects of a system (control engineering, AI, or human interaction, for example) composes to provide evidence for the system as a whole; difficulties of communication between stakeholders that use different languages and practices in their disciplines; difficulties in dealing with advanced concepts in AI, control and hardware design, and software for critical systems; and others. As a consequence, autonomous systems are often developed using advanced engineering techniques but outdated approaches to verification. We propose a creative programme of work that will enable fundamental changes to the current state of the art and of practice. We will define a mathematical framework that enables a common understanding of the diverse practices and concepts involved in verification of autonomy. Our framework will provide the mathematical underpinning, required by any engineering effort, to accommodate the notations used by the various disciplines. With this common understanding, we will justify translations between languages, compositions of artefacts (engineering models, tests, simulations, and so on) defined in different languages, and system-level inferences from verifications of components. With such a rich foundation and wealth of results, we will transform the state of practice. Currently, developers build systems from scratch, or reuse components without any evidence of their operational conditions.
Resulting systems are deployed under constrained conditions (reduced speed or a contained environment, for example) or offered for deployment at the user's own risk. Instead, we envisage the future availability of a store of verified autonomous systems and components. In such a future, users will find in the store not just system implementations but also evidence of their operational conditions and expected behaviour (engineering models, mathematical results, tests, and so on). When a developer checks in a product, the store will require all these artefacts, described in well-understood languages, and will automatically verify the evidence of trustworthiness. Developers will also be able to check in components for other developers; equally, these will be accompanied by the evidence required to permit confidence in their use. In this changed world, users will buy applications with clear guarantees of their operational requirements and profile. Users will also be able to ask for verification of adequacy for customised platforms and environments, for example. Verification is no longer an issue. Working with the EPSRC TAS Hub and other nodes, and our extensive range of academic and industrial partners, we will collaborate to ensure that the notations, verification techniques, and properties that we consider contribute to our common agenda of bringing autonomy to our everyday lives.
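The programme concerns mathematical foundations rather than any single tool, so the following is only an illustrative sketch of the "monitoring after deployment" idea referenced above: a runtime monitor that checks an invented safety property (a speed limit near obstacles) over a stream of observed states. The property, thresholds and state fields are hypothetical, not taken from the project.

```python
# Illustrative runtime monitor for an autonomous system: check a simple
# safety property on every observed state. The property, thresholds and
# state fields are invented for this sketch.
from dataclasses import dataclass


@dataclass
class State:
    speed_mps: float          # current speed in metres per second
    obstacle_dist_m: float    # distance to the nearest detected obstacle


def safe(state: State) -> bool:
    """Property: within 5 m of an obstacle, speed must stay below 1 m/s."""
    return state.obstacle_dist_m >= 5.0 or state.speed_mps < 1.0


def monitor(trace):
    """Return the index of the first violating state, or None if the trace is safe."""
    for i, state in enumerate(trace):
        if not safe(state):
            return i
    return None


trace = [State(0.8, 12.0), State(1.5, 9.0), State(1.2, 4.0), State(0.5, 3.0)]
print("first violation at step:", monitor(trace))   # -> 2 (1.2 m/s within 4.0 m)
```

In the store-of-verified-components vision sketched in the summary, such monitors would be one of several artefacts (alongside models, proofs and tests) checked in with a component as machine-checkable evidence of its operational conditions.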
