
ClearSy

4 Projects
  • Funder: UK Research and Innovation
    Project Code: EP/S001190/1
    Funder Contribution: 562,549 GBP

    This research concerns assuring the safety of autonomous robots. A robot is a machine capable of performing a complex sequence of steps in order to achieve some goal. Autonomous robots act independently and do not require the direct intervention of a human operator. For example, autonomous vehicles are cars, lorries, or other road vehicles that are able to drive themselves to a destination without the need for a human driver. Another example is robot carers, which can assist elderly or disabled people with everyday tasks, such as retrieving items, cleaning, and playing games. Autonomous robots therefore present a number of exciting future opportunities for overcoming societal challenges.

    Robots are computer systems, and the software that runs on them is very complex, far more so than that of a typical desktop, laptop, or mobile phone. Such computers have very simple inputs, like a keyboard, mouse, or touchscreen. Robots, however, are "cyber-physical systems": they are computational (cyber) devices, but they also interact with their physical environment. They have both sensors, which allow them to "see", "hear", and "feel" the world, and actuators, which allow them to manipulate objects in the world. For example, a care robot may have arms that it can move and wheels to move around with. The software on a robot therefore has to monitor its environment constantly and respond to changes quickly, appropriately, and, most importantly, safely. If we cannot guarantee that a robot carries out its tasks safely, we cannot risk using it, as human injury or even death could result. A recent example concerns a Tesla Model S automated car that was unable to see a large white lorry crossing its path and ploughed into the side of it, killing its driver. Such tragic accidents reveal why safety is of utmost concern.

    Our research will employ mathematical and logical techniques in an attempt to demonstrate that a robot is safe to operate in its target environment. We will employ a document called a "safety case" that contains a credible and convincing safety argument. This argument must, of course, be supported by evidence, and our technique will provide this through "model-based design", where computerised models of individual system parts are created as virtual prototypes. Such models can be described using sophisticated mathematics, such as algebra, differential equations, and probabilistic models. Probability, in particular, is very important, since robots need to plan for possible uncertainty in their environment, such as a human being in an unexpected place. Mathematics allows us to be rigorous, considering a large range of possible scenarios that would be very expensive to test in the real world. However, it is also difficult for a human to do the necessary mathematics manually. We will therefore use software called an "automated theorem prover" to try to show that each of the robot models behaves correctly and safely. This will include new techniques specifically for reasoning about cyber-physical systems. We will apply our new techniques to a number of industrial problems drawn from the robotics companies that we will partner with. This will allow us to provide guidance to them in ensuring their systems are safe. Our hope is that ultimately our project will ensure that robots can be safely introduced into our society, and thus open up a host of exciting future business opportunities.
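
    The probabilistic reasoning mentioned above can be made concrete with a small sketch. The following Python program (our own toy illustration, not the project's tooling; every parameter and name is invented) uses Monte Carlo simulation to estimate a probabilistic safety property for a braking controller whose distance sensor is noisy:

        import random

        def one_run(threshold_m: float = 10.0, stop_dist_m: float = 6.0,
                    noise_std_m: float = 2.0, speed_mps: float = 5.0,
                    dt_s: float = 0.1) -> bool:
            """Simulate one approach to an obstacle starting 20 m ahead.

            The robot brakes the first time its *measured* distance drops
            below the threshold; the run is safe iff the *true* distance
            remaining at that moment still covers the stopping distance.
            """
            true_dist = 20.0
            while true_dist > 0.0:
                measured = true_dist + random.gauss(0.0, noise_std_m)
                if measured <= threshold_m:
                    return true_dist >= stop_dist_m
                true_dist -= speed_mps * dt_s
            return False  # never braked: collision

        def estimate_p_safe(trials: int = 100_000) -> float:
            return sum(one_run() for _ in range(trials)) / trials

        if __name__ == "__main__":
            # With a 4 m margin and 2 m of sensor noise, some runs brake
            # too late: exactly the kind of quantified risk that a safety
            # argument must address.
            print(f"Estimated P(safe stop) = {estimate_p_safe():.4f}")

    A safety case could then argue over such estimates, for instance that the failure probability stays below an acceptable threshold across a range of speeds and noise levels; formal tools such as theorem provers replace the sampling with exhaustive or symbolic analysis.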

  • Funder: UK Research and Innovation
    Project Code: EP/V026801/2
    Funder Contribution: 2,621,150 GBP

    Autonomous systems promise to improve our lives; driverless trains and robotic cleaners are examples of autonomous systems that are already among us and work well within confined environments. It is time we work to ensure developers can design trustworthy autonomous systems for dynamic environments and provide evidence of their trustworthiness. Due to the complexity of autonomous systems, typically involving AI components, low-level hardware control, and sophisticated interactions with humans and an uncertain environment, evidence of any nature requires efforts from a variety of disciplines. To tackle this challenge, we have gathered a consortium of experts on AI, robotics, human-computer interaction, systems and software engineering, and testing. Together, we will establish the foundations and techniques for verification of properties of autonomous systems to inform designs, provide evidence of key properties, and guide monitoring after deployment.

    Currently, verifiability is hampered by several issues: difficulties in understanding how evidence provided by techniques that focus on individual aspects of a system (control engineering, AI, or human interaction, for example) composes to provide evidence for the system as a whole; difficulties of communication between stakeholders who use different languages and practices in their disciplines; difficulties in dealing with advanced concepts in AI, control and hardware design, and software for critical systems; and others. As a consequence, autonomous systems are often developed using advanced engineering techniques but outdated approaches to verification.

    We propose a creative programme of work that will enable fundamental changes to the current state of the art and of practice. We will define a mathematical framework that enables a common understanding of the diverse practices and concepts involved in the verification of autonomy. Our framework will provide the mathematical underpinning, required by any engineering effort, to accommodate the notations used by the various disciplines. With this common understanding, we will justify translations between languages, compositions of artefacts (engineering models, tests, simulations, and so on) defined in different languages, and system-level inferences from verifications of components.

    With such a rich foundation and wealth of results, we will transform the state of practice. Currently, developers build systems from scratch, or reuse components without any evidence of their operational conditions. The resulting systems are deployed in constrained conditions (reduced speed or a contained environment, for example) or offered for deployment at the user's own risk. Instead, we envisage the future availability of a store of verified autonomous systems and components. In such a store, users will find not just system implementations, but also evidence of their operational conditions and expected behaviour (engineering models, mathematical results, tests, and so on). When a developer checks in a product, the store will require all these artefacts, described in well-understood languages, and will automatically verify the evidence of trustworthiness. Developers will also be able to check in components for use by other developers; equally, these will be accompanied by the evidence required to permit confidence in their use. In this changed world, users will buy applications with clear guarantees of their operational requirements and profile. Users will also be able to ask for verification of adequacy for customised platforms and environments, for example. Verification will no longer be an issue.

    Working with the EPSRC TAS Hub and other nodes, and our extensive range of academic and industrial partners, we will collaborate to ensure that the notations, verification techniques, and properties that we consider contribute to our common agenda of bringing autonomy to our everyday lives.
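
    The idea of composing component-level evidence into a system-level claim can be sketched concretely. Below is a deliberately naive Python illustration of assume-guarantee reasoning (our own sketch, with invented component names, and with properties reduced to atomic labels rather than the rich logics such a framework would actually need): each component records the assumptions under which its verification evidence holds and the guarantees that evidence establishes, and a composition is accepted only if every assumption is discharged by the environment or by a peer's guarantee:

        from dataclasses import dataclass, field

        @dataclass
        class Component:
            name: str
            assumes: set[str] = field(default_factory=set)     # properties relied on
            guarantees: set[str] = field(default_factory=set)  # properties established

        def compose(components: list[Component], environment: set[str]) -> set[str]:
            """Return the system-level guarantees, or raise if some component's
            assumptions are discharged neither by the environment nor by its
            peers' guarantees."""
            for c in components:
                peers: set[str] = set().union(
                    *(d.guarantees for d in components if d is not c))
                missing = c.assumes - environment - peers
                if missing:
                    raise ValueError(f"{c.name}: undischarged assumptions {missing}")
            return set().union(*(c.guarantees for c in components))

        localiser = Component("localiser", assumes={"lidar_in_range"},
                              guarantees={"pose_accurate"})
        planner = Component("planner", assumes={"pose_accurate"},
                            guarantees={"path_collision_free"})

        print(compose([localiser, planner], environment={"lidar_in_range"}))
        # {'pose_accurate', 'path_collision_free'} (set order may vary)

    Real frameworks must also rule out circular discharges between peers, which this naive check does not; handling such subtleties soundly is precisely what a mathematical foundation is for.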

  • Funder: UK Research and Innovation
    Project Code: EP/V026801/1
    Funder Contribution: 2,923,650 GBP

    The abstract for this grant is identical to that of EP/V026801/2 above.

  • Funder: UK Research and Innovation
    Project Code: EP/V026747/1
    Funder Contribution: 3,063,680 GBP

    Imagine a future where autonomous systems are widely available to improve our lives. In this future, autonomous robots unobtrusively maintain the infrastructure of our cities and support people in living fulfilled, independent lives. In this future, autonomous software reliably diagnoses disease at early stages and dependably manages our road traffic to maximise flow and minimise environmental impact. Before this vision becomes reality, several major limitations of current autonomous systems need to be addressed. Key among these limitations is their reduced resilience: today's autonomous systems cannot avoid, withstand, recover from, adapt to, and evolve to handle the uncertainty, change, faults, failures, adversity, and other disruptions present in such applications.

    Recent and forthcoming technological advances will provide autonomous systems with many of the sensors, actuators, and other functional building blocks required to achieve the desired resilience levels, but this is not enough. To be resilient and trustworthy in these important applications, future autonomous systems will also need to use these building blocks effectively, so that they achieve complex technical requirements without violating our social, legal, ethical, empathetic, and cultural (SLEEC) rules and norms. Additionally, they will need to provide us with compelling evidence that the decisions and actions supporting their resilience satisfy both technical and SLEEC-compliance goals.

    To address these challenging needs, our project will develop a comprehensive toolbox of mathematically based notations and models, SLEEC-compliant resilience-enhancing methods, and systematic approaches for developing, deploying, optimising, and assuring highly resilient autonomous systems and systems of systems. To this end, we will capture the multidisciplinary nature of the social and technical aspects of the environment in which autonomous systems operate, and of the systems themselves, via mathematical models. For that, we have a team of Computer Scientists, Engineers, Psychologists, Philosophers, Lawyers, and Mathematicians, with an extensive track record of delivering research in all areas of the project. Working with such a mathematical model, autonomous systems will determine which resilience-enhancing actions are feasible, meet technical requirements, and are compliant with the relevant SLEEC rules and norms. Like humans, our autonomous systems will be able to reduce uncertainty, and to predict, detect, and respond to change, faults, failures, and adversity, proactively and efficiently. Like humans, if needed, our autonomous systems will share knowledge and services with humans and other autonomous agents. Like humans, if needed, our autonomous systems will cooperate with one another and with humans, and will proactively seek assistance from experts.

    Our work will deliver a step change in developing resilient autonomous systems and systems of systems. Developers will have notations and guidance to specify the socio-technical norms and rules applicable to the operational context of their autonomous systems, and techniques to design resilient autonomous systems that are trustworthy and compliant with these norms and rules. Additionally, developers will have guidance to build autonomous systems that can tolerate disruption, making the systems usable in a larger set of circumstances. Finally, they will have techniques to develop resilient autonomous systems that can share information and services with peer systems and humans, and methods for providing evidence of the resilience of their systems. In such a context, autonomous systems and systems of systems will be highly resilient and trustworthy.
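
    As a toy illustration of the kind of check described above, consider the following Python sketch (our own illustration; the action names, attributes, and rules are all invented, and real SLEEC rules are far richer than boolean flags). Candidate resilience-enhancing actions are kept only if they both meet a technical adequacy requirement and violate none of the normative rules:

        from dataclasses import dataclass
        from typing import Callable

        @dataclass(frozen=True)
        class Action:
            name: str
            recovery_prob: float  # estimated chance the action restores service
            records_video: bool   # whether it films bystanders
            blocks_exit: bool     # whether it obstructs an escape route

        # Technical requirement: the action must be likely enough to work.
        def technically_adequate(a: Action) -> bool:
            return a.recovery_prob >= 0.8

        # Normative (SLEEC-style) rules, each returning True when *violated*.
        RULES: list[tuple[str, Callable[[Action], bool]]] = [
            ("privacy: do not record bystanders", lambda a: a.records_video),
            ("safety: never block an exit",       lambda a: a.blocks_exit),
        ]

        def permissible(a: Action) -> bool:
            return technically_adequate(a) and not any(v(a) for _, v in RULES)

        candidates = [
            Action("reroute_via_lobby", 0.90, records_video=False, blocks_exit=False),
            Action("park_in_doorway",   0.95, records_video=False, blocks_exit=True),
            Action("film_and_wait",     0.85, records_video=True,  blocks_exit=False),
        ]
        print([a.name for a in candidates if permissible(a)])
        # ['reroute_via_lobby']

    Only the action that is both technically adequate and rule-compliant survives; the project's contribution is, in part, the mathematically grounded notations and methods that make such checks expressible and verifiable for real systems.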

