
Liverpool Data Research Associates (LDRA)


4 Projects
  • Funder: UK Research and Innovation; Project Code: EP/F012535/1
    Funder Contribution: 27,531 GBP

    This proposal is a request for funding as partial support for holding an international workshop on software testing in the UK: the Testing: Academic & Industrial Conference - Practice And Research Techniques (TAIC PART 2007). The workshop will combine industrial and academic participation to strengthen and develop UK leadership in the area of software testing. This event builds upon previous, smaller workshops on testing held in the UK. Although it remains a workshop in character, the event's title includes the word 'conference' to allow for future growth. These events have steadily built a strong community of researchers and industrialists, and the time is now ripe for this event to mature into a larger, more ambitious one. Funding is sought from EPSRC to support the costs of the meeting. As a sign of their serious commitment to this venture, several of the industrial partners have already offered to support the event with a modest (but useful) level of sponsorship.

  • Funder: UK Research and Innovation; Project Code: EP/R025134/1
    Funder Contribution: 610,059 GBP

    Mobile and autonomous robots have an increasingly important role in industry and wider society; from driverless vehicles to home assistance, potential applications are numerous. The UK government has identified robotics as a key technology that will lead to future economic growth (tinyurl.com/q8bhcy7). It has recognised, however, that autonomous robots are complex and typically operate in ever-changing environments (tinyurl.com/o2u2ts7). How can we be confident that they perform useful functions, as required, and are safe?

    It is standard practice to use testing to check correctness and safety. Software-development practice for robotics typically includes testing within simulations, before robots are built, and then testing of the actual robots. Simulations have several benefits: we can test early, and test execution is cheaper and faster; for example, simulation does not require a robot to move physically. Testing with the real robots is still needed, however, since we cannot be sure that a simulation captures all the important aspects of the hardware and the environment.

    Currently, test generation is typically manual; this makes testing expensive and unreliable, and introduces delays. Manual test generation is error-prone and can lead to tests that produce the wrong verdict. If a test incorrectly states that the robot has a failure, developers have to investigate, at extra cost and time. If a test incorrectly states that the robot behaves as expected, a faulty system may be released. Without a systematic approach, tests may also specify infeasible environments; such tests cannot be used with the real robot. To make matters worse, manual test generation limits the number of tests produced. All this affects the cost and quality of robot software, and is in contrast with current practice in other safety-critical areas, like the highly regulated transport industry. Translation of technology, however, is not trivial. For example, the lack of a driver to correct mistakes or respond to unforeseen circumstances leads to a much larger set of working conditions for an autonomous vehicle. Another example is provided by probabilistic algorithms, which make robot behaviour nondeterministic, and so difficult to repeat in testing and more difficult to characterise as correct or not.

    We will address all these issues with novel automated test-generation techniques for mobile and autonomous robots. To use our techniques, a RoboTest tester constructs a model of the robot using a familiar notation already employed in the design of simulations and implementations. After that, instead of spending time designing simulation scenarios, the tester generates tests at the push of a button. With RoboTest, testing is cheaper, since it takes less time, and more effective, because the tester can run many more tests, especially in simulation. To execute the tests, the tester can choose from several simulators employing a variety of approaches to programming; test execution also follows at the push of a button, and yet another button translates simulation tests into deployment tests. The tester can then trace results from the deployment tests back to the simulation and the original model, and so is in a strong position to understand the reality gap between the simulation and the real world. The tester knows that the verdicts for the tests are correct, and understands what the testing achieves; for example, it can be guaranteed to find faults of an identified class. The tester can therefore answer the very difficult question: have we tested enough?

    In conclusion, RoboTest will move the testing of mobile and autonomous robots onto a sound footing, making testing more efficient and effective in terms of person effort, and so achieving longer-term reduced costs.
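    The push-button, model-based workflow described above (build a model of the robot once, then mechanically derive tests from it) can be illustrated with a small sketch. The state-machine model, event names, and transition-coverage criterion below are hypothetical illustrations, not RoboTest's actual notation or algorithms:

    ```python
    # Hypothetical sketch of model-based test generation: derive one test
    # (an event sequence) per transition of a toy robot state machine.
    # The model and event names are illustrative, not RoboTest's notation.
    from collections import deque

    # Toy behaviour model: state -> {event: next_state}
    MODEL = {
        "idle":     {"start": "moving"},
        "moving":   {"obstacle": "avoiding", "goal": "idle"},
        "avoiding": {"clear": "moving"},
    }

    def generate_tests(model, initial="idle"):
        """Breadth-first traversal producing one event sequence per
        transition, i.e. transition coverage of the model."""
        tests = []
        paths = {initial: []}          # shortest event path reaching each state
        queue = deque([initial])
        while queue:
            state = queue.popleft()
            for event, nxt in model[state].items():
                # A test that exercises this particular transition.
                tests.append(paths[state] + [event])
                if nxt not in paths:
                    paths[nxt] = paths[state] + [event]
                    queue.append(nxt)
        return tests

    for test in generate_tests(MODEL):
        print(test)
    ```

    Here each generated test is a sequence of events driving the model through one transition; in a real tool the same sequences would be translated into simulation scenarios and, later, into deployment tests, which is what makes the simulation-to-deployment traceability described above possible.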

  • Funder: UK Research and Innovation; Project Code: EP/R025134/2
    Funder Contribution: 575,876 GBP

    This project is a continuation of EP/R025134/1; the abstract is identical to the one above.

  • Funder: UK Research and Innovation; Project Code: EP/V026518/1
    Funder Contribution: 3,315,000 GBP

    'Autonomous systems' are machines with some form of decision-making ability, which allows them to act independently of a human controller. This kind of technology is already all around us, from traction-control systems in cars to the helpful assistants in mobile phones and computers (Siri, Alexa, Cortana). Some of these systems have more autonomy than others: some are very predictable and will only react in the way they are initially set up, whereas others have more freedom and can learn and react in ways that go beyond their initial setup. This can make them more useful, but also less predictable.

    Some autonomous systems have the potential to change what they do; we call this 'evolving functionality'. A system designed to do a certain task in a certain way may 'evolve' over time to either do the same task a different way or to do a different task, all without a human controller telling it what to do. These kinds of systems are being developed because they are potentially very useful, with possible applications ranging from minimal-downtime manufacturing through to emergency response and robotic surgery. The ability to evolve in functionality offers the potential for autonomous systems to move from conducting well-defined tasks in predictable situations to undertaking complex tasks in changing real-world environments.

    However, systems that can evolve in function raise legitimate concerns about safety, responsibility, and trust. We learn to trust technology because it is reliable, and when a technology is not reliable, we discard it because it cannot be trusted to function properly. It may be difficult, though, to learn to trust technology whose function is changing. We might also ask important questions about how functional evolutions are monitored, tested, and regulated for safety in appropriate ways. For example, just because a robot with the ability to adapt to handle different-shaped objects passes safety testing in a warehouse does not mean that it will necessarily be safe doing a similar task in a surgical setting. It is also unclear who, if anyone, bears the responsibility for the outcome of functional evolution, whether positive or negative.

    This research seeks to explore and address these issues by asking how we can, or should, place trust in autonomous systems with evolving functionality. Our approach is to use three evolving technologies - swarm systems, soft robotics, and unmanned air vehicles - which operate in fundamentally different ways, allowing our findings to be used across a wide range of application areas. We will study these systems in real time to explore both how they are developed and how features can be built into the design process to increase trustworthiness, termed Design-for-Trustworthiness. This will support the development of autonomous systems with the ability to adapt, evolve, and improve, with the reassurance that they have been developed with methods that ensure they are safe, reliable, and trustworthy.

