Powered by OpenAIRE graph

Ocado Technology

3 Projects
  • Funder: UK Research and Innovation Project Code: EP/V008102/1
    Funder Contribution: 1,718,900 GBP

    To be really useful, robots need to interact with objects in the world. The current inability of robots to grasp diverse objects with efficiency and reliability severely limits their range of application. Agriculture, mining and environmental clean-up are just three examples where - unlike a factory - the items to be handled could have a huge variety of shapes and appearances, need to be identified amongst clutter, and need to be grasped firmly for transport while avoiding damage. Secure grasp of unknown objects amongst clutter remains an unsolved problem for robotics, despite improvements in 3D sensing and reconstruction, in manipulator sophistication and the recent use of large-scale machine learning. This project proposes a new approach inspired by the high competence exhibited by ants when performing the closely equivalent task of collecting and manipulating diverse food items. Ants have relatively simple, robot-like 'grippers' (their mouth-parts, called 'mandibles'), limited sensing (mostly tactile, using their antennae) and tiny brains. Yet they are able to pick up and carry a wide diversity of food items, from seeds to other insect prey, which can vary enormously in shape, size, rigidity and manoeuvrability. They can quickly choose between multiple items and find an effective position to make their grasp, readjusting if necessary. Replicating even part of this competence on robots would be a significant advance. Grasping thus makes an ideal target for applying the biorobotic methods that my group has previously used with substantial success to understand and mimic insect navigation behaviours on robots.

    How does an ant pick up an object? The first part of this project will be to set up the methods required to observe and analyse in detail the behaviour of ants interacting with objects. At the same time we will start to build both simulated and real robot systems that allow us to imitate the actions of an ant as it positions its body, head and mouth to make a grasp, using an omnidirectional robot base with an arm and gripper. We will also examine and imitate the sensory systems used by the ant to determine the position, shape and size of the object before making a grasp.

    What happens in the ant's brain when it picks up an object? The second part will explore what algorithms insect brains need to compute to be able to make efficient and effective grasping decisions. Grasping is a task that contains in miniature many key issues in robot intelligence. It involves tight coupling of physical, perceptual and control systems. It involves a hierarchy of control decisions (whether to grasp, how to position the body and actuators, precise contact, dealing with uncertainty, detecting failure). It requires fusion of sensory information and transformation into the action state space, and involves prediction, planning and adaptation. We aim to understand how insects solve these problems as a route to efficient and effective solutions for robotics.

    Can a robot perform as well as an ant? The final part will test the systems we have developed in real-world tasks. The first task will be an object-clearing task, which will also allow benchmarking of the developed system against existing research. The second task will be based on a pressing problem in environmental clean-up: detection and removal of small plastic items from amongst shoreline rocks and gravel. This novel area of research promises significant pay-off from translating biological understanding into technical advance because it addresses an important unsolved challenge for which the ant is an ideal animal model.

  • Funder: UK Research and Innovation Project Code: EP/V026518/1
    Funder Contribution: 3,315,000 GBP

    'Autonomous systems' are machines with some form of decision-making ability, which allows them to act independently from a human controller. This kind of technology is already all around us, from traction control systems in cars, to the helpful assistants in mobile phones and computers (Siri, Alexa, Cortana). Some of these systems have more autonomy than others, meaning that some are very predictable and will only react in the way they are initially set up, whereas others have more freedom and can learn and react in ways that go beyond their initial setup. This can make them more useful, but also less predictable. Some autonomous systems have the potential to change what they do, and we call this 'evolving functionality'. This means that a system designed to do a certain task in a certain way may 'evolve' over time either to do the same task a different way, or to do a different task, all without a human controller telling it what to do. These kinds of systems are being developed because they are potentially very useful, with a wide range of possible applications ranging from minimal down-time manufacturing through to emergency response and robotic surgery. The ability to evolve in functionality offers the potential for autonomous systems to move from conducting well-defined tasks in predictable situations, to undertaking complex tasks in changing real-world environments.

    However, systems that can evolve in function lead to legitimate concerns about safety, responsibility and trust. We learn to trust technology because it is reliable, and when a technology is not reliable, we discard it because it cannot be trusted to function properly. But it may be difficult to learn to trust technology whose function is changing. We might also ask important questions about how functional evolutions are monitored, tested and regulated for safety in appropriate ways. For example, just because a robot with the ability to adapt to handle different shaped objects passes safety testing in a warehouse does not mean that it will necessarily be safe if it is used to do a similar task in a surgical setting. It is also unclear who, if anyone, bears the responsibility for the outcome of functional evolution - whether positive or negative.

    This research seeks to explore and address these issues, by asking how we can, or should, place trust in autonomous systems with evolving functionality. Our approach is to use three evolving technologies - swarm systems, soft robotics and unmanned air vehicles - which operate in fundamentally different ways, to allow our findings to be used across a wide range of different application areas. We will study these systems in real time to explore both how these systems are developed and how features can be built into the design process to increase trustworthiness, termed Design-for-Trustworthiness. This will support the development of autonomous systems with the ability to adapt, evolve and improve, but with the reassurance that these systems have been developed with methods that ensure they are safe, reliable, and trustworthy.

  • Funder: UK Research and Innovation Project Code: EP/V026747/1
    Funder Contribution: 3,063,680 GBP

    Imagine a future where autonomous systems are widely available to improve our lives. In this future, autonomous robots unobtrusively maintain the infrastructure of our cities, and support people in living fulfilled independent lives. In this future, autonomous software reliably diagnoses disease at early stages, and dependably manages our road traffic to maximise flow and minimise environmental impact. Before this vision becomes reality, several major limitations of current autonomous systems need to be addressed. Key among these limitations is their reduced resilience: today's autonomous systems cannot avoid, withstand, recover from, adapt to, and evolve to handle the uncertainty, change, faults, failure, adversity, and other disruptions present in such applications. Recent and forthcoming technological advances will provide autonomous systems with many of the sensors, actuators and other functional building blocks required to achieve the desired resilience levels, but this is not enough. To be resilient and trustworthy in these important applications, future autonomous systems will also need to use these building blocks effectively, so that they achieve complex technical requirements without violating our social, legal, ethical, empathy and cultural (SLEEC) rules and norms. Additionally, they will need to provide us with compelling evidence that the decisions and actions supporting their resilience satisfy both technical and SLEEC-compliance goals.

    To address these challenging needs, our project will develop a comprehensive toolbox of mathematically based notations and models, SLEEC-compliant resilience-enhancing methods, and systematic approaches for developing, deploying, optimising, and assuring highly resilient autonomous systems and systems of systems. To this end, we will capture the multidisciplinary nature of the social and technical aspects of the environment in which autonomous systems operate - and of the systems themselves - via mathematical models. For that, we have a team of Computer Scientists, Engineers, Psychologists, Philosophers, Lawyers, and Mathematicians, with an extensive track record of delivering research in all areas of the project. Working with such a mathematical model, autonomous systems will determine which resilience-enhancing actions are feasible, meet technical requirements, and are compliant with the relevant SLEEC rules and norms. Like humans, our autonomous systems will be able to reduce uncertainty, and to predict, detect and respond to change, faults, failures and adversity, proactively and efficiently. Like humans, if needed, our autonomous systems will share knowledge and services with humans and other autonomous agents. Like humans, if needed, our autonomous systems will cooperate with one another and with humans, and will proactively seek assistance from experts.

    Our work will deliver a step change in developing resilient autonomous systems and systems of systems. Developers will have notations and guidance to specify the socio-technical norms and rules applicable to the operational context of their autonomous systems, and techniques to design resilient autonomous systems that are trustworthy and compliant with these norms and rules. Additionally, developers will have guidance to build autonomous systems that can tolerate disruption, making the system usable in a larger set of circumstances. Finally, they will have techniques to develop resilient autonomous systems that can share information and services with peer systems and humans, and methods for providing evidence of the resilience of their systems. In such a context, autonomous systems and systems of systems will be highly resilient and trustworthy.

