React AI

2 Projects
  • Funder: UK Research and Innovation
    Project Code: EP/V006673/1
    Funder Contribution: 285,285 GBP

    Robots have the potential to deliver tremendous benefits to our society by assisting us in all aspects of our everyday life. For example, they could increase the quality of life of elderly people by allowing them to stay at home on their own for longer, by preparing meals, cleaning the house, and assisting them to get dressed. However, robots, and legged robots in particular, are also very complex machines that are highly prone to damage when they operate outside the well-controlled environments of factories. Moreover, because of this complexity and the large variety of environments they might encounter, it is impossible for engineers to anticipate all the damage situations a robot may face and to program its reactions accordingly.

    A promising approach to overcoming this difficulty is to enable robots to learn on their own how to face and respond to the different situations they encounter. This approach shares similarities with the way humans and animals react in analogous circumstances. For instance, a child with a sprained ankle learns on their own how to walk on only one foot in order to minimise the pain. The objective of this research project is to develop the algorithmic foundations that allow robots to do the same.

    In previous work, we developed creative learning algorithms that enable (physical) legged robots to overcome the loss of a leg by learning how to walk forward in less than two minutes. However, in that work, the algorithms were configured to solve a single task (i.e., walking forward), which does not leverage the versatility of legged robots and their capability, for instance, to walk in every direction, to jump, and to crawl. The ambition of this project is to extend the adaptation capabilities of our algorithms to the entire range of the robots' abilities. This will be achieved by employing recent advances in hierarchical reinforcement learning to transfer knowledge across the robots' different skills during the adaptation process. The combination of these hierarchical skill repertoires with our online-adaptation algorithms will enable robots to quickly transfer the result of their adaptation on one skill to the other skills. For instance, after finding a new way to walk forward, a robot might have discovered that it cannot rely on its front-left leg. With the proposed project, this information will automatically be used by the robot to speed up the adaptation process when it later tries, for instance, to learn to turn while also avoiding the front-left leg. In addition to damage recovery, the same algorithm will enable robots to adapt to changes in their environment, for instance by changing their behaviour depending on whether they are walking on a flat concrete floor or on sloping grassy ground.

    Increasing the adaptation capabilities of versatile robots aims, in the long term, to enable robots to substitute for humans in the most dangerous tasks they have to perform. For instance, thanks to robots with improved adaptation abilities, it would be possible to send robots to search for survivors after an earthquake or to operate in a nuclear plant after a disaster. Improving the ability of robots to overcome unknown situations is one of the key requirements for them to become a significant part of our daily life.

    This research will be undertaken at Imperial College London, in the Department of Computing. The project will benefit from state-of-the-art robotic facilities, including a quadruped robot, a hexapod robot, and a motion-capture system, to develop and test a new generation of learning algorithms for resilient robots.
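
    To make the knowledge-transfer idea concrete, the sketch below (Python with NumPy) shows how adaptation on one skill could bias the search on another. It is a minimal illustration built on assumptions, not the project's actual algorithm: the repertoires are random stand-ins for behaviours pre-computed in simulation, real_performance is a hypothetical stand-in for a physical trial, and the shared per-leg reliability vector is a simplified proxy for the cross-skill transfer mechanism described above.

        import numpy as np

        rng = np.random.default_rng(0)
        N_LEGS = 6  # hexapod, as in the project's robotic facilities

        def make_repertoire(n_behaviours=200):
            # Behaviour descriptor: fraction of time each leg is used, in [0, 1].
            descriptors = rng.uniform(0.0, 1.0, size=(n_behaviours, N_LEGS))
            predicted = rng.uniform(0.5, 1.0, size=n_behaviours)  # simulated scores
            return descriptors, predicted

        def real_performance(descriptor, predicted, broken_leg):
            # Stand-in for a physical trial: the more a behaviour relies on the
            # broken leg, the further it falls short of its simulated prediction.
            return predicted * (1.0 - 0.9 * descriptor[broken_leg]) + rng.normal(0.0, 0.02)

        def adapt(repertoire, leg_reliability, broken_leg, n_trials=10):
            descriptors, predicted = repertoire
            tried, best = {}, (None, -np.inf)
            for _ in range(n_trials):
                # Discount the predicted score of behaviours that lean on legs
                # the robot currently believes to be unreliable.
                prior = descriptors @ np.log(leg_reliability + 1e-6)
                scores = predicted + 0.5 * prior
                if tried:
                    scores[list(tried)] = -np.inf  # never repeat a trial
                i = int(np.argmax(scores))
                perf = real_performance(descriptors[i], predicted[i], broken_leg)
                tried[i] = perf
                if perf > best[1]:
                    best = (i, perf)
                # Shared update: legs heavily used by behaviours that under-perform
                # lose credibility; this belief carries over to the next skill.
                shortfall = max(predicted[i] - perf, 0.0)
                leg_reliability = leg_reliability * np.exp(-shortfall * descriptors[i])
            return best, leg_reliability

        walk, turn = make_repertoire(), make_repertoire()
        reliability = np.ones(N_LEGS)  # shared belief, one value per leg
        broken = 0                     # the front-left leg fails

        (_, walk_perf), reliability = adapt(walk, reliability, broken)
        # The belief learned while fixing "walk forward" now steers the search
        # for "turn" away from the front-left leg from the very first trial.
        (_, turn_perf), reliability = adapt(turn, reliability, broken)
        print("inferred per-leg reliability:", np.round(reliability, 2))

    The design point the sketch captures is that the reliability estimate learned while re-adapting "walk forward" is reused when adapting "turn", so the second search avoids controllers that depend on the damaged leg from its very first trial.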

  • Funder: UK Research and Innovation
    Project Code: EP/S021795/1
    Funder Contribution: 5,114,490 GBP

    FARSCOPE-TU (Towards Ubiquity) will train a new generation of "T-shaped roboticists" in the priority area of Robotics and Autonomous Systems (RAS). T-shaping means graduates will combine the depth of individual PhD research experience with a broad awareness of the priority area, including technical tools and topics spanning multiple disciplines. Breadth will be enhanced by a strong understanding of the industrial and societal context in which future RAS will operate. These graduates will meet the need for future innovators in RAS, evidenced by industrial partner demand and growing research investment, to deliver potential UK global leadership in the RAS area. That need spans many applications and technologies, so FARSCOPE-TU adopts a broad and ambitious vision of RAS ubiquity, motivating the research challenge of making RAS that are significantly more interactive with their environments.

    The FARSCOPE-TU training experience has been carefully designed to support T-shaping by bringing in students from many disciplines and upskilling them through an integrated programme of individual research and cohort activities, which mix together throughout the four years of study. The FARSCOPE-TU research challenge necessitates multidisciplinary thinking, as the enabling technologies of computer science and engineering interface with questions of psychology, biology, policy, ethics, law, and more. Students from this diverse range of backgrounds will be recruited, with reskilling supported through fundamental training and peer learning at the outset. The first year will be organised as a formal programme of study, equivalent to a Masters degree. The remaining three years will focus on PhD research, punctuated by mandatory cohort-based training to refresh first-year content, all subject to annual progress monitoring. Topics will include responsible innovation, enterprise, public engagement, and industrial context.

    FARSCOPE-TU has formed partnerships with 19 organisations who share its vision, have helped co-create the training programme, and span technologies and applications that align with the CDT's broad interpretation of RAS. Partner engagement will be central to covering industrial context training. Partners and the FARSCOPE-TU team have also co-created a flexible programme of engagement mechanisms, designed to support a diverse set of partner sizes and interests, to allow collaborations to evolve, and to be responsive to potential new partners. The programme includes mentoring, mutual training by and for partners, collaboration on research and industry projects, sponsorship, and leveraged funding opportunities. Partners have committed £2.5M in leverage to support FARSCOPE-TU, including 15 studentships from the hosts and 12 sponsored places from industry.

    FARSCOPE-TU will promote equality, diversity and inclusion both internally and, since the vision includes robots interacting with society, in its research. For example, FARSCOPE-TU could consider how training-data bias would affect equality of interaction between humans and home-assistance robots. FARSCOPE-TU will instigate a high-profile Single Equality Scheme named "Inclusive Robotics" that combines operational initiatives, including explicit targets, with events and training linked to responsible innovation and human interaction.

    FARSCOPE-TU will deliver a joint PhD award, badged by partners the University of Bristol and the University of the West of England. The CDT will be run through their established Bristol Robotics Lab (BRL) partnership, which provides over 4,500 sqm of dedicated RAS laboratory space and a community of over 50 supervisors. BRL's existing FARSCOPE CDT provides the security of a strong track record, with 46 students recruited across four cohorts so far and an approved joint programme. FARSCOPE-TU builds on that experience with a revised first year to support a diverse intake and early partner engagement, enhanced contextual training, the new T-shape concept, and the wider ubiquity vision.
