
Oracle (United States)

12 Projects
  • Funder: UK Research and Innovation Project Code: EP/G065802/1
    Funder Contribution: 12,610,100 GBP

    Horizon will tackle the challenge of harnessing the power of ubiquitous computing for the digital economy in a way that is acceptable to our society and increases the quality of life for all. This will involve establishing a world-leading and sustainable centre of excellence for research and knowledge transfer for the ubiquitous digital economy. Horizon will conduct a five-year programme of research into the key scientific challenges involved in the widespread adoption of ubiquitous computing; collaborate with users to create, demonstrate and study next-generation services; deliver a knowledge transfer programme that ensures that the results of our research are fully connected to the digital economy; train a new generation of researchers to meet the demands of industry for skilled interdisciplinary staff; engage with policy makers and the wider public in order to address societal concerns; and provide a focal point for international, national and regional research in this area.

    Horizon will exploit the distinctive nature of hub funding to develop a unique approach to this challenge. Our Collaborative Research Programme will be driven by the overarching concept of a lifelong contextual footprint: the idea that each of us, throughout our lifetimes, will lay down a digital trail that captures our patterns of interaction with digital services. Our research will explore the major infrastructural, human and business challenges associated with this concept, adopting a multidisciplinary approach that integrates insights from computer science, psychology, sociology, business, economics and the arts and humanities. We will collaborate with over 30 users from different sectors of the digital economy in order to create, deploy and study a series of next-generation services 'in the wild' so as to drive our underlying research.

    We will initially focus on the creative industries and transportation sectors, but subsequently extend our focus to additional sectors in partnership with other hubs and major initiatives. In parallel, our Transformation Programme will drive knowledge transfer and long-term economic impact through partnership management, public engagement, international outreach, incubation of new ventures, the transfer of people, and training for 24 associated PhD students funded by the University.

    Our team draws on leading groups at Nottingham spanning computer science, engineering, business, psychology and sociology, complemented by expertise at two spokes: distributed systems and communications at Cambridge, and mathematical modelling and advertising at Reading. A series of further mini-spokes will enable us to introduce other key individuals through hub fellowships. These multiple disciplines and partners will be brought together in a new centre at Nottingham, where they will be able to engage with a critical-mass cohort of research staff and students to explore innovative and challenging new projects. The Hub will be directed by Professor Derek McAuley, who brings extensive experience of working in academia, directing major industrial research laboratories, and launching spin-out companies. He will be supported by Professor Tom Rodden, an EPSRC Senior Research Fellow who previously directed the Equator IRC. The net result will be a unique partnership between EPSRC, industry, the public, and the University, with the latter committing £16M of its own funds to match the £12M requested from EPSRC.

  • Funder: UK Research and Innovation Project Code: EP/V007165/1
    Funder Contribution: 209,756 GBP

    Most modern computer applications depend in one way or another on computations performed by server applications on the internet. More and more of these server applications are now built as so-called microservices, which let developers gradually update or fix parts of a larger application independently, and which have therefore become popular. Many of these microservices avoid certain types of concurrency issue by design. Unfortunately, they still suffer from other kinds of concurrency issue, for example when multiple online customers try to reserve the same seats at the same time. For software engineers, it is hard to test all possible concurrent interactions. In practice, this means that only simple concurrency issues are reliably detected during testing. Complex issues, however, can easily slip through into deployed server applications, which then handle client requests incorrectly. One example of such a concurrency issue appeared at Nasdaq when Facebook stock was traded for the first time, resulting in the loss of millions of dollars.

    Our goal is to develop techniques that detect concurrency issues automatically at run time, circumvent them, and enable developers to fix them using the detailed information gathered during detection. Researchers have shown that one can detect and avoid such issues, for instance by changing the order in which client requests are processed. In practice, however, current techniques slow server applications down significantly, which makes them too costly to use. Our aim is to dynamically balance the need for accurate information against the slowdown incurred in gathering it. We conjecture that we can obtain most of the practical benefits while only rarely tracking precise details of how program code executes.

    In addition to automatically preventing concurrency issues from causing problems, we will use the gathered information to provide feedback to developers so that they can fix the underlying issue in their software. Overall, then, the goal of this research project is to make server applications, and specifically microservices, more robust and resilient to software bugs that are hard to test for and therefore typically remain undiscovered until they cause major issues for customers or companies. Our work will result in adaptive detection techniques that automatically trade off accuracy against run-time overhead, making them usable in practice. Furthermore, the detection techniques will provide actionable input to software developers, so that each concurrency issue can be fixed and therefore reliably prevented in the future.

    To evaluate this work, we will collect various types of concurrency issue and make them openly available. This collection will be based on issues from industrial systems and on theoretical scenarios for highly complex bugs. We include the theoretical scenarios because such complex bugs are hard to diagnose and test for: they likely remain undiagnosed and undocumented in practice, yet have the potential to cause major disruption. Finally, we will build and evaluate our proposed techniques on a system designed for concurrency research. The system uses the GraalVM technology of Oracle Labs, which allows us to prototype at the level of state-of-the-art systems while keeping the development effort manageable for a small team.
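The seat-reservation scenario above is a classic check-then-act race. As a minimal illustration (hypothetical code, not from the project), the sketch below deterministically simulates the unlucky interleaving in which two requests both check a seat before either books it, producing a double booking, and then shows how making check-and-book a single atomic step under a lock prevents it:

```python
import threading

class SeatStore:
    """Naive store: checking and booking are separate, non-atomic steps."""
    def __init__(self):
        self.booked = set()

    def is_free(self, seat):
        return seat not in self.booked

    def book(self, seat):
        self.booked.add(seat)

def racy_interleaving(store, seat):
    """Simulate the unlucky schedule: both requests check before either books."""
    a_sees_free = store.is_free(seat)  # request A: seat looks free
    b_sees_free = store.is_free(seat)  # request B: seat also looks free
    if a_sees_free:
        store.book(seat)               # A books
    if b_sees_free:
        store.book(seat)               # B books too: double booking
    return a_sees_free and b_sees_free

class SafeSeatStore:
    """Check-and-book is one atomic step under a lock."""
    def __init__(self):
        self.booked = set()
        self._lock = threading.Lock()

    def try_book(self, seat):
        with self._lock:
            if seat in self.booked:
                return False
            self.booked.add(seat)
            return True

store = SeatStore()
print(racy_interleaving(store, "12A"))   # True: both requests "won" the seat

safe = SafeSeatStore()
results = [safe.try_book("12A"), safe.try_book("12A")]
print(results)                            # [True, False]: only one request wins
```

Because the racy schedule occurs only under particular timings, tests that simply fire concurrent requests will usually miss it, which is exactly why the project targets run-time detection rather than testing alone.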

  • Funder: UK Research and Innovation Project Code: EP/K008730/1
    Funder Contribution: 4,135,050 GBP

    The last decade has seen a significant shift in the way computers are designed. Up to the turn of the millennium advances in performance were achieved by making a single processor, which could execute a single program at a time, go faster, usually by increasing the frequency of its clock signal. But shortly after the turn of the millennium it became clear that this approach was running into a brick wall - the faster clock meant the processor got hotter, and the amount of heat that can be dissipated in a silicon chip before it fails is limited; that limit was approaching rapidly!

    Quite suddenly several high-profile projects were cancelled and the industry found a new approach to higher performance. Instead of making one processor go ever faster, the number of processor cores could be increased. Multi-core processors had arrived: first dual core, then quad-core, and so on. As microchip manufacturing capability continues to increase the number of transistors that can be integrated on a single chip, the number of cores continues to rise, and now multi-core is giving way to many-core systems - processors with 10s of cores, running 10s of programs at the same time.

    This all seems fine at the hardware level - more transistors means more cores - but this change from one to many programs running at the same time has caused many difficulties for the programmers who develop applications for these new systems. Writing a program that runs on a single core is much better understood than writing a program that is actually 10s of programs running at the same time, interacting with each other in complex and hard-to-predict ways. To make life for the programmer even harder, with many-core systems it is often best not to make all the cores identical; instead, heterogeneous many-core systems offer the promise of much higher efficiency with specialised cores handling specialised parts of the overall program, but this is even harder for the programmer to manage.

    The Programme of projects we plan to undertake will bring the most advanced techniques in computer science to bear on this complex problem, focussing particularly on how we can optimise the hardware and software configurations together to address the important application domain of 3D scene understanding. This will enable a future smart phone fitted with a camera to scan a scene and not only to store the picture it sees, but also to understand that the scene includes a house, a tree, and a moving car. In the course of addressing this application we expect to learn a lot about optimising many-core systems that will have wider applicability too, and the prospect of making future electronic products more efficient, more capable, and more useful.
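As a tiny, hypothetical illustration of the programming model the abstract describes (not project code), the sketch below splits a toy "image" into independent tiles and maps a stand-in per-tile scene-analysis function over a pool of workers. Because the tiles do not interact, the parallel result is guaranteed to match the sequential one; it is precisely this guarantee that evaporates once the many concurrent programs must interact, which is the difficulty the abstract highlights:

```python
from concurrent.futures import ThreadPoolExecutor

def classify_tile(tile):
    # Stand-in for per-tile scene analysis (e.g. labelling "house", "tree", "car").
    labels = ("house", "tree", "car")
    return labels[sum(tile) % len(labels)]

# Split a toy "image" (a flat list of pixel values) into independent 4-pixel tiles.
image = list(range(32))
tiles = [image[i:i + 4] for i in range(0, len(image), 4)]

# Sequential reference result.
sequential = [classify_tile(t) for t in tiles]

# The same work mapped over a pool of workers. (Python threads share one
# interpreter; a real many-core program would use processes or native threads,
# but the map-over-independent-tiles pattern is the same.)
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(classify_tile, tiles))

print(parallel == sequential)  # True: independent tasks need no coordination
```

`Executor.map` returns results in input order, so the comparison with the sequential list is meaningful regardless of which worker finishes first.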

  • Funder: UK Research and Innovation Project Code: EP/L016427/1
    Funder Contribution: 4,746,530 GBP

    Overview: We propose a Centre for Doctoral Training in Data Science. Data science is an emerging discipline that combines machine learning, databases, and other research areas in order to generate new knowledge from complex data. Interest in data science is exploding in industry and the public sector, both in the UK and internationally. Students from the Centre will be well prepared to work on tough problems involving large-scale unstructured and semistructured data, which are increasingly arising across a wide variety of application areas.

    Skills need: There is a significant industrial need for students who are well trained in data science. Skilled data scientists are in high demand. A report by the McKinsey Global Institute cites a shortage of up to 190,000 qualified data scientists in the US; the situation in the UK is likely to be similar. A 2012 report in the Harvard Business Review concludes: "Indeed the shortage of data scientists is becoming a serious constraint in some sectors." A report on the Nature web site cited an astonishing 15,000% increase in job postings for data scientists in a single year, from 2011 to 2012. Many of our industrial partners (see letters of support) have expressed a pressing need to hire in data science.

    Training approach: We will train students using a rigorous and innovative four-year programme that is designed not only to train students in performing cutting-edge research but also to foster interdisciplinary interactions between students and to build students' practical expertise through interaction with a wide consortium of partners. The first year of the programme combines taught coursework with a sequence of small research projects. Taught coursework will include courses in machine learning, databases, and other research areas. Years 2-4 of the programme will consist primarily of an intensive PhD-level research project.

    The programme will provide students with breadth throughout the interdisciplinary scope of data science, depth in a specialist area, training in leadership and communication skills, and an appreciation for practical issues in applied data science. All students will receive individual supervision from at least two members of Centre staff. The training programme will be especially characterized by opportunities for combining theory and practice, and for student-led and peer-to-peer learning.

  • Funder: UK Research and Innovation Project Code: EP/S030832/1
    Funder Contribution: 1,215,070 GBP

    The vision of this collaborative multi-centre project is to safeguard and transform the current operating protocols of emergency teams by providing sensing, situation awareness, cognitive assistance and mobile autonomy capabilities working synergistically as a single system. Statistics collected by the Home Office report 346 fire-related fatalities in England during 2016/17, the highest figure since 2011/12. Over a 10-year period in the USA, 2,775 firefighters died on duty. Where there is a need to save and evacuate people from a burning or flooded building, it is important for the chief incident commander to have increased situational awareness and to be able to coordinate the rescue operation effectively, and for individual responders to have enhanced visibility of surrounding hazards and dangers. To this end, we need to combine UK-based expertise in mobile autonomy and people localisation with internationally leading expertise on welfare monitoring and cognitive assistance at the University of Virginia, and on robotic vision applied to aerial vehicles at the Queensland University of Technology.

    The proposed work involves four distinct research directions: 1) providing an integrated system for situation awareness that involves localisation of the emergency responders, monitoring of their welfare and mapping of the dynamically changing environment; 2) exploring how situation awareness information should be fed into cognitive assistance tools, in order to provide helpful triggers and alerts to the incident commander and their team; 3) introducing various levels of autonomy enabling aerial vehicles to simultaneously perform tasks of mapping, communication and localisation; and 4) integrating the above capabilities and building the first end-to-end response system that implements the full feedback loop from sensor acquisition to emergency responders and back to sensor actuation.

    Sensors on people's wearable devices, together with sensors mounted on aerial vehicles, will contribute to data acquisition for welfare, location and environment monitoring. This in turn will provide input to cognitive assistance for emergency response teams, helping them to assess the situation. They will then in turn provide feedback to sensor systems to prioritise monitoring of specific areas, people or tasks, thus dynamically influencing the next round of situation awareness, and so on. This feedback loop will be a step change, providing a whole new approach to safety for emergency responders.
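As a toy, hypothetical illustration of the feedback loop just described (not project code; all names and thresholds are invented), the sketch below runs one round of it: sensor readings feed a situation assessment, and the assessment feeds sampling priorities back to the sensors for the next round:

```python
def assess(readings):
    """Cognitive-assistance stand-in: flag areas whose reading exceeds a threshold."""
    return {area for area, value in readings.items() if value > 50}

def reprioritise(priorities, flagged):
    """Feedback step: sample flagged areas more often in the next round."""
    return {area: (3 if area in flagged else 1) for area in priorities}

# Round 1: uniform sampling priority, one hot-spot in the readings.
priorities = {"corridor": 1, "stairwell": 1, "roof": 1}
readings = {"corridor": 20, "stairwell": 80, "roof": 35}

flagged = assess(readings)                       # the stairwell stands out
priorities = reprioritise(priorities, flagged)   # next round samples it more

print(flagged)      # {'stairwell'}
print(priorities)   # {'corridor': 1, 'stairwell': 3, 'roof': 1}
```

Iterating these two steps, with real sensors supplying `readings` and the priorities steering where aerial vehicles and wearables sample next, is the closed sensor-to-commander-to-sensor loop the project describes.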

