Oracle for Research
8 Projects
Project 2014-2024
Partners: IBM, Microsoft Research Ltd, Agilent Technologies UK Ltd, Oracle for Research, Agilent Technologies (United Kingdom), Qualcomm Incorporated, Qualcomm Technologies, Inc., University of Edinburgh, IBM Corporation (International), ACE, ARM Ltd, Oswego State University of New York, Wolfson Microelectronics, Amazon Development Centre Scotland, Critical Blue Ltd, Freescale Semiconductor Uk Ltd, Altran UK Ltd, MICROSOFT RESEARCH LIMITED, SICSA, Geomerics Ltd, Codeplay Software, Codeplay Software Ltd, Freescale Semiconductor (United Kingdom), Oracle (United States), Sun Microsystems Inc, Associated Compiler Experts
Funder: UK Research and Innovation
Project Code: EP/L01503X/1
Funder Contribution: 3,937,630 GBP

The worldwide software market, estimated at $250 billion per annum, faces a disruptive challenge unprecedented since its inception: for performance and energy reasons, parallelism and heterogeneity now pervade every layer of the computing systems infrastructure, from the internals of commodity processors (manycore), through small-scale systems (GPGPUs and other accelerators), to globally distributed systems (web, cloud). This pervasive parallelism renders the hierarchies, interfaces and methodologies of the sequential era unviable. Heterogeneous parallel hardware requires new methods of compilation for new programming languages, supported by new system development strategies. Parallel systems, from nano to global, create difficult new challenges for modelling, simulation, testing and verification. This poses a set of urgent, interconnected problems of enormous significance, impacting and disrupting all research and industrial sectors that rely upon computing technology.

Our CDT will generate a stream of more than 50 experts, prepared to address these challenges by taking up key roles in academic and industrial research and development labs, working to shape the future of the industry. The research resources and industrial connections available to our CDT make us uniquely well placed within the UK to deliver on these aspirations.

The "pervasive parallelism challenge" is to undertake the fundamental research and design required to transform methods and practice across all levels of the ICT infrastructure, in order to exploit these new technological opportunities. Doing so will allow us to raise the management of heterogeneous concurrency and parallelism from a niche activity in the care of experts to a regularised component of the mainstream. This requires a steady flow of highly educated, highly skilled practitioners with the ability to relate to opportunities at every level and to communicate effectively with specialists in related areas. These highly skilled graduates must not only have deep expertise in their own specialisms but, crucially, an awareness of their relationship to the surrounding computational system.

The need for fundamental work on heterogeneous parallelism is globally recognised by diverse interest groups. In the USA, reports undertaken by the Computing Community Consortium and the National Research Council recognise the paradigm shift needed for this technology to be incorporated into research and industry alike.
Both these reports were used as fundamental arguments in initiating the National Science Foundation (NSF) call for proposals on Exploiting Parallelism and Scalability, in the context of the NSF's Advanced Computing Infrastructure: Vision and Strategic Plan, which calls for fundamental research to answer the question of "how to enable the computational systems that will support emerging applications without the benefit of near-perfect performance scaling from hardware improvements." Similarly, the European Union has identified the need for new models of parallelism as part of its Digital Agenda. Under the agenda's goals of Cloud Computing and of Software and Services, parallelism plays a crucial role, and the Commission asserts the need for a deeper understanding and new models of parallel computation to enable future technology. Given the UK's global leadership status, it is imperative that similar questions be posed and answered here.
Project 2024-2032
Partners: Think Cyber Security Ltd, Arqit Quantum Inc., LV= General Insurance, Forescout, BT plc, NCC Group, Siemens plc (UK), Science Card, ARM Ltd, University of Bristol, QinetiQ, Oracle for Research, Airbus Endeavr Wales, THALES UK LIMITED, Royal United Services Institute, Cybsafe Limited, TU Darmstadt, Immersive Labs, HP Labs, University of Adelaide, Amazon Web Services EMEA SARL, University of Leuven, Exalens, Ofcom, Carnegie Mellon University, Vodafone UK Limited
Funder: UK Research and Innovation
Project Code: EP/Y035313/1
Funder Contribution: 8,266,800 GBP

Digitalisation has generated a new era of technological innovations whose value can only be maximised with equally innovative cyber security. Our specific focus is on the cyber security of digitalisation and data in large-scale, intermeshed systems and infrastructures, where the boundaries between systems are blurred, data is distributed with strong localisation and sovereignty claims, and numerous, intricate inter-dependencies exist between service architectures. With the increasing shortage of cyber security professionals, both globally and in the UK, there is an urgent need for future research leaders with the capability to anticipate the challenges and develop innovative solutions for cyber security in a world where technology operates without concrete, clearly delineated digital boundaries. This capability is critical to ensuring that digital infrastructures are secure and resilient, and that security professionals have suitable methods, tools, techniques and insights for securing the digital societal fabric.

The Centre for Doctoral Training (CDT) 'Cyber Secure Everywhere: Resilience in a World of Disappearing System Boundaries' will train at least 50 new doctoral-level graduates to address this capability gap. We will do this by educating PhD students in the technical skills needed to study and analyse blended infrastructures, while simultaneously training them to understand the challenges as fundamentally human ones too. The training involves close collaboration with industry and practitioners, who have played a key role in co-creating the programme, and leverages state-of-the-art research testbeds and labs at the Universities of Bristol and Bath as well as at partner industry organisations and international research centres. The programme builds on the best practices developed in our current CDT on Trust, Identity, Privacy and Security in Large-Scale Infrastructures (TIPS-at-Scale).

The first year will involve a series of taught modules providing core knowledge in cyber security, covering both technical and human and organisational aspects, alongside a programme of co-creation activities with industry and deep dives into particular research topics and industry challenges. This ethos of co-creation and collaboration will continue throughout the students' research projects. Throughout the four-year programme, students will also receive skills training in a number of fundamental computational and analytical techniques, as well as in intellectual property, entrepreneurship and commercialisation. They will work collaboratively with students in-year and across years on shared problems and explore responsible innovation in real-world contexts. Through their projects and state-of-the-art experimental infrastructures, they will develop knowledge and expertise in rigorous, evidence-based research on cyber security.
The CDT is an exciting, novel way to develop future research and industry leaders who are not only able to tackle cyber security in emerging and future digital infrastructures, but who can do so in a way grounded in rigorous experimental work and a core ethos of responsible innovation.
Project 2022-2025
Partners: Imperial College London, Oracle for Research, Oracle (United States), Sun Microsystems Inc
Funder: UK Research and Innovation
Project Code: EP/W001012/1
Funder Contribution: 293,360 GBP

Lossless compression is a key optimisation technique when storing, transmitting or processing datasets of any size and kind. Compression allows applications to manage datasets larger than the primary storage medium and reduces the bandwidth needed for data access, in applications ranging from physics simulations to machine learning models to sensor data management. Some compression schemes allow computation to be performed directly on the compressed data, often reducing computational complexity compared to working on uncompressed data; such schemes are commonly called "lightweight". For example, summing the values in a Run-Length-Encoded (RLE) array requires effort on the order of the size of the compressed input, i.e., usually significantly less than that of the uncompressed input. Similarly, relational selections and joins can be prefiltered on the dictionary of a dictionary-compressed relation, or on the minimum value in a frame-of-reference-encoded column.

Implementing algorithms to work directly on compressed data is challenging: the computation has to be tightly integrated with the (de-)compression code, and inefficiencies in the implementation can easily outweigh the gains in data transfer or processing performance. Consequently, most lightweight compression schemes are bespoke, i.e., developed and tuned for specific, well-understood domains such as relational databases, image processing or linear algebra. While the state of the art is to implement them manually, virtually all lightweight compression schemes can be expressed as sequences of primitive transformations such as Run-Length Encoding (RLE), dictionary compression or Huffman coding. Examples of such "compression pipelines" are PFOR-delta (in the Vector DBMS), Vertipaq (in Microsoft SQL Server) and the Compressed-Sparse-Row matrix representation (in many linear algebra packages). However, there are three fundamental problems with the state of the art. First, there is no principled way to assemble these pipelines. Second, the schemes are tied to a specific data/processing model (relational algebra, linear algebra, etc.). Third, and most importantly, the implementation effort is high, as every application needs to implement compression from scratch. Unsurprisingly, many applications that could benefit from compression shy away from that implementation effort; in particular, for domain scientists writing code in languages like Python or R, low implementation effort takes precedence over efficiency.

Our vision is to make the benefits of performance-oriented compression available to applications beyond the few mentioned above. For that purpose, we will develop an algebraic framework for the representation and optimisation of bespoke compression schemes in general-purpose programming languages. Instead of "weaving" hundreds of lines of compression-related code into an application's logic, developers will express compression schemes as annotations on collections. The backend transparently transforms code that operates on the collection to take advantage of the compression strategy. This allows even non-experts to implement bespoke compression schemes.
Simplifying the interface even further, we will implement a fully automated approach that uses cost-based optimisation to determine the most appropriate compression scheme for a given program, dataset and hardware platform, rather than requiring it to be specified explicitly.
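To make the "computation directly on compressed data" idea in this abstract concrete, here is a minimal, self-contained Python sketch of summing an RLE-compressed array. It is our own illustration of the general technique, not code from the project; all names in it are ours.

    from typing import List, Tuple

    # An RLE array: each (value, run_length) pair stands for
    # `run_length` consecutive occurrences of `value`.
    RLE = List[Tuple[int, int]]

    def rle_encode(values: List[int]) -> RLE:
        """Compress a list of integers into (value, run_length) pairs."""
        runs: RLE = []
        for v in values:
            if runs and runs[-1][0] == v:
                runs[-1] = (v, runs[-1][1] + 1)  # extend the current run
            else:
                runs.append((v, 1))              # start a new run
        return runs

    def rle_sum(runs: RLE) -> int:
        """Sum the logical array directly on the compressed form:
        O(number of runs) work instead of O(uncompressed length)."""
        return sum(value * length for value, length in runs)

    data = [7, 7, 7, 7, 0, 0, 5, 5, 5]
    runs = rle_encode(data)            # [(7, 4), (0, 2), (5, 3)]
    assert rle_sum(runs) == sum(data)  # 43, from 3 runs instead of 9 elements

The sum touches one (value, run_length) pair per run, which is exactly the asymptotic win the abstract describes; the project's annotation-based framework would generate such rewrites automatically rather than requiring them to be hand-written as above.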
Project 2013-2018
Partners: Advanced Risc Machines (Arm), University of Glasgow, Oracle for Research, Google UK, Amazon Web Services (Not UK), Amazon Web Services, Inc., Oracle (United States), Sun Microsystems Inc
Funder: UK Research and Innovation
Project Code: EP/L000725/1
Funder Contribution: 1,166,420 GBP

The ecosystem of compute devices is highly connected, and is likely to become even more so as the internet-of-things concept is realised. A single underlying global protocol for communication, the Internet Protocol (IP), enables all connected devices to interact. In this project, we will create a corresponding single underlying global protocol for computation. This will enable wireless sensors, smartphones, laptops, servers and cloud data centres to co-operate on what is conceptually a single task, i.e. an AnyScale app. A user might run an AnyScale app on her smartphone; then, when the battery is running low or wireless connectivity becomes available, the app may shift its computation to a cloud server automatically. This kind of runtime decision making is made possible by the AnyScale framework, which uses a cost/benefit model and machine learning techniques to drive its behaviour. When the app is running on the phone, it cannot perform very complex calculations or use too much memory; on a powerful server, the computations can be much larger and more complicated. The AnyScale app will behave in a way appropriate to where it is running.

In this project, we will create the tools, techniques and technology to enable software developers to create and deploy AnyScale apps. Our first case study will be to design a movement-controller app that allows a biped robot with realistic humanoid limbs to 'walk' over various kinds of terrain. This is a complex computational task, generally beyond the power of the embedded chips inside robotic limbs. Our AnyScale controller will offload computation to computers on board the robot, or wirelessly to nearby servers or cloud-based systems. This is an ideal scenario for robotic exploration, e.g. of nuclear disaster sites.
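The abstract does not spell out the cost/benefit model, so the following Python sketch shows one plausible shape for such a runtime offload decision. Every name, parameter and the linear cost model here is our own illustrative assumption, not the AnyScale API.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Device:
        """One place a task could run, with illustrative cost parameters."""
        name: str
        seconds_per_unit: float   # estimated compute time per unit of work
        joules_per_unit: float    # estimated energy drawn per unit of work
        transfer_seconds: float   # one-off cost of shipping code/data there

    def offload_cost(dev: Device, work_units: float,
                     weight_time: float = 1.0,
                     weight_energy: float = 0.1) -> float:
        """Weighted time/energy cost of running the task on `dev`."""
        time = dev.transfer_seconds + work_units * dev.seconds_per_unit
        energy = work_units * dev.joules_per_unit
        return weight_time * time + weight_energy * energy

    def choose_device(devices: List[Device], work_units: float) -> Device:
        """Pick the device with the lowest modelled cost for this task."""
        return min(devices, key=lambda d: offload_cost(d, work_units))

    phone = Device("phone", seconds_per_unit=2.0, joules_per_unit=0.5,
                   transfer_seconds=0.0)
    cloud = Device("cloud", seconds_per_unit=0.05, joules_per_unit=0.0,
                   transfer_seconds=3.0)

    print(choose_device([phone, cloud], work_units=1.0).name)    # "phone"
    print(choose_device([phone, cloud], work_units=100.0).name)  # "cloud"

Small tasks stay local because the transfer cost dominates; large tasks justify the trip to the server. In the project this comparison would additionally be informed by machine-learned predictions and the current battery and connectivity state; the sketch shows only the cost-comparison skeleton.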
Project 2014-2024
Partners: BBC Television Centre/Wood Lane, CMU, Selex-Galileo, Microsoft Research Ltd, Digital Curation Centre, TU Berlin, Open Data Institute (ODI), Quorate Technology Limited, Amor Group, BrightSolid Online Innovation, Agilent Technologies UK Ltd, Quorate Technology Ltd, Google Inc, University of Pennsylvania, Amazon Development Centre Scotland, Saarland University, HSBC Holdings plc, AlertMe, University of Edinburgh, Institut de recherche Idiap, Freescale Semiconductor Uk Ltd, Scottish Power, Center for Math and Computer Sci CWI, Cloudsoft Corporation, Scottish Power (United Kingdom), James Hutton Institute, Xerox Europe, Oracle for Research, Apple, Agilent Technologies (United Kingdom), Skyscanner Ltd, Carnego Systems Limited, Massachusetts Institute of Technology, UCB Pharma (United Kingdom), University of Washington, Carnego Systems (United Kingdom), City of Edinburgh Council, IBM UNITED KINGDOM LIMITED, MICROSOFT RESEARCH LIMITED, Digital Catapult, IST Austria, Pharmatics Ltd, Freescale Semiconductor (United Kingdom), SICSA, Oracle (United States), TimeOut, Sun Microsystems Inc, Biomathematics & Statistics Scotland, Psymetrix Limited, Rangespan Ltd, UCB Celltech (UCB Pharma S.A.) UK, IBM (United Kingdom), Washington University in St. Louis, Helsinki Institute for Information Techn, MIT, The University of Texas at Austin, IST Austria (Institute of Sci & Tech), Skyscanner, British Broadcasting Corporation - BBC, BBC, HSBC BANK PLC, CLOUDSOFT CORPORATION LIMITED, Royal Bank of Scotland Plc, Apple, Inc., UCB UK, ODI, HSBC Bank Plc, Yahoo! Labs, CITY OF EDINBURGH COUNCIL, Carnegie Mellon University, Centrum Wiskunde & Informatica, THE JAMES HUTTON INSTITUTE, Connected Digital Economy Catapult
Funder: UK Research and Innovation
Project Code: EP/L016427/1
Funder Contribution: 4,746,530 GBP

Overview: We propose a Centre for Doctoral Training in Data Science. Data science is an emerging discipline that combines machine learning, databases and other research areas in order to generate new knowledge from complex data. Interest in data science is exploding in industry and the public sector, both in the UK and internationally. Students from the Centre will be well prepared to work on tough problems involving large-scale unstructured and semi-structured data, which are increasingly arising across a wide variety of application areas.

Skills need: There is a significant industrial need for students who are well trained in data science, and skilled data scientists are in high demand. A report by the McKinsey Global Institute cites a shortage of up to 190,000 qualified data scientists in the US; the situation in the UK is likely to be similar. A 2012 report in the Harvard Business Review concludes: "Indeed the shortage of data scientists is becoming a serious constraint in some sectors." A report on the Nature website cited an astonishing 15,000% increase in job postings for data scientists in a single year, from 2011 to 2012.
Many of our industrial partners (see letters of support) have expressed a pressing need to hire in data science.

Training approach: We will train students using a rigorous and innovative four-year programme designed not only to train students in performing cutting-edge research but also to foster interdisciplinary interactions between students and to build their practical expertise through interaction with a wide consortium of partners. The first year of the programme combines taught coursework with a sequence of small research projects; the taught coursework will include courses in machine learning, databases and other research areas. Years 2-4 of the programme will consist primarily of an intensive PhD-level research project. The programme will provide students with breadth across the interdisciplinary scope of data science, depth in a specialist area, training in leadership and communication skills, and an appreciation of practical issues in applied data science. All students will receive individual supervision from at least two members of Centre staff. The training programme will be especially characterised by opportunities for combining theory and practice, and for student-led and peer-to-peer learning.
