Facebook (United States)
18 Projects, page 1 of 4
Project (2023 - 2026)
Partners: University of Glasgow, Facebook (United States), Microsoft Research (United Kingdom)
Funder: UK Research and Innovation
Project Code: EP/X037525/1
Funder Contribution: 457,857 GBP

Memory management is an essential feature of computing software. During execution, a program will frequently call a memory allocator library to request space in the computer's memory to store data. Unfortunately, memory management systems are vulnerable. According to recent studies from Microsoft and Google, memory bugs account for 70% of critical software vulnerabilities, which can crash our important IT infrastructure and leak confidential and personal data. Such problems are frequently caused by (mis)management of dynamically allocated memory. At the same time, memory allocation is also significant for performance: a large proportion of program execution time is devoted to memory management routines. The hardware industry is responding to severe memory vulnerabilities by adding secure extensions to mainstream processor families. These new hardware facilities provide fundamental mechanisms for more secure memory allocators. Still, their full benefits are yet to be seen due to the massive developer effort required to write or optimise a memory management codebase, and the further effort to re-target the code to new architectures. For example, the state-of-the-art snmalloc secure memory allocator comprises 25,000 lines of code painstakingly developed by leading industrial practitioners over four years, yet it only supports a small set of hardware security features. A crisis is looming: without a solution, either hardware innovation in security mechanisms will stall because software cannot keep up, or we will have to continue to suffer frequent security issues caused by memory bugs. Such a crisis requires us to fundamentally rethink how we implement memory management libraries.

This project will develop an entirely new way to build memory allocators. It aims to massively reduce human involvement in developing and optimising memory management libraries that target a diverse range of hardware architectures. Our approach involves specifying the required memory management attributes and then synthesising performant memory management code to satisfy these specifications. Our approach ensures such code is correct by construction, using model-checking techniques to verify that the generated code matches the expected behaviour and the enhanced security requirements. Further, we will support a range of processor backends featuring recently proposed secure extensions for hardware architectures. Our work is enabled by recent advances in deep learning for code generation and in formal methods for modelling large-scale software systems. The recent breakthrough of ML in generating new and better matrix multiplication implementations, and its demonstrated effectiveness in game playing, natural language processing and autonomous systems, gives us confidence that it is now possible to generate correct and performant memory management libraries. If AI can learn to drive a car, it must be able to reason about carefully designed security properties and primitives to generate memory allocator code. This ambitious project, if successful, will have a transformative impact on how we design memory management libraries. Our software prototype will be open-sourced and applied to real-life applications.
Given the accelerated and disruptive changes in hardware security design and the massive mismatch between software and hardware, success in this project will be of interest to companies that provide hardware IP and software development tools, two areas in which the UK is world-leading. It will also help reduce the number of memory-related bugs and improve application performance, so that the general public can benefit from more secure and efficient computer systems. We believe we have the team, partners and work plan to achieve this ambitious goal. We are ideally placed to carry out the proposed research, possessing key skills in the primary research areas of memory management, formal verification, and ML-based code synthesis and optimisation.
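To make the proposed specify-synthesise-verify pipeline concrete, the following minimal Python sketch shows the shape of such a loop: a specification of the required allocator attributes drives a code generator, and a model checker gates what is accepted. All names here (MemorySpec, synthesise_allocator, model_check) are hypothetical placeholders for illustration, not tools or APIs from the project.

# Minimal sketch of the specify -> synthesise -> model-check loop described above.
# All names are illustrative placeholders, not tools or APIs from the project.
from dataclasses import dataclass

@dataclass
class MemorySpec:
    """Required allocator attributes, e.g. alignment and hardware security features."""
    alignment: int = 16
    use_memory_tagging: bool = True   # e.g. a hardware memory-tagging extension
    zero_on_free: bool = True

def synthesise_allocator(spec: MemorySpec, feedback: list) -> str:
    """Stand-in for an ML-based generator producing candidate allocator code."""
    return f"// candidate allocator (alignment={spec.alignment}, tagging={spec.use_memory_tagging})"

def model_check(candidate: str, spec: MemorySpec) -> list:
    """Stand-in for a model checker returning counterexamples (empty list = verified)."""
    return []   # assume the candidate satisfies the spec in this toy run

def generate_verified_allocator(spec: MemorySpec, max_iters: int = 10) -> str:
    feedback = []
    for _ in range(max_iters):
        candidate = synthesise_allocator(spec, feedback)
        counterexamples = model_check(candidate, spec)
        if not counterexamples:            # correct by construction w.r.t. the spec
            return candidate
        feedback.extend(counterexamples)   # refine the next synthesis round
    raise RuntimeError("no verified allocator found within the iteration budget")

print(generate_verified_allocator(MemorySpec()))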
Project (2023 - 2026)
Partners: University of Exeter, Facebook (United States)
Funder: UK Research and Innovation
Project Code: MR/X011135/1
Funder Contribution: 557,968 GBP

Hard optimisation problems are ubiquitous across the breadth of science, engineering and economics. For example, in water system planning and management, water companies are often interested in optimising several system performance measures of their infrastructure. They are particularly interested in providing sustainable and resilient water/wastewater services that are able to cope with and recover from disruption, as well as the wider challenges brought by climate change and population growth. Optimisation is a classic discipline in which significant advances in both theory and algorithms have been achieved. However, almost all traditional optimisation solvers, ranging from classic methods to nature-inspired computational intelligence techniques, ignore some important facts: (i) real-world optimisation problems seldom exist in isolation; and (ii) artificial systems are designed to tackle a large number of problems over their lifetime, many of which are repetitive or inherently related. Instead, optimisation is run as a 'one-off' process, i.e. it is started from scratch, assuming zero prior knowledge, each time. As a result, knowledge and experience from solving different (but possibly related) optimisation exercises (either previously completed or currently underway), which could be useful for enhancing the target optimisation task at hand, is wasted. Although Bayesian optimisation can incorporate some of the decision maker's knowledge as a prior, the experience gathered during the optimisation process is discarded afterwards. In this case, we cannot expect any automatic growth of a solver's capability with experience. This practice is counter-intuitive from a cognitive perspective, where humans routinely grow from novices to domain experts by gradually accumulating problem-solving experience and making use of existing knowledge to tackle new, unseen tasks. In machine learning, leveraging knowledge gained from related source tasks to improve the learning of a new task is known as transfer learning, an emerging field in which considerable success has been witnessed across a wide range of application domains. There have been some attempts at applying transfer learning in evolutionary computation, but they do not consider optimisation as a closed-loop system. Moreover, the recurrent patterns within problem-solving exercises are discarded after optimisation, so experience cannot be accumulated over time. The proposed research will develop a revolutionary general-purpose optimiser (also known as a transfer optimisation system) that will be able to learn knowledge and experience from previous optimisation processes and then continuously, autonomously and selectively transfer such knowledge to new, unseen optimisation tasks in open-ended dynamic environments. The transfer optimisation system places adaptive automation at the heart of the development process and explores novel synergies at the crossroads of several disciplines, including nature-inspired computation, machine learning, human-computer interaction and high-performance parallel computing.
The outputs will bring automation to industry, including an optimised and shortened production cycle, reduced resource consumption, and more balanced and innovative products, which have great potential to deliver economic savings and increased turnover. The proposed methods will be rigorously evaluated by the industrial partners, first in the water industry, and will then be expanded to a broader range of sectors that put optimisation at the heart of their regular production and management processes (e.g. software engineering, renewable energy, healthcare, automotive, appliance and medicine manufacturers).
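The contrast between 'one-off' optimisation and the proposed accumulation of experience can be illustrated with a minimal Python sketch: a toy hill climber is warm-started from an archive of solutions to previously solved, related tasks instead of starting from scratch. This is a deliberately simplified stand-in, not the transfer optimisation system itself.

# Toy contrast between 'one-off' optimisation and reusing experience across tasks.
# A deliberately simplified stand-in for illustration only.
import random

def hill_climb(objective, start, steps=200):
    """Very simple optimiser: random-perturbation hill climbing."""
    best, best_val = start, objective(start)
    for _ in range(steps):
        candidate = [x + random.gauss(0, 0.1) for x in best]
        value = objective(candidate)
        if value < best_val:
            best, best_val = candidate, value
    return best, best_val

archive = []   # solutions accumulated from previously solved, related tasks

def solve(objective, dim=3):
    # Transfer: warm-start from the best archived solution instead of from scratch.
    start = min(archive, key=objective) if archive else [random.uniform(-5, 5) for _ in range(dim)]
    best, value = hill_climb(objective, start)
    archive.append(best)   # the system's experience grows over time
    return best, value

# Two related tasks: slightly shifted sphere functions.
task_a = lambda x: sum((xi - 1.0) ** 2 for xi in x)
task_b = lambda x: sum((xi - 1.2) ** 2 for xi in x)
print(solve(task_a)[1])   # solved from scratch
print(solve(task_b)[1])   # warm-started from the solution to task_a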
Project (2023 - 2025)
Partners: Facebook (United States), Alibaba Group, University of Leeds, Facebook, Alibaba Group (China)
Funder: UK Research and Innovation
Project Code: EP/X018202/1
Funder Contribution: 202,424 GBP

Compilers are a crucial component of our computing stack. A compiler translates high-level source code into the low-level machine instructions that run on the underlying hardware. It is responsible for ensuring software runs efficiently, so that our computers can provide more real-time information, faster services and a better user experience, with less environmental impact. While being a vital piece of software infrastructure, today's compilers still rely on techniques developed several decades ago. They are limited by many sub-optimal choices made to work around the constraints of computers designed 30 years ago. As a result, today's compiler infrastructure is too old to utilise advanced algorithms and too complex for any compiler developer to reason about successfully. Worse, existing compilers are all out of date and fail to capitalise on modern hardware design, causing huge performance loss and energy inefficiency. This compiler-hardware mismatch, in turn, leads to a poor user experience and hinders scientific discovery and business innovation. A crisis is looming: without a solution, either hardware innovation will stall because software cannot keep up, or computing performance and energy efficiency will suffer. Such a crisis requires us to fundamentally rethink how we design and implement compilers.

This project aims to bring compiler technology into the 21st century by allowing compilers to take advantage of machine learning (ML) and artificial intelligence (AI) techniques and modern computing hardware. Our goal is to massively reduce the human involvement in developing compiler optimisations so that compilers can quickly catch up with ever-changing hardware and deliver scalable performance on current and future computing hardware. We believe that ML is entirely capable of constructing efficient compiler optimisation heuristics from simple rules with zero human guidance. This idea of fully relying on ML to learn code analysis and optimisation strategies is highly speculative and has not been tested before. However, the recent breakthrough effectiveness of ML in domains like game playing, natural language processing, drug discovery, chip design and autonomous systems gives us confidence that this is now possible in compilers. If AI can learn to drive a car, it must be able to reason about programs to perform optimisations like scheduling machine instructions. This ambitious project, if successful, will have a transformative impact on how we design compilers. Our software prototype will be open-sourced and integrated with a key compiler infrastructure. It opens up a new way to automate the entire compiler development process, allowing compilers to get the most out of new computer hardware architectures. It will help to safeguard the massive $400B investment in today's software-hardware ecosystem and provide a pathway to greater performance in the future. The current push for specialised computer processors will not be effective if software cannot utilise the hardware.
By significantly reducing expert involvement in compiler development, this project offers a sustainable way for software to manage hardware complexity, enabling innovation and continued growth in computing hardware. Given the accelerated and disruptive changes in hardware technology and the massive mismatch between software and hardware, success in this project will be of interest to companies that provide hardware IP and software development tools, two areas in which the UK is world-leading. It will also help ensure continued performance improvement for end users, despite the radical changes in computer systems due to the end of Moore's Law. We believe that we have the skills, expertise, partners and work plan to achieve this ambitious goal. We are world-leading in ML-based code optimisation, have pioneered the use of deep learning for compiler optimisation, and have collaborative links with key industry stakeholders in these areas.
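As a toy illustration of learning compiler heuristics rather than hand-writing them, the sketch below trains a small decision tree to choose a loop unroll factor from simple code features. The features, training data and model choice are invented for illustration and are not part of the project or of any existing compiler.

# Toy illustration of a learned compiler heuristic: predicting a loop unroll
# factor from simple code features instead of using a hand-written rule.
# Features, training data and model are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Features per loop: [trip_count, body_instruction_count, contains_call (0/1)]
X = [
    [1000, 4, 0],
    [1000, 40, 0],
    [8, 4, 0],
    [1000, 4, 1],
]
# Labels: the unroll factor that happened to perform best on some reference machine.
y = [8, 2, 1, 1]

model = DecisionTreeClassifier().fit(X, y)

def choose_unroll_factor(trip_count, body_size, contains_call):
    """Learned replacement for a hand-tuned 'if trip_count > N then unroll' rule."""
    return int(model.predict([[trip_count, body_size, int(contains_call)]])[0])

print(choose_unroll_factor(2000, 6, False))   # likely predicts a high unroll factor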
Project (2024 - 2028)
Partners: University of Oxford, Autodesk, Imperial College London, Snap Group Ltd, Novartis Pharma AG, Facebook (United States), Roche (United States)
Funder: UK Research and Innovation
Project Code: MR/Y018818/1
Funder Contribution: 1,650,520 GBP

328.77 million terabytes of data are created each day. To put this in perspective, if you were to store all this data on CDs, you would need over 1.5 trillion CDs each day. Modern machine learning (ML), specifically deep learning (DL), works to interpret this massive amount of data, uncover fascinating patterns, and make predictions. DL has been transformative in numerous areas, from healthcare and retail to finance and manufacturing. This rapid advancement, often led by large technology corporations, is evidenced by breakthroughs in conversational AI, like ChatGPT/GPT-4, and in text-guided image synthesis. Today, one in seven UK businesses has adopted at least one form of ML technology. Despite this success, a challenge lurks in the realm of modern ML. The data we collect from various sources tends to be unstructured and complex. For instance, our Facebook comments are influenced not only by our past conversations, mood and thoughts, but also by the intricate interplay between these factors. Similarly, the interactions between proteins depend on their shapes and on other interactions. To extract meaningful insights from data and solve real-world problems, we need to consider these complex 'higher-order relationships', which play a key role in areas such as creating accurate 3D models for safer self-driving cars, predicting drug-target interactions for effective drug repurposing during pandemics, and accurately modelling brain neurochemistry to develop life-saving medicines against Alzheimer's disease. Unfortunately, most current machine learning systems focus mainly on modelling pairwise connections and overlook these higher-order relationships. This limits their capability to represent and analyse complex data, especially data acquired in scientific settings by X-ray scanners, electron microscopy or 3D laser scanners.

My fellowship aims to harness the potential of big data by developing a new paradigm of deep learning that encompasses higher-order relations at its core and considers the data's topology, drawing on an important branch of mathematics studying the "shape of data". My proposed research will achieve this through three key objectives: (1) I will develop Unifying Complexes (UCs), novel data representations that simplify working with higher-order relationships while preserving the hierarchical nature of data. At present, the industry standard relies on graphs, which only model pairwise relationships. (2) Existing deep learning models won't readily adapt to the novel UCs I will be developing in Objective 1. I will therefore create a variety of deep learning models tailored to work natively with these UCs. From discriminative to generative, these models will enable learning from rich and complex data. (3) Lastly, I will deploy the UCs and the models developed in Objectives 1 and 2 to address a variety of challenges in multiple applications, including 3D computer vision and drug screening, discovery and design, and to build new and practically relevant theories of deep learning.
Thanks to the resources and the uninterrupted time provided by the Future Leaders Fellowship (FLF), and as a result of UNTOLD, I will deliver a comprehensive open-source software suite designed to harness the full potential of big, complex data. Beyond scientific dissemination, the widespread adoption of the DL models I develop as part of UNTOLD will have substantial socioeconomic impacts, such as improved augmented and virtual reality, safer self-driving cars, personalised medicine, and a better understanding of rare diseases. This will both position the UK as a leader in cutting-edge ML research and gradually enhance its presence across all sectors that use ML to convert complex data into actionable insights.
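The abstract does not define Unifying Complexes in detail; the toy sketch below only illustrates the gap they target, i.e. the difference between pairwise graph edges and relations that genuinely involve more than two elements (as in hypergraphs or simplicial complexes). The UnifyingComplex class is a hypothetical illustration, not the proposed representation.

# Toy contrast between pairwise (graph) relations and higher-order relations.
# 'UnifyingComplex' here is a hypothetical illustration, not the representation
# proposed in the project.
graph_edges = {("a", "b"), ("b", "c"), ("a", "c")}   # a graph can only store pairs

class UnifyingComplex:
    """Stores relations of any arity, keeping track of which elements interact jointly."""
    def __init__(self):
        self.relations = set()

    def add_relation(self, *members):
        self.relations.add(frozenset(members))
        # Also keep every lower-order face, giving a simple hierarchy of relations.
        if len(members) > 1:
            for m in members:
                rest = [x for x in members if x != m]
                self.add_relation(*rest)

    def relations_of_order(self, k):
        return {r for r in self.relations if len(r) == k + 1}

uc = UnifyingComplex()
uc.add_relation("a", "b", "c")        # a genuinely three-way interaction
print(uc.relations_of_order(2))       # the triangle {a, b, c}
print(uc.relations_of_order(1))       # its three pairwise faces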
Project (2019 - 2024)
Partners: Max Planck Institutes, Imperial College London, Royal Free London NHS Foundation Trust, Max-Planck-Gymnasium, Oculus VR, LLC, Facebook (United States)
Funder: UK Research and Innovation
Project Code: EP/S010203/1
Funder Contribution: 1,350,280 GBP

Computer vision has recently been witnessing a paradigm shift. Standard robust features, such as the Scale-Invariant Feature Transform (SIFT) and Histograms of Oriented Gradients (HOGs), are being replaced by learnable filters via the application of Deep Convolutional Neural Networks (DCNNs). Furthermore, for applications (e.g. detection, tracking, recognition) that involve deformable objects, such as human bodies, faces and hands, traditional statistical or physics-based deformable models are being combined with DCNNs with very good results. The current progress has been made possible by the abundance of complex visual data in the Big Data era, spread mostly through the Internet via web services such as YouTube, Flickr and Google Images. The latter has led to the development of huge databases (such as ImageNet, Microsoft COCO and 300W) consisting of visual data captured "in the wild". Furthermore, the scientific and industrial community has undertaken large-scale annotation tasks. For example, my group and I have made huge efforts to annotate over 30K facial images and 500K video frames with a large number of facial landmarks, and the COCO team has annotated thousands of body images with body joints. All the above annotations generally refer to a set of sparse parts of objects and/or their segments, which can be annotated by humans (e.g. through crowdsourcing). To make the next step in automatic understanding of scenes in general, and of humans and their actions in particular, the community needs to acquire dense 3D information. Even though the collection of 2D intensity images is now a relatively easy and inexpensive process, the collection of high-resolution 3D scans of deformable objects, such as humans and their (body) parts, remains an expensive and laborious process. This is the principal reason why very limited efforts have been made to collect large-scale databases of 3D faces, heads, hands, bodies, etc. In DEFORM, I propose to perform large-scale collection of high-resolution 4D sequences of humans. Furthermore, I propose new lines of research to provide high-quality annotations of the correspondences between 2D intensity "in-the-wild" images and the dense 3D structure of deformable objects' shapes, in particular of humans and their parts. Establishing dense 2D-to-3D correspondences can effortlessly solve many image-level tasks such as landmark (part) localisation, dense semantic part segmentation, and estimation of deformations (i.e. behaviour).
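Dense 2D-to-3D correspondence, as described above, can be pictured as a per-pixel map from image coordinates to points on a template 3D mesh. The minimal sketch below only illustrates that data structure and how an image pixel would be 'lifted' to 3D through it; it says nothing about how such correspondences are actually estimated in DEFORM.

# Minimal illustration of a dense 2D-to-3D correspondence map: every foreground
# pixel stores the index of the template-mesh vertex it corresponds to.
# This shows only the data structure, not how such a map is estimated.
import numpy as np

H, W = 4, 4                                         # tiny image for illustration
n_vertices = 10                                     # tiny template mesh
template_vertices = np.random.rand(n_vertices, 3)   # 3D positions of mesh vertices

# correspondence[y, x] = mesh vertex index, or -1 for background pixels
correspondence = np.full((H, W), -1, dtype=int)
correspondence[1:3, 1:3] = np.arange(4).reshape(2, 2)

def lift_to_3d(y, x):
    """Return the 3D template point corresponding to an image pixel, if any."""
    v = correspondence[y, x]
    return None if v < 0 else template_vertices[v]

print(lift_to_3d(1, 2))   # a 3D point on the template mesh
print(lift_to_3d(0, 0))   # None: background pixel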