French Institute for Research in Computer Science and Automation

6 Projects, page 1 of 2
  • Funder: UK Research and Innovation | Project Code: EP/F036345/1
    Funder Contribution: 813,748 GBP

    Computer Science is undergoing a difficult transition. The continual performance improvements of past decades were achieved primarily by speeding up sequential computation. Constraints in device manufacture, especially the problem of power consumption, are driving a shift to ubiquitous concurrent computation, with multicore processors becoming commonplace. Programming these to deliver high-performance, reliable systems, however, remains very challenging. There are two key difficulties, which we address here. Firstly, the concurrent algorithms being developed, such as non-blocking data structures and implementations of software transactional memory, are very subtle, so informal reasoning cannot give high confidence in their correctness. Secondly, the extensive prior work on software verification for concurrency (including temporal logics, rely-guarantee reasoning, separation logic, and process calculi) neglects what is now a key phenomenon: relaxed memory models. For performance reasons, typical multiprocessors do not provide a sequentially consistent memory model. Instead, memory accesses may be reordered in various constrained ways, making it still harder to reason about executions. In this project we will establish accurate semantics for the behaviour of real-world processors, such as the x86, PowerPC, and ARM architectures, covering their memory models and fragments of their instruction sets. We will experimentally validate these, building on our previous experience with realistic large-scale semantics. Above these, we will develop theoretical and practical tools for specifying and proving correctness of modern algorithms, building on our experience with separation logic, mechanized reasoning, and algorithm design. We will thereby lay the groundwork for verified compilation targeting real multicore processors, providing both high performance and high confidence for future applications.
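
    As a concrete illustration of the relaxed behaviour at stake, the classic "store buffering" test below can, when compiled for x86, print a result that is impossible under sequential consistency. This sketch is not from the project; it uses C++11 atomics with relaxed ordering as a stand-in for plain machine loads and stores.

```cpp
#include <atomic>
#include <thread>
#include <cstdio>

// Store-buffering litmus test. Under a sequentially consistent memory,
// at least one thread must observe the other's store, so r0 == 0 and
// r1 == 0 together are impossible. On x86 (and under the relaxed C++
// model) each store may linger in a local store buffer, so both loads
// can return 0.
std::atomic<int> x{0}, y{0};
int r0, r1;

int main() {
  std::thread t0([] {
    x.store(1, std::memory_order_relaxed);
    r0 = y.load(std::memory_order_relaxed);
  });
  std::thread t1([] {
    y.store(1, std::memory_order_relaxed);
    r1 = x.load(std::memory_order_relaxed);
  });
  t0.join();
  t1.join();
  std::printf("r0=%d r1=%d\n", r0, r1);  // "r0=0 r1=0" is a permitted outcome
}
```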

  • Funder: UK Research and Innovation | Project Code: EP/K040561/1
    Funder Contribution: 98,537 GBP

    In the past few years, computer processors have reached a speed limit imposed by semiconductor physics. Increased performance used to come from running a single program faster; now it comes from running more programs concurrently, on multiple "cores". Multi-core processors also support low-power applications and are becoming popular on mobile devices, such as smartphones, where several slow cores use less battery power than a single fast core. To write software for multi-core processors, programmers must decompose tasks into cooperating programs, ideally one per core. However, even experts cannot write these programs without tremendous effort, and the programs often have subtle bugs. Programmers have not been given the intellectual tools necessary for managing the complexity of multi-core computation. This project focuses on a critical challenge posed by multi-core processors: their relaxed memory models. Conceptually, the processor's cores are connected to a single memory, and programs running on different cores communicate by writing data to the memory for others to read. In reality, the processor achieves good performance by not giving the programmer a globally consistent picture of the memory: at any point in time the cores can appear to disagree on its contents. The processor does make some guarantees about the memory, so that the programmer can write working programs, but it carefully avoids making others. A relaxed memory model specifies which guarantees are made and which are not. Our objectives are to improve the theory of relaxed memory models and to apply this theory to a new model that is easier to understand in practice. Most of the time, programming in a high-level language should have advantages over programming in the processor's low-level assembly language: advantages in, for example, reliability, security, and cost of development. However, this is not the case with relaxed memory models: the high-level language is more complicated, because it has to account for the variety of significantly different processors it can be compiled to, and for the compiler's optimisations too. The primary tension is between usability/security (for example, ensuring that sensitive data will not be leaked by a malicious program forging pointers to the data) and efficiency, with the latter driving existing designs. The Java Memory Model attempts to give basic security guarantees, but several underlying flaws have been discovered. At the other extreme, the new C and C++ models make no attempt to provide security guarantees. The design space for relaxed memory models has not been thoroughly explored. In this project, we will design a relaxed memory model for a high-level language that gives stronger guarantees to programmers, making it easier to write, reason about, and verify concurrent programs. Our approach to the design combines a focus on real-world concurrent algorithms, to ensure that it is practical, with mathematical rigour, to ensure that it supports robust reasoning principles that will ultimately help programmers to understand it and to write high-quality concurrent software systems.
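
    The distinction between guarantees made and guarantees avoided can be seen concretely in the C++ model, one of the existing designs the abstract discusses. In this minimal sketch (not from the project), a release/acquire pair guarantees the reader sees the published data, while relaxed ordering would withdraw that guarantee.

```cpp
#include <atomic>
#include <thread>
#include <cassert>

// Message passing. The release store "publishes" data, and an acquire
// load that reads flag == 1 synchronises with it, so the assert cannot
// fail. With memory_order_relaxed on both operations the model makes
// no such guarantee (and the access to data would be a race).
int data = 0;
std::atomic<int> flag{0};

int main() {
  std::thread writer([] {
    data = 42;
    flag.store(1, std::memory_order_release);  // publish
  });
  std::thread reader([] {
    while (flag.load(std::memory_order_acquire) == 0) {}  // wait, then sync
    assert(data == 42);  // guaranteed by release/acquire
  });
  writer.join();
  reader.join();
}
```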

  • Funder: UK Research and Innovation | Project Code: EP/R010811/1
    Funder Contribution: 100,840 GBP

    The human cardiovascular system has two large arteries: the aorta (AO), which supplies oxygenated blood to the body, and the pulmonary artery (PA), which supplies deoxygenated blood to the lungs for oxygenation. In healthy individuals, the pressure in the AO is significantly higher than the pressure in the PA. Pulmonary artery hypertension (PAH) is a disease in which PA pressure is abnormally elevated, and it is classified as one of the most devastating disorders by the pulmonary artery association UK. This is evidenced by a mean survival time after diagnosis of less than 30 months in adults and less than 12 months in children. A recently proposed treatment for severe PAH, i.e. the case when PA pressure is higher than AO pressure, is to create a connection, known as the Potts shunt, between the PA and the AO. Just as a connection between two pipes carrying fluids at high and low pressures reduces the pressure in the high-pressure pipe and raises it in the low-pressure pipe, the idea is that a Potts shunt can reduce PA pressure in severe PAH patients. This reduction in PA pressure is desirable, but it also results in mixing of oxygenated and deoxygenated blood, an undesirable effect. Clinical experience has shown favourable results of this treatment in some patients and unfavourable results in others, which is attributed largely to a reduction in cardiac output, the total volume of blood ejected by the heart in one cardiac cycle. This project aims to develop computational models to assess three measures of Potts shunt treatment: 1) reduction of pulmonary artery pressure, 2) mixing of oxygenated and deoxygenated blood, and 3) reduction in cardiac output. Through the computational models, this project will assess the mechanisms behind the success or failure of the Potts shunt in relation to the above measures. The end product will be a computer model which, given a new patient, can determine whether a Potts shunt is likely to succeed for that patient. Furthermore, technology to optimise the design of the Potts shunt for each patient individually, such that maximal clinical benefit is achieved, will be developed.
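
    The pipe analogy can be made quantitative with the simplest possible lumped-parameter view, the usual starting point for the kind of computational model the project proposes. The sketch below is purely illustrative: the pressures, the shunt resistance, and the linear flow law Q = (P_PA - P_AO) / R are assumptions, not values or methods taken from the project.

```cpp
#include <cstdio>

// Illustrative lumped-parameter view of a Potts shunt (all values
// hypothetical): modelling the shunt as a simple resistance R between
// the pulmonary artery (PA) and the aorta (AO), blood flows down the
// pressure gradient, which is what lowers PA pressure in severe PAH
// (where PA pressure exceeds AO pressure).
int main() {
  double p_pa = 90.0;    // PA pressure in severe PAH (mmHg), assumed
  double p_ao = 75.0;    // AO pressure (mmHg), assumed
  double r_shunt = 3.0;  // shunt resistance (mmHg*s/mL), assumed
  double q = (p_pa - p_ao) / r_shunt;  // shunt flow (mL/s), PA -> AO
  std::printf("shunt flow = %.1f mL/s from PA to AO\n", q);
  return 0;
}
```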

  • Funder: UK Research and Innovation | Project Code: EP/R016690/1
    Funder Contribution: 1,074,830 GBP

    In 1965 Gordon Moore observed that the number of components on a computer chip doubles roughly every 18 months. For almost 50 years we have enjoyed this exponentially increasing computer power. This has transformed society, heralding today's computer age. The growth is based on a fundamental contract between hardware and software that until recently has rarely been questioned. The contract is: hardware may change radically "under the hood", but the code you ran on yesterday's machine will run just the same on tomorrow's, only faster. Hardware may change, but it looks the same to software, always speaking the same language. This common, consistent language allows the decoupling of software development from hardware development. It has allowed programmers to invest significant effort in software development, secure in the knowledge that it will have decades of use. Alarmingly, this contract is beginning to fall apart, putting the massive investment in software in jeopardy. The reason for the breakdown is the end of Moore's Law: technology can no longer be relied upon to scale smaller and provide performance, and this is beginning to force new approaches to computer design. The cost of maintaining the common consistent language contract is enormous. If we break the contract and develop specialised hardware, there is potentially a massive performance gain, of up to 10,000x, available. For this reason, it is clear that future hardware will be increasingly specialised and heterogeneous. Currently, however, there is no clear way of programming and using such hardware. As it stands, either hardware evolution will stall because software cannot follow, or software will be unable to exploit hardware innovation. Such a crisis requires a fundamental rethink of how we design, program and use heterogeneous systems. What we need is an approach that liberates hardware from the uniform language contract and efficiently connects existing and future software to the emerging heterogeneous landscape. This project proposes a way of doing this by rethinking how we connect software and hardware: a more flexible language interface that can change from one processor to the next. It allows existing software to use future hardware and allows hardware innovation to connect to new and emerging application areas such as robotics, augmented reality and deep learning. If successful, it will usher in an era of change in systems design where, rather than deny and fear the end of Moore's law, we embrace and exploit it.
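
    One very rough way to picture software that is not tied to a single uniform contract is to select among per-device implementations behind a stable interface. The sketch below is only an illustration of that general idea, not the project's proposed language interface; every name in it is invented.

```cpp
#include <cstdio>

// Hypothetical per-device dispatch: the caller sees one stable interface,
// while the implementation chosen can differ from one processor to the next.
struct Kernel {
  const char* name;
  void (*run)(const float* in, float* out, int n);
};

void run_portable(const float* in, float* out, int n) {
  for (int i = 0; i < n; ++i) out[i] = 2.0f * in[i];  // works everywhere
}

void run_specialised(const float* in, float* out, int n) {
  // stand-in for a version tuned to specialised hardware
  for (int i = 0; i < n; ++i) out[i] = 2.0f * in[i];
}

Kernel select_kernel(bool has_specialised_hw) {
  return has_specialised_hw ? Kernel{"specialised", run_specialised}
                            : Kernel{"portable", run_portable};
}

int main() {
  float in[4] = {1, 2, 3, 4}, out[4];
  Kernel k = select_kernel(false);  // decided per machine, not per program
  k.run(in, out, 4);
  std::printf("%s backend: out[3] = %.0f\n", k.name, out[3]);
}
```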

  • Funder: UK Research and Innovation | Project Code: EP/N026314/1
    Funder Contribution: 1,005,750 GBP

    The computational demands of modern computer applications make the pursuit of high performance more critical than ever, and mobile, battery-powered devices, as well as concerns related to climate change, require high performance to co-exist with energy efficiency. Due to physical limits, the traditional means of improving hardware performance by increasing processor frequency now carries an unacceptably high energy cost. Advances in processor fabrication technology instead allow the construction of many-core processors, where hundreds or thousands of processing elements are placed on a single chip, promising high performance and energy efficiency through sheer volume of processing elements. Many-core devices are present in practically all consumer devices, including smartphones and tablets. As a result, the general public in developed countries interact with many-core software daily. Many-core technology is also used to accelerate safety-critical software in domains such as medical imaging and autonomous vehicle navigation. It is thus important that many-core software be reliable. This requires reliable software from programmers, but also a reliable "stack" to support this software, including the compilers that allow software to execute on many-core devices, and the many-core devices themselves. Recent work on formal verification and testing by myself and other researchers has identified serious technical problems spanning the many-core stack. These problems undermine confidence in applications of many-core technology: defective many-core software could risk fatal accidents in critical domains, and impact negatively on users in other important application areas. My long-term vision is that the reliability of many-core programming can be transformed through breakthroughs in programming language specification, formal verification and test case generation, enabling automated tools to assist programmers and platform vendors in constructing reliable many-core applications and language implementations. The aim of this five-year Fellowship is to undertake foundational research into a number of open problems whose solution is key to enabling this long-term vision. First, I seek to investigate whether it is possible to precisely express the intricacies of many-core programming languages using formal mathematics, providing a rigorous basis on which software and language implementations can be constructed. Second, I aim to tackle several open problems that stand in the way of effective formal verification of many-core software, which would allow developers to obtain strong guarantees that such software will operate as required. Third, I will investigate raising this level of rigour beyond many-core languages. A growing trend is for applications to be written in relatively simple, high-level representations and then automatically translated into high-performance many-core code. This translation process must preserve the meaning of programs; I will investigate methods for formally certifying that it does. Fourth, I will formulate new methods for testing many-core language implementations, exploiting the rigorous language definitions brought by my approach to enable high test coverage of subtle language features. Collectively, progress on these problems promises to enable a *high-assurance* many-core stack.

    I will demonstrate one instance of such a stack for the industry-standard OpenCL language and the PENCIL high-level language, showing that high-level PENCIL programs can be reliably compiled into rigorously defined OpenCL, integrated with verified library components, and deployed on thoroughly tested implementations from many-core vendors. Partnership with four leading many-core technology vendors, AMD, ARM, Imagination Technologies and NVIDIA, provides excellent opportunities for the advances the Fellowship makes to have broad industrial impact.
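
    To give a flavour of the defects such a stack is meant to rule out, here is a deliberately simplified example in C++ threads rather than OpenCL: the racy accumulator below is exactly the class of bug that verification tools aim to report and that implementation testing aims to surface.

```cpp
#include <thread>
#include <vector>
#include <cstdio>

// A data race of the kind many-core verification targets: four threads
// increment a shared counter with no synchronisation. The read-modify-
// write on sum is not atomic, so increments are lost (and the program
// has undefined behaviour in C++). Fix: std::atomic<int> or a mutex.
int main() {
  int sum = 0;  // BUG: should be std::atomic<int>
  std::vector<std::thread> workers;
  for (int t = 0; t < 4; ++t) {
    workers.emplace_back([&sum] {
      for (int i = 0; i < 100000; ++i) sum += 1;  // racy update
    });
  }
  for (auto& w : workers) w.join();
  std::printf("sum = %d (expected 400000, typically less)\n", sum);
}
```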

