Numerical Algorithms Group Ltd (NAG) UK
23 Projects, page 1 of 5
Project 2008 - 2011
Partners: University of Hertfordshire, Numerical Algorithms Group Ltd (NAG) UK, QinetiQ, Qioptiq Ltd
Funder: UK Research and Innovation
Project Code: EP/F069383/1
Funder Contribution: 236,783 GBP

Given a Fortran program which numerically evaluates a scalar output y = f(x) from a vector x of input values, we are frequently interested in evaluating the gradient vector g = f'(x), whose components are the derivatives (sensitivities) dy/dx.

Automatic Differentiation is a set of techniques for automatically transforming the program that evaluates f into a program that evaluates f'. In particular, the adjoint, or reverse, mode of Automatic Differentiation can produce numerical values for all components of the gradient g at a computational cost of about three evaluations of f, even if there are millions of components in x and g. This is done by using the chain rule from calculus (applied to floating point numerical values rather than to symbolic expressions) to evaluate numerically the sensitivity of the output with respect to each floating point calculation performed. However, doing this requires making the program run backwards, since these sensitivities must be evaluated starting with dy/dy = 1 and ending with dy/dx = g, which is the reverse of the original order of calculation. It also requires the intermediate values calculated by f to be either stored on the forward pass or recomputed on the reverse pass by the adjoint program.

Phase II of the CompAD project has already produced the first industrial-strength Fortran compiler in the world able to perform this adjoint transformation (and reverse program flow) automatically. Previous Automatic Differentiation tools used either overloading (which was hard to optimize) or source transformation (which could not directly utilize low-level compiler facilities).

The adjoint Fortran compiler produced by Phase II is perfectly adequate for small to medium-sized problems (up to a few hundred input variables) and meets the objectives of the second phase of the project. However, even moderately large problems (many thousands of input variables) require the systematic use and placement of checkpoints in order to manage efficiently the trade-off between storage on the way forward and recomputation on the way back. With the present prototype, the user must place and manage these checkpoints explicitly. This is almost acceptable for experienced users with very large problems which they already understand well, but it is limiting and time-consuming for users without previous experience of Automatic Differentiation, and represents a barrier to the uptake of numerical methods based upon it. The objective of Phase III of the CompAD project is to automate the process of trading off storage and recomputation in a way which is close to optimal. Finding a trade-off which is actually optimal is known to be an NP-hard problem, so we are seeking solutions which are almost optimal in a particular sense.

Higher-order derivatives (e.g. directional Hessians) can be generated automatically by feeding parts of the compiler's own output back into it during the compilation process. We intend to improve the code transformation techniques used in the compiler to the point where almost optimally efficient higher-order derivative code can be generated automatically in this way.

A primary purpose of this project is to explore alternative algorithms and representations for program analysis and code transformation in order to solve certain hard problems and lay the groundwork for future progress with others. But we will be using some hard, leading-edge numerical applications from our industrial partners to guide and prove the new technology we develop, and the Fortran compiler resulting from this phase of the project is designed to be of widespread direct use in scientific computing.
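As an illustration of the adjoint (reverse) mode described above, here is a minimal Python sketch of the kind of code an adjoint transformation produces, for the tiny function y = sin(x1*x2) + x1**2. It is purely illustrative and is not CompAD output; all names are hypothetical. The forward pass stores the intermediate values that the reverse pass needs, and it is exactly this storage that checkpointing trades against recomputation in larger programs.

    # Minimal, illustrative reverse-mode (adjoint) sweep for
    #   y = f(x1, x2) = sin(x1 * x2) + x1**2
    # Hand-written sketch of the kind of code an adjoint transformation
    # produces; not CompAD output.
    import math

    def f_and_gradient(x1, x2):
        # Forward pass: compute y and store the intermediates the
        # reverse pass will need (this storage is what checkpointing
        # trades against recomputation in large programs).
        t1 = x1 * x2          # intermediate 1
        t2 = math.sin(t1)     # intermediate 2
        t3 = x1 * x1          # intermediate 3
        y = t2 + t3

        # Reverse pass: propagate sensitivities backwards, starting
        # from dy/dy = 1 and applying the chain rule to each floating
        # point operation in reverse order.
        y_b = 1.0
        t2_b = y_b                           # from y = t2 + t3
        t3_b = y_b
        t1_b = math.cos(t1) * t2_b           # from t2 = sin(t1)
        x1_b = x2 * t1_b + 2.0 * x1 * t3_b   # from t1 = x1*x2, t3 = x1**2
        x2_b = x1 * t1_b
        return y, (x1_b, x2_b)

    if __name__ == "__main__":
        y, g = f_and_gradient(1.5, 0.5)
        print("y =", y, "gradient =", g)     # gradient = (dy/dx1, dy/dx2)

For realistic programs the forward pass may create millions of intermediates, which is why Phase III aims to automate the placement of checkpoints that decide which intermediates are stored and which are recomputed.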
Project 2018 - 2019
Partners: UCL, N8 Research Partnership, Wolfram Research Europe Ltd, University of Edinburgh, 3DS, Dassault Systemes UK Ltd, Microsoft Research Ltd, The University of Manchester, Maplesoft, The Mathworks Ltd, University of Leeds, University of Salford, Numerical Algorithms Group Ltd (NAG) UK
Funder: UK Research and Innovation
Project Code: EP/N018958/2
Funder Contribution: 305,534 GBP

"Software is the most prevalent of all the instruments used in modern science" [Goble 2014]. Scientific software is not just widely used [SSI 2014] but also widely developed. Yet much of it is developed by researchers who have little understanding of even the basics of modern software development, with knock-on effects on their productivity and on the reliability, readability and reproducibility of their software [Nature 2015]. Many are long-tail researchers working in small groups - even Big Science operations like the SKA are operationally undertaken by individuals collectively.

Technological development in software is more like a cliff-face than a ladder - there are many routes to the top, to a solution. Further, the cliff face is dynamic, constantly and quickly changing as new technologies emerge and decline. Determining which technologies to deploy and how best to deploy them is in itself a specialist domain, with many features of traditional research. Researchers need empowerment and training to give them confidence with the available equipment and the challenges they face. This role, akin to that of an Alpine guide, involves support, guidance, and load carrying. When optimally performed it results in a researcher who knows which challenges they can attack alone and where they need appropriate support. Guides can help decide whether to exploit well-trodden paths or explore new possibilities as they navigate through this dynamic environment. These guides are highly trained, technology-centric, research-aware individuals with a curiosity-driven nature, dedicated to supporting researchers by forging a research software support career. Such Research Software Engineers (RSEs) guide researchers through the technological landscape and form a human interface between scientist and computer.

A well-functioning RSE group will not just add to an organisation's effectiveness, it will have a multiplicative effect, since it will make every individual researcher more effective. It has the potential to improve the quality of research done across all University departments and faculties. My work plan provides a bottom-up approach to providing RSE services that is distinctive from, yet complements, the top-down approach provided by the EPSRC-funded Software Sustainability Institute.

The outcomes of this fellowship will be:

Local and National RSE Capability: an RSE Group at Sheffield as a credible roadmap for others, pump-priming a UK national research software capability; and a national Continuing Professional Development programme for RSEs.

Scalable software support methods: a scalable approach, based on "nudging", to providing research software support for scientific software efficiency, sustainability and reproducibility, with quality guidelines for research software and guidance for researchers on how best to incorporate research software engineering support within their grant proposals.

HPC for long-tail researchers: 'HPC-software ramps' and a pathway for standardised integration of HPC resources into desktop applications fit for modern scientific computing; a network of HPC-centric RSEs based around shared resources; and a portfolio of new research software courses developed with partners.

Communication and public understanding: a communication campaign to raise the profile of research software, exploiting high-profile social media and online resources and establishing an informal forum for research software debate.

References
[Goble 2014] Goble, C. "Better Software, Better Research". IEEE Internet Computing 18(5): 4-8 (2014)
[SSI 2014] Hettrick, S. "It's impossible to conduct research without software, say 7 out of 10 UK researchers". http://www.software.ac.uk/blog/2014-12-04-its-impossible-conduct-research-without-software-say-7-out-10-uk-researchers (2014)
[Nature 2015] Editorial, "Rule rewrite aims to clean up scientific software". Nature 520(7547), April 2015
Project 2022 - 2024
Partners: University of Sheffield, NVIDIA, Numerical Algorithms Group Ltd (NAG) UK
Funder: UK Research and Innovation
Project Code: EP/X019349/1
Funder Contribution: 161,026 GBP

Research in Particle Physics (PP) in the next decade will be dominated by a 10x increase in the amount of experimental data, leading to unprecedented precision. Analysing and interpreting these data requires advanced simulation techniques and is an important use-case for Exascale computing worldwide. This project aims to develop novel algorithms and paradigms for large-scale simulations to maximise the performance extracted from the heterogeneous parallel hardware architectures being deployed at large HPC centres across the world. The ExaTEPP proposal puts the particle physics use-case at the centre of the ExCALIBUR programme, through the use of existing and future testbeds and the collaboration and exchange of ideas with other working groups. Our goal is to develop the tools needed in the UK to exploit HPC in the next decade and to focus on the transferable skills acquired by RSEs working on this use-case.

Research projects in both theoretical and experimental particle physics are based on large international collaborations, and collaborative values are deeply embedded in the research culture of the field. ExaTEPP is built upon existing international collaborations with the goal of providing world-leading contributions to future developments. Collaboration with industry is crucial to gain and exchange technical knowledge and to fully exploit advancements in both hardware and software. Leading HPC companies have endorsed the activities of ExaTEPP, committing their representatives to contribute actively to our programme and to the management board of the project in order to foster a dynamic, bidirectional knowledge exchange.

The activities of ExaTEPP are strongly aligned with the four pillars of the ExCALIBUR programme. While delivering the new software needed by the community, ExaTEPP will contribute directly to advancing the ExCALIBUR goals, integrate with cross-cutting themes and exploit the available hardware testbeds for software optimisation. The proposal is structured into three work packages (WPs). WP1 focuses on training, knowledge exchange and communication with other ExCALIBUR working groups representing other scientific disciplines in the UK. WP2 focuses on the development of simulations on HPC systems as an essential tool to address urgent particle physics questions that dominate the international research landscape and are highly relevant for UK science, such as the nature of the Higgs boson or the understanding of the muon gyromagnetic factor (g-2). Benchmarking work is proposed in WP3 to monitor the efficiency of the software developed, maximising the physics output per kWh of power and contributing to the decarbonisation agenda.

Our work will primarily impact the scientific community, both in our specific fields and more broadly in high-performance scientific computing, including the wider ExCALIBUR programme and the supercomputing industry. Our outputs will be disseminated in the PP scientific community through participation in conferences, the organisation of workshops and training events, and scientific publications in highly reputed journals. To promote and disseminate the code and the material that we shall develop, we will open events such as hackathons and schools to other ExCALIBUR-funded working groups, to industry and to the wider community. Contributions to already open-source software will be made available following the development processes of each project; new projects will be made available as open source through publicly accessible repositories (e.g. GitHub), and we will work with the authors of any currently proprietary software touched by the project to enable them to open-source their projects. The training material will similarly be freely licensed and made available on dedicated open web sites and YouTube channels.
Project 2016 - 2018
Partners: UCL, Dassault Systèmes (United Kingdom), Dassault Systemes UK Ltd, 3DS, Microsoft Research Ltd, The University of Manchester, University of Edinburgh, Wolfram Research Europe Ltd, N8 Research Partnership, Maplesoft, The Mathworks Ltd, Cybernet Systems Corporation (Canada), University of Sheffield, University of Leeds, University of Salford, Numerical Algorithms Group Ltd (NAG) UK
Funder: UK Research and Innovation
Project Code: EP/N018958/1
Funder Contribution: 507,674 GBP

"Software is the most prevalent of all the instruments used in modern science" [Goble 2014]. Scientific software is not just widely used [SSI 2014] but also widely developed. Yet much of it is developed by researchers who have little understanding of even the basics of modern software development, with knock-on effects on their productivity and on the reliability, readability and reproducibility of their software [Nature 2015]. Many are long-tail researchers working in small groups - even Big Science operations like the SKA are operationally undertaken by individuals collectively.

Technological development in software is more like a cliff-face than a ladder - there are many routes to the top, to a solution. Further, the cliff face is dynamic, constantly and quickly changing as new technologies emerge and decline. Determining which technologies to deploy and how best to deploy them is in itself a specialist domain, with many features of traditional research. Researchers need empowerment and training to give them confidence with the available equipment and the challenges they face. This role, akin to that of an Alpine guide, involves support, guidance, and load carrying. When optimally performed it results in a researcher who knows which challenges they can attack alone and where they need appropriate support. Guides can help decide whether to exploit well-trodden paths or explore new possibilities as they navigate through this dynamic environment. These guides are highly trained, technology-centric, research-aware individuals with a curiosity-driven nature, dedicated to supporting researchers by forging a research software support career. Such Research Software Engineers (RSEs) guide researchers through the technological landscape and form a human interface between scientist and computer.

A well-functioning RSE group will not just add to an organisation's effectiveness, it will have a multiplicative effect, since it will make every individual researcher more effective. It has the potential to improve the quality of research done across all University departments and faculties. My work plan provides a bottom-up approach to providing RSE services that is distinctive from, yet complements, the top-down approach provided by the EPSRC-funded Software Sustainability Institute.

The outcomes of this fellowship will be:

Local and National RSE Capability: an RSE Group at Sheffield as a credible roadmap for others, pump-priming a UK national research software capability; and a national Continuing Professional Development programme for RSEs.

Scalable software support methods: a scalable approach, based on "nudging", to providing research software support for scientific software efficiency, sustainability and reproducibility, with quality guidelines for research software and guidance for researchers on how best to incorporate research software engineering support within their grant proposals.

HPC for long-tail researchers: 'HPC-software ramps' and a pathway for standardised integration of HPC resources into desktop applications fit for modern scientific computing; a network of HPC-centric RSEs based around shared resources; and a portfolio of new research software courses developed with partners.

Communication and public understanding: a communication campaign to raise the profile of research software, exploiting high-profile social media and online resources and establishing an informal forum for research software debate.

References
[Goble 2014] Goble, C. "Better Software, Better Research". IEEE Internet Computing 18(5): 4-8 (2014)
[SSI 2014] Hettrick, S. "It's impossible to conduct research without software, say 7 out of 10 UK researchers". http://www.software.ac.uk/blog/2014-12-04-its-impossible-conduct-research-without-software-say-7-out-10-uk-researchers (2014)
[Nature 2015] Editorial, "Rule rewrite aims to clean up scientific software". Nature 520(7547), April 2015
Project 2015 - 2018
Partners: QUB, Numerical Algorithms Group Ltd (NAG) UK
Funder: UK Research and Innovation
Project Code: EP/M01147X/1
Funder Contribution: 963,928 GBP

Moore's Law and Dennard scaling have led to dramatic performance increases in microprocessors, the basis of modern supercomputers, which consist of clusters of nodes that include microprocessors and memory. This design is deeply embedded in parallel programming languages, the runtime systems that orchestrate parallel execution, and computational science applications. Some deviations from this simple, symmetric design have occurred over the years, but we have now pushed transistor scaling to the extent that simplicity is giving way to complex architectures. Dennard scaling has not held for about a decade; its end, together with the atomic dimensions of transistors, has profound implications for the architecture of current and future supercomputers.

Scalability limitations will arise from insufficient data access locality. Exascale systems will have up to 100x more cores and commensurately less memory space and bandwidth per core. However, in-situ data analysis, motivated by decreasing file system bandwidths, will increase the memory footprints of scientific applications. Thus, we must improve per-core data access locality and reduce contention and interference for shared resources.

Energy constraints will fundamentally limit the performance and reliability of future large-scale systems. These constraints lead many to predict a phenomenon of "dark silicon" in which half or more of the transistors on each chip must be powered down for safe operation. Low-power processor technologies based on sub-threshold or near-threshold voltage operation are a viable alternative. However, these techniques dramatically decrease the mean time to failure at scale and thus require new paradigms to sustain throughput and correctness.

Non-deterministic performance variation will arise from design process variation that leads to asymmetric performance and power consumption in architecturally symmetric hardware components. The manifestations of these asymmetries are non-deterministic and can vary with small changes to system components or software. This performance variation produces non-deterministic, non-algorithmic load imbalance.

Reliability limitations will stem from the massive number of system components, which proportionally reduces the mean time to failure, but also from component wear and from low-voltage operation, which introduces timing errors. Infrastructure-level power capping may also compromise application reliability or create severe load imbalances.

The impact of these changes in technology will travel as a shockwave throughout the software stack. For decades, we have designed computational science applications on the strict assumptions that performance is uniform and processors are reliable. In the future, hardware will behave unpredictably, at times erratically. Software must compensate for this behaviour.

Our research anticipates this future hardware landscape. Our ecosystem will combine binary adaptation, code refactoring, and approximate computation to prepare computational science and engineering (CSE) applications. We will provide them with scale-freedom - the ability to run well at scale under dynamic execution conditions - with at most limited, platform-agnostic code refactoring. Our software will provide automatic load balancing and concurrency throttling to tame non-deterministic performance variations. Finally, our new form of user-controlled approximate computation will enable execution of CSE applications on hardware with low supply voltages, or any form of faulty hardware, by selectively dropping or tolerating erroneous computation that arises from unreliable execution, thus saving energy.

Cumulatively, these tools will enable non-intrusive re-engineering of major computational science libraries and applications (2DRMP, Code_Saturne, DL_POLY, LB3D) and prepare them for the next generation of UK supercomputers. The project partners with NAG, a leading UK HPC software and service provider.
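As a purely hypothetical illustration of the approximate-computation idea described above (not the project's actual ecosystem), the sketch below splits a reduction into partial results, checks each one with a cheap plausibility test, and drops contributions that fail rather than aborting the run, rescaling the accepted partials to compensate. The function names, the simulated fault model and the acceptance bound are all invented for the example.

    # Hypothetical sketch of selectively dropping erroneous contributions
    # from an unreliable compute element, trading a little accuracy for
    # continued execution (and energy savings) instead of failing.
    import math
    import random

    def unreliable_partial_sum(chunk, fault_rate=0.01):
        # Simulate a device that occasionally returns garbage, standing in
        # for timing errors under near-threshold voltage operation.
        s = sum(x * x for x in chunk)
        if random.random() < fault_rate:
            return s * random.uniform(-1e6, 1e6)   # silently corrupted result
        return s

    def approximate_sum_of_squares(data, chunk_size=1000):
        # Split the work, accept only plausible partial results, and rescale
        # the accepted partials to compensate for any dropped chunks.
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        bound = chunk_size * max(abs(x) for x in data) ** 2
        accepted = []
        for chunk in chunks:
            partial = unreliable_partial_sum(chunk)
            if math.isfinite(partial) and 0.0 <= partial <= bound:
                accepted.append(partial)   # tolerate: keep a plausible result
            # else: drop the erroneous contribution instead of aborting
        if not accepted:
            raise RuntimeError("all partial results were rejected")
        return sum(accepted) * len(chunks) / len(accepted)

    if __name__ == "__main__":
        data = [random.gauss(0.0, 1.0) for _ in range(100_000)]
        print(approximate_sum_of_squares(data))

A real system would presumably expose the acceptance test, the drop-or-tolerate decision and the accuracy target to the application; the sketch only hints at that kind of user control.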
