IBM UK Labs Ltd
27 Projects, page 1 of 6
Project 2006 - 2011
Partners: Airbus (Germany), DaimlerChrysler AG Germany, Motorola Ltd, University of York, IBM (United States), TTPCom Ltd, Motorola, IBM (United Kingdom), IBM UK Labs Ltd
Funder: UK Research and Innovation
Project Code: EP/D050618/1
Funder Contribution: 784,416 GBP

Current software engineering practice is a human-led search for solutions which meet needs and constraints under limited resources. Often there will be conflict, both between and within functional and non-functional criteria. Naturally, like other engineers, we search for a near-optimal solution. As systems get bigger, more distributed, more dynamic and more critical, this labour-intensive search will hit fundamental limits. We will not be able to continue to develop, operate and maintain systems in the traditional way without automating, or partly automating, the search for near-optimal solutions. Automated search-based solutions have a track record of success in other engineering disciplines characterised by a large number of potential solutions, where there are many complex, competing and conflicting constraints and where construction of a perfect solution is either impossible or impractical. The SEMINAL network demonstrated that these techniques provide robust, cost-effective and high-quality solutions for several problems in software engineering. Successes to date can be seen as strong pointers to search having great potential to serve as an overarching solution paradigm. The SEBASE project aims to provide a new approach to the way in which software engineering is understood and practised. It will move software engineering problems from human-based search to machine-based search. As a result, human effort will move up the abstraction chain, to focus on guiding the automated search rather than performing it. This project will address key issues in software engineering, including scalability, robustness, reliability and stability. It will also study the theoretical foundations of search algorithms and apply the insights gained to develop more effective and efficient search algorithms for large and complex software engineering problems. Such insights will have a major impact on the search algorithm community as well as the software engineering community.
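To make "machine-based search" concrete, the sketch below applies a simple hill climber to a classic search-based software engineering task: selecting a regression test subset that trades coverage against execution cost. The test names, coverage sets and fitness weighting are invented for illustration; SEBASE itself is not tied to any single algorithm or problem.

```python
import random

# Toy instance: each test covers some branches and has a runtime cost.
# These values are illustrative placeholders.
TESTS = {
    "t1": ({"b1", "b2"}, 3.0),
    "t2": ({"b2", "b3", "b4"}, 5.0),
    "t3": ({"b4"}, 1.0),
    "t4": ({"b1", "b5"}, 2.0),
}

def fitness(subset):
    """Reward branch coverage, penalise total execution cost."""
    covered = set().union(*(TESTS[t][0] for t in subset))
    cost = sum(TESTS[t][1] for t in subset)
    return len(covered) - 0.1 * cost

def hill_climb(iterations=1000, seed=0):
    """Machine-based search: flip one test in or out, keep improvements."""
    rng = random.Random(seed)
    current = {t for t in TESTS if rng.random() < 0.5}
    for _ in range(iterations):
        neighbour = set(current)
        neighbour.symmetric_difference_update({rng.choice(list(TESTS))})
        if fitness(neighbour) >= fitness(current):
            current = neighbour
    return current

print(hill_climb())  # a near-optimal subset, not guaranteed optimal
```

Note where the human effort sits in this sketch: defining the fitness function and the neighbourhood operator. The search itself is delegated to the machine, which is the move "up the abstraction chain" the abstract describes.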
Project 2011 - 2016
Partners: Scottish and Southern Energy, Mott Macdonald, Scottish and Southern Energy SSE plc, Agilent Technologies (United States), E.ON E&P UK Ltd, University of Strathclyde, Agilent Technologies UK Ltd, Accenture (United Kingdom), National Grid PLC, KEMA, National Grid, P B Power, E ON Central Networks plc, Agilent Technologies (United Kingdom), IBM (United States), IBM (United Kingdom), IBM UK Labs Ltd, Accenture, Mott Macdonald (United Kingdom), Accenture plc (UK)
Funder: UK Research and Innovation
Project Code: EP/I031650/1
Funder Contribution: 3,429,100 GBP

This proposal focuses on the electricity network of 2050. In the move to a decarbonised energy network, the heat and transport sectors will be fully integrated into the electricity system. Therefore, the grand challenge in energy networks is to deliver the fundamental changes in the electrical power system that will support this transition, without being constrained by the current infrastructure, operational rules, market structure, regulations and design guidelines. The drivers that will shape the 2050 electricity network are numerous: increasing energy prices; increased variability in the availability of generation; reduced system inertia; increased utilisation due to growth of loads such as electric vehicles and heat pumps; electric vehicles as randomly roving loads and energy storage; increased levels of distributed generation; a more diverse range of energy sources contributing to electricity generation; and increased customer participation. These changes mean that the energy networks of the future will be far more difficult to manage and design than those of today, for technical, social and commercial reasons. In order to cater for this complexity, future energy networks must be organised to provide increased flexibility and controllability through the provision of appropriate real-time decision-making techniques. These techniques must coordinate the simultaneous operation of a large number of diverse components and functions, including storage devices, demand-side actions, network topology, data management, electricity markets, electric vehicle charging regimes, dynamic rating systems, distributed generation, network power flow management, fault level management, supply restoration and fuel choice. Additionally, future flexible grids will present many more options for energy trading philosophies and investment decisions. The risks and implications associated with these decisions and the real-time control of the networks will be harder to identify and quantify due to the increased uncertainty and complexity. We propose the design of an autonomic power system for 2050 as the grand challenge to be investigated. This draws upon the computer science community's vision of autonomic computing and extends it into the electricity network. The concept is based on biological autonomic systems that set high-level goals but delegate the decision making on how to achieve them to lower-level intelligence. No centralised control is evident, and behaviour often emerges from low-level interactions. This allows highly complex systems to achieve real-time and just-in-time optimisation of operations.
We believe that this approach will be required to manage the complex trans-national power system of 2050, with its many millions of active devices. The autonomic power system will be self-configuring, self-healing, self-optimising and self-protecting. This proposal is not focused on the application of established autonomic computing techniques to power systems (such techniques do not yet exist) but on the design of an autonomic power system, which relies on distributed intelligence and localised goal setting. This is a significant step forward from the current Smart Grid vision and roadmaps. The autonomic power system is a completely integrated and distributed control system which self-manages and optimises all network operational decisions in real time. To deliver this, fundamental research is required to determine the level of distributed control achievable (or the balance between distributed, centralised and hierarchical controls) and its impact on investment decisions, resilience, risk and control of a transnational interconnected electricity network. The research within the programme is ambitious and challenges many current philosophies and design approaches. It is also multi-disciplinary, and will foster cross-fertilisation between power systems, complexity science, computer science, mathematics, economics and the social sciences.
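The "self-*" properties above come from autonomic computing's monitor-analyse-plan-execute (MAPE) loop. As a rough illustration of how a single network node might pursue a delegated high-level goal, the sketch below implements one such cycle; the state fields, threshold and action names are all hypothetical, and the project's actual control architecture is precisely the open research question.

```python
from dataclasses import dataclass

@dataclass
class NodeState:
    load_kw: float      # current demand on this node
    capacity_kw: float  # rated capacity
    stored_kwh: float   # locally available storage

GOAL_HEADROOM = 0.15  # delegated high-level goal: keep >= 15% headroom

def analyse(state: NodeState) -> float:
    """Analyse: derive capacity headroom from the monitored state."""
    return 1.0 - state.load_kw / state.capacity_kw

def plan(state: NodeState, headroom: float) -> list:
    """Plan: choose local corrective actions against the goal."""
    if headroom >= GOAL_HEADROOM:
        return []                       # goal met, do nothing
    if state.stored_kwh > 0:
        return ["discharge_storage"]    # self-optimising response
    return ["request_demand_response"]  # negotiate with neighbours

def mape_step(state: NodeState) -> list:
    """One monitor-analyse-plan-execute cycle for a single node."""
    actions = plan(state, analyse(state))
    for action in actions:
        print("node executes:", action)  # Execute: stand-in for actuation
    return actions

# Illustrative run: an overloaded node that still has local storage.
mape_step(NodeState(load_kw=95.0, capacity_kw=100.0, stored_kwh=12.0))
```

Millions of such agents, each acting on local state and peer negotiation rather than central dispatch, is the distributed-intelligence picture the proposal sketches.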
Project 2013 - 2015
Partners: University of Bristol, Forum for the Future, Mobile Pie, RWE npower, RWE Generation, Bristol Green Doors, IBM UK Labs Ltd, Bristol City Council, IBM (United States), IBM (United Kingdom)
Funder: UK Research and Innovation
Project Code: EP/K012509/1
Funder Contribution: 235,650 GBP

Domestic environmental technologies (DETs) such as solid wall insulation, ground source heat pumps and rainwater harvesting have an important role to play if the UK is to meet its environmental objectives around carbon emissions, water conservation and energy use. Many such technologies are cost-effective and simple to install, and schemes such as the forthcoming Green Deal make them financially possible for more people. However, if they are to become widely adopted they must be seen as a 'social norm' within communities. An effective way to do this is to encourage interaction between 'local experts' who have installed such technologies and their neighbours. In this way, best practice can be spread through a community. Digital technology can be used to promote this, by providing information about local experts, mediating communication, creating enjoyable games through which people interact, and rewarding those who contribute their time. This project will work with Bristol Green Doors, a community interest company which promotes events to support communities in shared learning around DETs, to develop and assess a set of distributed mobile services to inform, entertain and engage local community members in sharing best practice. It will also look at how local business recommendations that emerge from such sharing can be tracked and assessed for effectiveness, and so potentially monetised in an online business model. The project will investigate (i) whether and how digital technology can be used to catalyse the spread of best practice within communities, and (ii) whether and how this results in a demonstrable impact on the local economy which can be digitally tracked. Its four objectives are:
A. Understand how best-practice sharing around DETs currently takes place in the community, what barriers there are, and what ideas stakeholders have for improving it.
B. Develop, in collaboration with the community, a set of distributed services to support best-practice sharing and recommendation tracking.
C. Assess the effectiveness in spreading best practice, acceptability to the community, and impact on local business of the different feature sets and functionality the distributed services provide.
D. Assess the project with regard to the generality of lessons and insights; identify both general and more situation-specific learnings for use in the digital enablement of community best-practice sharing and the stimulation of local business.
The resulting services will be deployed in the Bristol area by Bristol Green Doors, resulting in increased engagement with DETs by the community. The service platform will be released open source, for use by other organisations involved in community engagement with DETs, and training will be provided through engagement workshops and documentation.
The more general research results will be shared, through workshops and publication in both academic and popular venues, with businesses, policy makers and community organisations interested in spreading best practice within communities.
Project 2008 - 2013
Partners: Atkins UK, Imperial College London, BP British Petroleum, BP Exploration Operating Company Ltd, C S C Computer Sciences Ltd, IBM (United States), Ove Arup & Partners Ltd, BP International, IBM UK Labs Ltd, Laing O'Rourke, IBM (United Kingdom), Arup Group, Southern Housing Group, Arup Group Ltd, CSC (UK) Ltd, Laing O'Rourke plc, B P International Ltd, BP (UK), BP (International), GlaxoSmithKline R & D Ltd, GlaxoSmithKline
Funder: UK Research and Innovation
Project Code: EP/F036930/1
Funder Contribution: 5,419,790 GBP

This proposal sets out the terms for the continuation funding of the IMRC at Imperial College. All objectives, research plans and beneficiary information have previously been approved through the third-year review of the existing Centre.
Project 2007 - 2011
Partners: Imperial College London, Reuters Ltd, IBM (United States), IBM (United Kingdom), IBM UK Labs Ltd
Funder: UK Research and Innovation
Project Code: EP/F010192/1
Funder Contribution: 471,268 GBP

Two key R&D questions emerge from the recent, unprecedentedly rapid growth in both data volumes and storage capacity: how best to map data onto physical disk devices, and on what factors to base the choice of this mapping? From a user perspective, it is important to ensure that an adequate quality of service (as dictated by application requirements) is delivered at reasonable cost. Additionally, since the total cost of ownership of disk storage is dominated by administration and management activities, ease of management and autonomous operation are vital. The technology pull outlined above has led naturally to the development and widespread adoption of virtualised storage infrastructures that incorporate intelligent storage fabrics. The physical resources underlying such systems are organised into storage tiers, each of which delivers a different cost/capacity ratio against a certain quality of service. Besides providing a single point of control and a uniform view of storage system components across all tiers, an important management challenge for the intelligent storage fabric is to place data onto the most appropriate tier and then migrate it from one tier to another as the access profile evolves. Device selection and data placement within tiers are also critical. For example, to support the performance requirements of video streaming applications, it may be necessary to stripe video data across a number of RAID sub-systems, leveraging not only the capacity of the storage devices but also the performance of several RAID controllers.

A popular platform for implementing high-performance virtualised storage systems is the Storage Area Network (SAN), a high-speed, special-purpose network that interconnects different kinds of storage devices with associated data servers. Several commercial vendors offer SAN-based storage solutions, including IBM, NetApp, EMC, Hitachi and Compellent. According to the published literature, the mechanisms for fabric intelligence in these systems are relatively simple, with inter-tier migration policies that are centred on capacity utilisation and failure recovery and that are not sensitive to any dimension of the access profile other than access frequency. The most sophisticated tiered SAN available today offers fixed-interval block-level data migration based on access frequency. All data within a tier is subject to a single static protection level, and each tier has separate, static address spaces for live data and snapshots. Consequently, data-specific quality of service cannot be guaranteed, and space utilisation is potentially inefficient; large enterprises are therefore reluctant to adopt storage virtualisation for mission-critical applications.

The focus of the present proposal is to develop more sophisticated fabric intelligence that is able to autonomously and transparently migrate data across tiers and organise data within tiers to deliver the required quality of service in terms of factors such as response time, availability, reliability, resilience, storage cost and power utilisation.
This composite goal entails both the provision of intelligent data placement and migration strategies and the development of performance evaluation tools to assess their benefits quantitatively. The project is backed by two industrial partners who have committed senior technical staff to help us validate our work in a realistic context. The news agency Reuters will provide the focus of our primary case study by helping us to understand their data architecture and storage-related quality-of-service requirements. The storage development team at IBM (Hursley), who design and implement Storage Area Network controllers, will provide us with I/O workload traces, will host a project PhD student for six months, and will provide insights into the operation of state-of-the-art SAN-based storage solutions.
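For concreteness, the sketch below shows the kind of frequency-only, fixed-interval migration policy the abstract criticises: a periodic scan promotes the most frequently accessed blocks to the fast tier and demotes the rest, blind to every other dimension of the access profile (sequentiality, burstiness, QoS class). The block names, tier size and trace format are all invented.

```python
import heapq
from collections import defaultdict

FAST_TIER_BLOCKS = 2  # hypothetical capacity of the fast tier, in blocks

def migration_plan(access_trace, fast_tier):
    """Frequency-only policy: rank blocks by access count alone."""
    freq = defaultdict(int)
    for block in access_trace:
        freq[block] += 1
    hottest = set(heapq.nlargest(FAST_TIER_BLOCKS, freq, key=freq.get))
    return hottest - fast_tier, fast_tier - hottest  # (promote, demote)

# Illustrative trace: b7 and b3 are hot; b1 and b9 have gone cold.
trace = ["b7", "b7", "b3", "b7", "b9", "b3", "b3", "b1"]
promote, demote = migration_plan(trace, fast_tier={"b1", "b9"})
print("promote:", promote)  # {'b3', 'b7'}
print("demote:", demote)    # {'b1', 'b9'}
```

The fabric intelligence proposed here would instead weigh response time, availability, reliability, storage cost and power against per-dataset quality-of-service targets when making these placement and migration decisions.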
