IBM (United Kingdom)

108 Projects
  • Funder: UK Research and Innovation Project Code: EP/R002320/1
    Funder Contribution: 60,027 GBP

    Energy system modelling has been driven by, at best, annual data series at national or regional level. The roll-out of smart meters, along with the increasing availability of new forms of user data from crowdsourced platforms such as social media, mobile phones and apps, offers an immense opportunity to improve our understanding of consumers' energy behaviours and preferences, and of the UK's changing energy mix, in near real time and at a fine geographical resolution. Combining this data with data collected from other, non-energy domains, and using techniques such as machine learning and hierarchical analytic methods, means that future energy system research can recognise tipping points, emerging patterns, interdependencies and end-user behaviours in near real time. Beyond creating a world-leading, state-of-the-art research programme, generating such insights is important both for industry and for policy. On the former, understanding consumer demand patterns and the evolution of the generation mix in near real time would enable more effective operation of the network in a future energy system supplied by intermittent renewable resources. Yet the trajectory of this low-carbon transition is highly uncertain, as characterised by the large number of future energy system scenarios. Moreover, combining and linking data from multiple sources can support the development of new services, firms and business models. These new approaches can also contribute to developing a more nuanced policy response to consumer behaviours, one that recognises differences across the energy system in terms of the diversity of actors, socio-economic, geographic and network characteristics, demand patterns, and the interdependencies of the energy sector with other sectors such as transport. Otherwise, the risks are a widening of existing socio-economic differences and tipping points that lead to major bottlenecks on the networks and exacerbate social inequalities.

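The abstract's mention of machine learning and hierarchical analytic methods applied to smart-meter data suggests, for example, clustering household load profiles to surface emerging demand patterns. The sketch below is a minimal illustration of that idea only, not part of the project: the synthetic half-hourly profiles and the choice of Ward-linkage hierarchical clustering are assumptions.

```python
# Minimal sketch: hierarchical clustering of (synthetic) daily smart-meter
# load profiles to group households with similar demand patterns.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# 200 households x 48 half-hourly readings (kWh) -- a synthetic stand-in
# for real smart-meter data.
base_shapes = np.array([
    np.sin(np.linspace(0, 2 * np.pi, 48)) + 1.5,  # evening-peak-like shape
    np.cos(np.linspace(0, 2 * np.pi, 48)) + 1.5,  # morning-peak-like shape
])
profiles = np.vstack([
    base_shapes[i % 2] + 0.2 * rng.standard_normal(48)
    for i in range(200)
])

# Ward-linkage agglomerative clustering, cut into 4 groups.
Z = linkage(profiles, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")

for k in sorted(set(labels)):
    members = profiles[labels == k]
    print(f"cluster {k}: {len(members)} households, "
          f"mean peak at half-hour {int(members.mean(axis=0).argmax())}")
```
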
  • Funder: UK Research and Innovation Project Code: EP/F010192/1
    Funder Contribution: 471,268 GBP

    Two key R&D questions emerge from the recent, unprecedentedly rapid growth in both data and storage capacity: how best to map data onto physical disk devices, and on what factors to base the choice of this mapping? From a user perspective, it is important to ensure that an adequate quality of service (as dictated by application requirements) is delivered at reasonable cost. Additionally, since the total cost of ownership of disk storage is dominated by administration and management activities, ease of management and autonomous operation are vital.

    The technology pull outlined above has led naturally to the development and widespread adoption of virtualised storage infrastructures that incorporate intelligent storage fabrics. The physical resources underlying such systems are organised into storage tiers, each of which delivers a different cost/capacity ratio against a certain quality of service. Besides providing a single point of control and a uniform view of storage system components across all tiers, an important management challenge for the intelligent storage fabric is to place data onto the most appropriate tier and then migrate it from one tier to another as the access profile evolves. Device selection and data placement within tiers are also critical. For example, to support the performance requirements of video streaming applications, it may be necessary to stripe video data across a number of RAID sub-systems, leveraging not only the capacity of the storage devices but also the performance of several RAID controllers.

    A popular platform for implementing high-performance virtualised storage systems is the Storage Area Network (SAN). This is a high-speed, special-purpose network that interconnects different kinds of storage devices with associated data servers. Several commercial vendors offer SAN-based storage solutions, including IBM, NetApp, EMC, Hitachi and Compellent. According to the published literature, the mechanisms for fabric intelligence in these systems are relatively simple, with inter-tier migration policies that are centred on capacity utilisation and failure recovery and that are not sensitive to any dimension of the access profile other than access frequency. The most sophisticated tiered SAN available today offers fixed-interval, block-level data migration based on access frequency. All data within a tier is subject to a single static protection level, and each tier has separate, static address spaces for live data and snapshots. Consequently, data-specific quality of service cannot be guaranteed and space utilisation is potentially inefficient; large enterprises are therefore reluctant to adopt storage virtualisation for mission-critical applications.

    The focus of the present proposal is to develop more sophisticated fabric intelligence that is able to autonomously and transparently migrate data across tiers, and organise data within tiers, to deliver the required quality of service in terms of factors such as response time, availability, reliability, resilience, storage cost and power utilisation. This composite goal entails both the provision of intelligent data placement and migration strategies and the development of performance evaluation tools to assess their benefits quantitatively.

    The project is backed by two industrial partners who have committed senior technical staff to help us validate our work in a realistic context. The news agency Reuters will provide the focus of our primary case study by helping us to understand their data architecture and storage-related quality-of-service requirements. The storage development team at IBM (Hursley), who design and implement Storage Area Network controllers, will provide us with I/O workload traces, will host a project PhD student for six months, and will provide us with insights into the operation of state-of-the-art SAN-based storage solutions.

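The abstract describes inter-tier migration policies driven by access frequency. Purely as a toy illustration of that idea (the tier names, thresholds and extent granularity below are invented for the sketch, not taken from any vendor's controller), a frequency-based placement policy might look like this:

```python
# Toy sketch of a frequency-based tier-migration policy: "hot" extents are
# promoted to the fast tier, "cold" extents are demoted to the cheap tier.
# Tier names, thresholds and extent identifiers are illustrative assumptions.
from collections import defaultdict

class TieredStore:
    def __init__(self, hot_threshold=100, cold_threshold=10):
        self.tier = {}                       # extent id -> "ssd" | "hdd"
        self.access_count = defaultdict(int)
        self.hot_threshold = hot_threshold
        self.cold_threshold = cold_threshold

    def record_io(self, extent_id):
        """Count one read/write against an extent (new extents start on hdd)."""
        self.tier.setdefault(extent_id, "hdd")
        self.access_count[extent_id] += 1

    def rebalance(self):
        """Periodic migration pass; counters are reset for the next window."""
        migrations = []
        for extent_id, count in self.access_count.items():
            current = self.tier[extent_id]
            if count >= self.hot_threshold and current != "ssd":
                self.tier[extent_id] = "ssd"
                migrations.append((extent_id, "hdd -> ssd"))
            elif count <= self.cold_threshold and current != "hdd":
                self.tier[extent_id] = "hdd"
                migrations.append((extent_id, "ssd -> hdd"))
        self.access_count.clear()
        return migrations

store = TieredStore()
for _ in range(150):
    store.record_io("extent-42")   # frequently accessed extent
store.record_io("extent-7")        # rarely accessed extent
print(store.rebalance())           # [('extent-42', 'hdd -> ssd')]
```

A real fabric would, as the abstract notes, weigh many more dimensions than access frequency (response time, availability, resilience, cost, power); this sketch only shows the basic promote/demote loop.
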
  • Funder: UK Research and Innovation Project Code: EP/J017728/2
    Funder Contribution: 2,667,740 GBP

    SOCIAM - Social Machines - will research pioneering methods of supporting purposeful human interaction on the World Wide Web, of the kind exemplified by phenomena such as Wikipedia and Galaxy Zoo. These collaborations are empowering, as communities identify and solve their own problems, harnessing their commitment, local knowledge and embedded skills without having to rely on remote experts or governments. Such interaction is characterised by a new kind of emergent, collective problem solving, in which we see (i) problems solved by very large-scale human participation via the Web, (ii) access to, or the ability to generate, large amounts of relevant data using open data standards, (iii) confidence in the quality of the data, and (iv) intuitive interfaces.

    "Machines" used to be programmed by programmers and used by users. The Web, and the massive participation in it, has dissolved this boundary: we now see configurations of people interacting with content and each other, typified by social web sites. Rather than dividing between the human and machine parts of the collaboration (as computer science has traditionally done), we should draw a line around them and treat each such assembly as a machine in its own right, comprising digital and human components: a Social Machine. This crucial transition in thinking acknowledges the reality of today's sociotechnical systems. This view is of an ecosystem not of humans and computers but of co-evolving Social Machines.

    The ambition of SOCIAM is to enable us to build social machines that solve the routine tasks of daily life as well as the emergencies. Its aim is to develop the theory and practice needed to create the next generation of decentralised, data-intensive social machines; understanding the attributes of the current generation of successful social machines will help us build the next. The research undertakes four necessary tasks. First, we need to discover how social computing can emerge, given that society has to undertake much of the burden of identifying problems, designing solutions and dealing with the complexity of the problem solving; online, scalable algorithms need to be put to the service of the users. This leads to the second task, providing seamless access to a Web of Data, including user-generated data. Third, we need to understand how to make social machines accountable and to build the trust essential to their operation. Fourth, we need to design the interactions between all elements of social machines: between machine and human, between humans mediated by machines, and between machines, humans and the data they use and generate.

    SOCIAM's work will be empirically grounded by a Social Machines Observatory to track, monitor and classify existing social machines and new ones as they evolve, and to act as an early-warning facility for disruptive new social machines. These lines of interlinked research will initially be tested and evaluated in the context of real-world applications in health, transport, policing and the drive towards open data cities (where all public data across an urban area is linked together), in collaboration with SOCIAM's partners. Putting research ideas into the field to encounter unvarnished reality provides a check on their utility and durability. For example, the Open City application will seek to harness citywide participation in shared problems (e.g. in health, transport and policing), exploiting common open data resources.

    SOCIAM will undertake a breadth of integrated research, engaging with real application contexts, including the use of our observatory for longitudinal studies, to provide cutting-edge theory and practice for social computation and social machines. It will support fundamental research, the creation of a multidisciplinary team, and collaboration with industry and government in the realisation of the research; it will promote growth and innovation and, most importantly, have impact in changing the direction of ICT.

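One of SOCIAM's four tasks is seamless access to a Web of Data through open standards. Purely to illustrate what that kind of access looks like in practice (the endpoint, query and identifiers below are generic public examples, not SOCIAM artefacts), linked data can be retrieved over SPARQL:

```python
# Illustrative only: querying a public linked-data endpoint with SPARQL,
# the open standard behind the "Web of Data". The endpoint and query are
# generic examples, not components of the SOCIAM project.
from SPARQLWrapper import SPARQLWrapper, JSON

# Many public endpoints ask for a descriptive User-Agent.
sparql = SPARQLWrapper("https://query.wikidata.org/sparql",
                       agent="example-sketch/0.1")
sparql.setQuery("""
    SELECT ?cityLabel ?population WHERE {
      ?city wdt:P31 wd:Q515 ;          # instance of: city
            wdt:P1082 ?population .    # population
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    ORDER BY DESC(?population)
    LIMIT 5
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["cityLabel"]["value"], row["population"]["value"])
```
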
  • Funder: UK Research and Innovation Project Code: EP/J012564/1
    Funder Contribution: 636,717 GBP

    In March 2011, Japan suffered its largest recorded earthquake and a devastating tsunami. Severe damage was inflicted on the Fukushima nuclear plant and more than 100,000 people had to be evacuated after radiation levels became unsafe. Workers were not able to operate on site, preventing them from securing safety at the power plant and averting a major radiation leak. One month after the disaster, in order to assess the severity of the damage to the nuclear plant from above, a small aerial vehicle equipped with cameras was sent to take pictures and videos of the affected areas. The video footage obtained brought the rescue teams valuable information that could not have been acquired otherwise. But the use of aerial vehicles remains limited by the fact that they require a remote operator within transmission range to control them, as well as an operator to control the camera and interpret the data.

    In order to work autonomously, these systems need to be highly intelligent and rational so that they can become reliable: they must have high levels of knowledge to accomplish complex missions in environments about which their information is incomplete. This implies that they should adapt to unexpected situations, such as recent changes not reflected in prior information on the environment and possible loss of GPS due to obstructing buildings or indoor exploration; reliable operation under such conditions would, for instance, enable them to return safely to their base station. In a multi-UAV setting, they should additionally be able to communicate with each other to simplify their goals, to learn from each other's information, and to update and share their knowledge. Given that any mission is unique in terms of deployment areas, tasks and goals to be achieved, and can be critical in the sense that human lives may be involved, the implementation must be verified to be correct with respect to a formal specification. A famous example of an implementation error and a failure to comply with the specification is the self-destruction of Ariane 5 in 1996, immediately after take-off, caused by a numeric overflow due to an implementation that was not suitable for all possible situations. In 1996, the Lockheed Martin/Boeing DarkStar long-endurance UAV crashed following what the Pentagon called a "mishap [..] directly traceable to deficiencies in the modelling and simulation of the flight vehicle".

    To achieve the required reliability, we will need to develop a formalism that represents the sets of actions each Unmanned Aerial Vehicle (UAV) can perform while capturing the kinetic constraints of the UAVs. We will then verify that the behaviours of each UAV, modelled using this formalism, lead to the individual or overall goal of the mission they are to achieve. These results need to be extended from individual behaviours to a cooperative level amongst the multiple UAVs. Next, we plan to link the low-level code to the high-level abstraction and verify it via advanced model-checking techniques. Finally, logical tools will be used to reason exhaustively about learning as a result of information flow among UAVs and their environment.

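The abstract's aim of verifying UAV behaviour against a formal specification can be pictured with a tiny explicit-state check. The state machine, its labels and the property below are invented for illustration and are not the project's formalism; the sketch simply enumerates reachable states and confirms that from every state where GPS is lost, a path back to the base station still exists.

```python
# Toy explicit-state check over an invented UAV state machine: verify that
# from every reachable "gps_lost" state the UAV can still reach "at_base".
from collections import deque

# States and transitions are illustrative assumptions, not the project's model.
TRANSITIONS = {
    ("at_base", "gps_ok"):    [("airborne", "gps_ok"), ("at_base", "gps_ok")],
    ("airborne", "gps_ok"):   [("airborne", "gps_ok"), ("airborne", "gps_lost"),
                               ("at_base", "gps_ok")],
    ("airborne", "gps_lost"): [("airborne", "gps_lost"), ("airborne", "gps_ok"),
                               ("at_base", "gps_lost")],  # dead-reckoning return
    ("at_base", "gps_lost"):  [("at_base", "gps_lost")],
}
INITIAL = ("at_base", "gps_ok")

def reachable(start):
    """Breadth-first exploration of the state space from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for nxt in TRANSITIONS[state]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Property: whenever GPS is lost, some path still leads back to the base.
violations = [
    s for s in reachable(INITIAL)
    if s[1] == "gps_lost" and not any(t[0] == "at_base" for t in reachable(s))
]
print("property holds" if not violations else f"counterexamples: {violations}")
```

Production model checkers (e.g. symbolic or bounded tools) handle vastly larger state spaces and richer temporal properties; the exhaustive search shown here is only the underlying idea.
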
  • Funder: UK Research and Innovation Project Code: EP/E002013/1
    Funder Contribution: 382,173 GBP

    The impact of road traffic on local air quality is a major public policy concern and has stimulated a substantial body of research aimed at improving underlying vehicle and traffic management technologies and informing public policy action. Recent work has begun to exploit the capability of a variety of vehicle-based, person-based and infrastructure-based sensor systems to collect real-time data on important aspects of driver and traffic behaviour, vehicle emissions, pollutant dispersion, concentration and human exposure.

    The variety, pervasiveness and scale of these sensor data will increase significantly in the future as a result of technological developments that will make sensors cheaper, smaller and lower in power consumption. This will open up enormous opportunities to improve our understanding of urban air pollution and hence improve urban air quality. However, handling the vast quantities of real-time data that these sensors will generate will be a formidable task, requiring the application of advanced forms of computing, communication and positioning technologies and the development of ways of combining and interpreting many different forms of data. Technologies developed in EPSRC's e-Science research programme offer many of the tools necessary to meet these challenges.

    The aim of the PMESG project is to take these tools and, by extending them where necessary in appropriate ways, develop and demonstrate practical applications of e-Science technologies that enable researchers and practitioners to coherently combine data from disparate environmental sensors and to develop models that could lead to improved urban air quality. The PMESG project is led by Imperial College London and comprises a consortium of partners drawn from the Universities of Cambridge, Southampton, Newcastle and Leeds, who will work closely with one another and with a number of major industrial partners and local authorities.

    Real applications will be carried out in London, Cambridge, Gateshead and Leicester, building on the Universities' existing collaborative arrangements with the relevant local authorities at each site and drawing on substantial existing data resources, sensor networks and ongoing EPSRC- and industrially-funded research activities. These applications will address important problems that have to date been difficult or impossible for scientists and engineers working in this area to approach, owing to a lack of relevant data. These problems are of three main types: (i) measuring human exposure to pollutants, (ii) validating detailed models of traffic behaviour and pollutant emission and dispersion, and (iii) developing transport network management and control strategies that take account not just of traffic but also of air quality impacts. The various case studies will look at different aspects of these questions and use a variety of different types of sensor system to do so. In particular, the existing sensor networks in each city will be enhanced by the selective deployment of a number of new sensor types (both roadside and on-vehicle/person) to increase the diversity of sensor inputs.

    The e-Science technologies will be highly general in nature, meaning that they will have applications not only in transport and air quality management but also in many other fields that generate large volumes of real-time, location-specific sensor data.

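A core challenge in the abstract is coherently combining real-time data from disparate sensors. The snippet below is a minimal illustration of one such step: aligning roadside NO2 readings with nearby traffic counts by timestamp. The sensor names, columns, data values and the five-minute tolerance are all invented for the example.

```python
# Minimal sketch: time-aligning two disparate sensor streams (invented data)
# so pollutant readings can be analysed against traffic counts.
import pandas as pd

no2 = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-05-01 08:00", "2024-05-01 08:15",
                                 "2024-05-01 08:30"]),
    "no2_ugm3": [41.0, 55.5, 48.2],      # roadside NO2, micrograms per m^3
})
traffic = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-05-01 07:58", "2024-05-01 08:14",
                                 "2024-05-01 08:33"]),
    "vehicles_per_min": [32, 51, 44],    # nearby inductive-loop counts
})

# Both frames must be sorted on the key before an as-of merge.
merged = pd.merge_asof(
    no2.sort_values("timestamp"),
    traffic.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("5min"),
)
print(merged)
```
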