Citrix (United Kingdom)
4 Projects
Project, 2014 - 2017
Partners: Citrix Systems, Imperial College London, Citrix (United Kingdom), NetApp (United States), NetApp
Funder: UK Research and Innovation
Project Code: EP/L00738X/1
Funder Contribution: 368,053 GBP

The continuing revolutionary growth of data volumes and the increasing diversity of data-intensive applications demand an urgent investigation of effective means for efficient storage management. In the summer of 2012, the volume of data in the world was around 10^21 bytes, about 1.1 TB per internet user, and this volume continues to increase at about 50% compound annual growth rate. It has been said that "By 2013, storage systems will no longer be manually tunable for performance or manual data placement. Similar to virtual memory management, the storage array's algorithms will determine data placement" (The Future of Storage Management, Gartner 2010).

Meeting service-level objective/agreement (SLO/SLA) requirements for data-intensive applications is not straightforward and will become increasingly challenging. In particular, there is an increasing need for intelligent mechanisms to manage the underlying storage infrastructure, taking into account the advent of new device technologies. To cope with this challenge, we propose a research programme in the mainstream of EPSRC's theme "Towards an intelligent information infrastructure (TI3)", specifically with reference to the "deluge of data" and the exploration of "emerging technologies for low power, high speed, high density, low cost memory and storage solutions".

Today, with the widespread distribution of storage, for example in cloud storage solutions, it is difficult for an infrastructure provider to decide where data resides, on what type of device, co-located with what other data owned by which other (maybe competing) user, and even in what country. The need to meet energy-consumption targets compounds this problem. These decision problems motivate the present research proposal, which aims to develop new model-based techniques and algorithms to facilitate the effective administration of data-intensive applications and their underlying storage device infrastructure.

We propose to develop techniques and tools for the quantitative analysis and optimisation of multi-tiered data storage systems. The primary objective is to develop novel modelling approaches to define and facilitate the most appropriate data placement and data migration strategies. These strategies share the common aim of placing data on the most effective target device in a tiered storage architecture. In the proposed research, the allocation algorithm will decide the placement strategy and trigger data migrations to optimise an appropriate utility function. Our research will also take into account the likely quantitative impact of evolving storage and energy-efficiency technologies, by developing suitable models of these and integrating them into our tier-allocation methodologies. In essence, our models will be specialised for different storage and power technologies (e.g. fossil fuel, solar, wind).

The models, optimisers and methodologies that we produce will be tested in pilot implementations on our in-house cloud (already purchased); on Amazon EC2 resources; and finally in an industrial, controlled production environment as part of our collaboration with NetApp. This will provide feedback to enable us to refine, enhance and extend our techniques, and hence further improve their utility for the largest storage systems.
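The abstract names the core mechanism, an allocation algorithm that places data and triggers migrations to optimise a utility function, without specifying either. Below is a minimal Python sketch of that general idea under stated assumptions: a greedy allocator scores each item/tier pairing with a hypothetical utility weighing access performance against capacity cost and energy. All class names, weights and numbers are illustrative, not the project's models.

```python
# A minimal sketch of utility-driven tier allocation. Everything here
# (tiers, weights, the utility form) is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_ms: float       # average access latency for this device class
    cost_per_gb: float      # monetary cost of capacity
    power_w_per_gb: float   # energy draw of capacity
    free_gb: float          # remaining capacity

@dataclass
class DataItem:
    name: str
    size_gb: float
    accesses_per_hr: float  # observed "heat" of the data

def utility(item: DataItem, tier: Tier,
            w_perf: float = 1.0, w_cost: float = 0.2,
            w_energy: float = 0.1) -> float:
    """Higher is better: reward fast access for hot data, penalise the
    cost and energy of the capacity the item would consume."""
    perf = item.accesses_per_hr / tier.latency_ms
    penalty = item.size_gb * (w_cost * tier.cost_per_gb
                              + w_energy * tier.power_w_per_gb)
    return w_perf * perf - penalty

def place(items, tiers):
    """Greedy placement: hottest items first, each on the feasible tier
    with the highest utility."""
    plan = {}
    for item in sorted(items, key=lambda i: -i.accesses_per_hr):
        feasible = [t for t in tiers if t.free_gb >= item.size_gb]
        best = max(feasible, key=lambda t: utility(item, t))
        best.free_gb -= item.size_gb
        plan[item.name] = best.name
    return plan

tiers = [Tier("ssd", 0.1, 0.25, 0.010, 200.0),
         Tier("hdd", 8.0, 0.03, 0.005, 2000.0)]
items = [DataItem("hot_db", 50.0, 5000.0), DataItem("cold_logs", 500.0, 2.0)]
print(place(items, tiers))  # {'hot_db': 'ssd', 'cold_logs': 'hdd'}
```

Re-running such a planner as access rates evolve yields a new plan; any item whose assigned tier differs from its current one becomes a migration candidate, which is the sense in which placement decisions "trigger" migrations.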
Project, 2013 - 2017
Partners: Imperial College London, Citrix Systems, Xilinx Corp, Netronome, NetApp, Xilinx (United States), Citrix (United Kingdom), NetApp (United States)
Funder: UK Research and Innovation
Project Code: EP/K032968/1
Funder Contribution: 666,148 GBP

Cloud computing has significantly changed the IT landscape. Today it is possible for small companies or even single individuals to access virtually unlimited resources in large data centres (DCs) for running computationally demanding tasks. This has triggered the rise of "big data" applications, which operate on large amounts of data. These include traditional batch-oriented applications, such as data mining, data indexing, log collection and analysis, and scientific applications, as well as real-time stream processing, web search and advertising.

To support big data applications, parallel processing systems, such as MapReduce, adopt a partition/aggregate model: a large input data set is distributed over many servers, and each server processes a share of the data. Locally generated intermediate results must then be aggregated to obtain the final result. An open challenge of the partition/aggregate model is that it results in high contention for network resources in DCs when a large amount of data traffic is exchanged between servers. Facebook reports that, for 26% of processing tasks, network transfers are responsible for more than 50% of the execution time. This is consistent with other studies, showing that the network is often the bottleneck in big data applications.

Improving the performance of such network-bound applications in DCs has attracted much interest from the research community. A class of solutions focuses on reducing bandwidth usage by employing overlay networks to distribute data and to perform partial aggregation. However, this requires applications to reverse-engineer the physical network topology to optimise the layout of overlay networks. Even with perfect knowledge of the physical topology, there are still fundamental inefficiencies: e.g. any logical topology with a server fan-out higher than one cannot be mapped optimally to the physical network if servers have only a single network interface. Other proposals increase network bandwidth through more complex topologies or higher-capacity networks. New topologies and network over-provisioning, however, increase the DC operational and capital expenditures (up to 5 times according to some estimates), which directly impacts tenant costs. For example, Amazon AWS recently introduced Cluster Compute instances with full-bisection 10 Gbps bandwidth, at an hourly cost of 16 times the default.

In contrast, we argue that the problem can be solved more effectively by providing DC tenants with efficient, easy and safe control of network operations. Instead of over-provisioning, we focus on optimising network traffic by exploiting application-specific knowledge. We term this approach "network-as-a-service" (NaaS) because it allows tenants to customise the service that they receive from the network. NaaS-enabled tenants can deploy custom routing protocols, including multicast services or anycast/incast protocols, as well as more sophisticated mechanisms, such as content-based routing and content-centric networking. By modifying the content of packets on-path, they can efficiently implement advanced, application-specific network services, such as in-network data aggregation and smart caching. Parallel processing systems such as MapReduce would greatly benefit because data can be aggregated on-path, thus reducing execution times. Key-value stores (e.g. memcached) can improve their performance by caching popular keys within the network, which decreases latency and bandwidth usage compared to end-host-only deployments.

The NaaS model has the potential to revolutionise current cloud computing offerings by increasing the performance of tenants' applications, through efficient in-network processing, while reducing development complexity. It aims to combine distributed computation and network communication in a single, coherent abstraction, providing a significant step towards the vision of "the DC is the computer".
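To make the on-path aggregation argument concrete, here is a minimal Python sketch of partial aggregation in a partition/aggregate job, using word counting as a stand-in workload. The two-level tree and all names are illustrative assumptions; in an actual NaaS deployment the merge step would run inside network elements on the path, not as ordinary host functions.

```python
# A minimal sketch of on-path (partial) aggregation: each server first
# combines its own share, and each in-network hop merges its children's
# partial results, so each upstream link carries one message, not N.
from collections import Counter

def local_partial(records):
    """Per-server combiner: aggregate the local share before sending."""
    return Counter(records)

def on_path_merge(partials):
    """In-network merge of partial results (illustrative stand-in for
    aggregation performed inside a network element)."""
    merged = Counter()
    for p in partials:
        merged.update(p)
    return merged

# Two racks of two servers; each rack switch merges before the root.
servers = [["a", "b", "a"], ["b", "b"], ["a", "c"], ["c", "c", "b"]]
rack1 = on_path_merge([local_partial(s) for s in servers[:2]])
rack2 = on_path_merge([local_partial(s) for s in servers[2:]])
result = on_path_merge([rack1, rack2])
print(result)  # Counter({'b': 4, 'a': 3, 'c': 3})
```

The traffic saving is the point: without on-path merging, the root would receive four messages whose total size grows with the number of servers; with it, each link carries a single combined partial result.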
Project, 2013 - 2017
Partners: Eastern Cancer Reg and Info Centre, Citrix Systems, Imperial College London, Cambridgeshire County Council, New Zealand eScience Infrastructure, Nexor Ltd, The Cabinet Office, Morgan Stanley (United States), Nexor (United Kingdom), BAE Systems (UK), MS, Citrix (United Kingdom), BAE Systems (Sweden), BAE Systems (United Kingdom), Government of the United Kingdom
Funder: UK Research and Innovation
Project Code: EP/K008129/1
Funder Contribution: 524,117 GBP

Cloud computing promises to revolutionise how companies, research institutions and government organisations, including the National Health Service (NHS), offer applications and services to users in the digital economy. By consolidating many services as part of a shared ICT infrastructure operated by cloud providers, cloud computing can reduce management costs, shorten the deployment cycle of new services and improve energy efficiency. For example, the UK government's G-Cloud initiative aims to create a cloud ecosystem that will enable government organisations to deploy new applications rapidly, and to share and reuse existing services. Citizens will benefit from increased access to services, while public-sector ICT costs will be reduced.

Security considerations, however, are a major issue holding back the widespread adoption of cloud computing: many organisations are concerned about the confidentiality and integrity of their users' data when hosted in third-party public clouds. Today's cloud providers struggle to give strong security guarantees that user data belonging to cloud tenants will be protected "end-to-end", i.e. across the entire workflow of a complex cloud-hosted distributed application. This is a challenging problem because data protection policies associated with applications usually require the strict isolation of certain data while permitting the sharing of other data. As an example, consider a local council with two applications on the G-Cloud: one for calculating unemployment benefits and one for receiving parking ticket fines, with both applications relying on a shared electoral roll database. How can the local council guarantee that data related to unemployment benefits will never be exposed to the parking fine application, even though both applications share a database and the cloud platform?

The focus of the CloudSafetyNet project is to rethink fundamentally how platform-as-a-service (PaaS) clouds should handle the security requirements of applications. The overall goal is to provide the CloudSafetyNet middleware, a novel PaaS platform that acts as a "safety net", protecting against security violations caused by implementation flaws in applications ("intra-tenant security") or vulnerabilities in the cloud platform itself ("inter-tenant security"). CloudSafetyNet follows a "data-centric" security model: the integrity and confidentiality of application data is protected according to data flow policies, i.e. agreements between cloud tenants and the provider specifying the permitted and prohibited exchanges of data between application components.

It will enforce data flow policies through multiple levels of security mechanisms following a "defence-in-depth" strategy: based on policies, it creates "data compartments" that contain one or more components and isolate user data. A small privileged kernel, which is part of the middleware and constitutes a trusted computing base (TCB), tracks the flow of data between compartments and prevents flows that would violate policies. Previously, such information flow control (IFC) models have been used successfully to enhance programming language, operating system and web application security.

To make such a secure PaaS platform a reality, we plan to overcome a set of research challenges. We will explore how cloud application developers can express data-centric security policies that can be translated automatically into a set of data flow constraints in a distributed system. An open problem is how these constraints can be tied in with trusted enforcement mechanisms that exist in today's PaaS clouds. Addressing this will involve research into new lightweight isolation and sandboxing techniques that allow the controlled execution of software components. In addition, we will advance software engineering methodology for secure cloud applications by developing new software architectures and design patterns that are compatible with compartmentalised data flow enforcement.
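A minimal sketch of the data-compartment idea, reusing the abstract's local-council example: a small reference monitor (standing in for the privileged kernel) permits an inter-compartment flow only when the policy explicitly lists it. The policy format and all names are assumptions for illustration, not CloudSafetyNet's actual interfaces.

```python
# A minimal sketch of policy-checked flows between data compartments.
# The policy is an allow-list of (source, destination) compartment pairs.
ALLOWED_FLOWS = {
    ("electoral_roll", "benefits_app"),
    ("electoral_roll", "parking_app"),
    # no rule from "benefits_app" to "parking_app": that flow is denied
}

class FlowViolation(Exception):
    pass

def deliver(dst: str, payload: bytes) -> None:
    print(f"delivered {len(payload)} bytes to {dst}")

def send(src: str, dst: str, payload: bytes) -> None:
    """Trusted kernel path: every inter-compartment message passes here,
    and is handed off only after the policy check succeeds."""
    if (src, dst) not in ALLOWED_FLOWS:
        raise FlowViolation(f"policy forbids {src} -> {dst}")
    deliver(dst, payload)

send("electoral_roll", "benefits_app", b"record")  # permitted
# send("benefits_app", "parking_app", b"leak")     # raises FlowViolation
```

The defence-in-depth point is that both applications may share a database and a platform, yet benefits data can never reach the parking application, because the only path between compartments runs through the checked send().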
Project, 2013 - 2021
Partners: HPLB, Thales (United Kingdom), Lockheed Martin (United Kingdom), TRTUK, Nokia Corporation, Malvern Cyber Security Cluster, Barclays (United Kingdom), Microsoft Ukp Ltd, IBM (United Kingdom), University of Oxford, Thales Research and Technology UK Ltd, Sophos plc, Hewlett-Packard (United Kingdom), BARCLAYS BANK PLC, Intel (United States), Sophos Group (United Kingdom), Nokia (Finland), Citrix (United Kingdom), Lockheed Martin UK, Citrix Systems, IBM UNITED KINGDOM LIMITED
Funder: UK Research and Innovation
Project Code: EP/K035606/1
Funder Contribution: 3,675,520 GBP

The great majority of the CDT's research will fit into the four themes listed below, whether focussed upon application domains or on underpinning research challenges. These represent both notable application areas and emerging cyber security goals, and taken together cover some of the most pressing cyber security challenges our society faces today.

1. Security of 'Big Data' covers the acquisition, management, and exploitation of data in a wide variety of contexts. Security and privacy concerns often arise here, and may conflict with each other, together with issues for public policy and economic concerns. Not only must emerging security challenges be addressed; new potential attack vectors arising from the volume and form of the data, such as enhanced risks of de-anonymisation, must be anticipated, having regard to major technical and design challenges. A major application area for this research is medical research, as the formerly expected boundaries between public data, research, and clinical contexts crumble: in the handling of genomic data, autonomous data collection, and the co-management of personal health data.

2. Cyber-Physical Security considers the integration and interaction of digital and physical environments, and their emergent security properties, particularly relating to sensors, mobile devices, the internet of things, and smart power grids. In this way, we augment conventional security with physical information such as location and time, enabling novel security models. Applications arise in critical infrastructure monitoring, transportation, and assisted living.

3. Effective Systems Verification and Assurance draws at its heart on Oxford's longstanding strength in formal methods for modelling and abstraction applied to hardware and software verification, proof of security, and protocol verification. It must also address issues in procurement and supply chain management, as well as criminology and malware analysis, high-assurance systems, and systems architectures.

4. Real-Time Security arises in both user-facing and network-facing tools. Continuous authentication, based on user behaviour, can be less intrusive and more effective than commonplace one-time authentication methods (a minimal sketch of this idea follows the list). Evolving access control allows decisions to be made based on past behaviour instead of a static policy. Effective use of visual analytics and machine learning can enhance these approaches, and apply to network security management, anomaly detection, and dynamic reconfiguration. These pieces contribute in various ways to an integrated goal of situational awareness.
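As an illustration of the continuous-authentication idea in theme 4, here is a minimal Python sketch: a running, exponentially weighted baseline of one behavioural feature (inter-keystroke timing), with a z-score test flagging out-of-character sessions. The feature, thresholds, and class names are illustrative assumptions, not the CDT's method.

```python
# A minimal sketch of behaviour-based continuous authentication.
import math

class ContinuousAuthenticator:
    """Keeps an exponentially weighted mean/variance of one behavioural
    feature and flags observations that depart from the baseline."""
    def __init__(self, alpha: float = 0.1, z_limit: float = 3.0):
        self.alpha, self.z_limit = alpha, z_limit
        self.mean = None
        self.var = 100.0          # assumed prior variance, not tuned

    def observe(self, inter_key_ms: float) -> bool:
        """Return True while the observation matches the baseline."""
        if self.mean is None:     # first sample seeds the baseline
            self.mean = inter_key_ms
            return True
        z = abs(inter_key_ms - self.mean) / math.sqrt(self.var)
        # fold the observation into the baseline, whatever the verdict
        self.mean += self.alpha * (inter_key_ms - self.mean)
        self.var += self.alpha * ((inter_key_ms - self.mean) ** 2 - self.var)
        return z <= self.z_limit

auth = ContinuousAuthenticator()
for t in [120, 115, 130, 118, 640]:   # 640 ms is out of character
    print(t, auth.observe(t))         # only the last check fails
```

Unlike one-time authentication, every interaction refreshes the verdict, so a session hijacked mid-use is flagged as soon as behaviour diverges from the learned baseline.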
These themes link to many existing research strengths of the University, and extend their horizon into areas where technology is rapidly emerging and raising pressing cyber security concerns. The proposal has strong support from a broad sweep of relevant industry sectors, evidenced by letters of support attached from HP Labs, Sophos, Nokia, Barclays, Citrix, Intel, IBM, Microsoft UK, Lockheed Martin, Thales, and the Malvern Cyber Security Cluster of SMEs.