3 Projects
  • Funder: UK Research and Innovation Project Code: MR/Z00036X/1
    Funder Contribution: 594,294 GBP

    Terrorist risks and violent extremist threats are increasingly identified and governed via new forms of data analytics and automated decision-making (ADM) made possible by advances in machine learning (ML) and artificial intelligence (AI). Private actors, including social media platforms, airlines and financial institutions, are collaborating with states, global bodies and international organisations (IOs) to implement ambitious data-led global counterterrorism projects. The UN Security Council, for example, has called on all states to intensify the exchange of information about suspected terrorists by building and sharing watchlists and analysing biometric data, bolstering the use of ML to identify 'future terrorists' in advance. Social media platforms are using AI to detect extremist content online and regulate global data flows on an unprecedented scale. Passenger data from the aviation industry is analysed to identify suspicious 'patterns of behaviour' and control the movements of risky travellers. Financial data is mined by banks to spot suspicious transactions and terrorist 'associations'. These changes are putting new and far-reaching global security infrastructure projects into motion. Yet the implications of these shifts for how international law is practised, global security threats known and powerful actors held accountable remain uncertain. The data infrastructures underlying global governance have been largely neglected in legal scholarship. And whilst potential problems that AI poses (discrimination and privacy violations) are becoming clearer, solutions remain elusive - especially in the security domain, where secrecy is key and the inner workings of algorithms are 'black-boxed' even more than usual. Regulatory theorists argue that we urgently need to 'expand our frame of rights discourse to encompass our socio-technical architecture' to respond to the accountability challenges of AI (Yeung 2019).
Studying global security infrastructures in action might help us in reimagining how data, security and rights could be reconnected in our digital present. This project rethinks global security law and governance from the 'infrastructure space' it is creating. It focuses on three areas: (i) digital bordering infrastructures for controlling the cross-border movements of 'risky' people (Work Package 1); (ii) platform infrastructures for moderating terrorist and violent extremist content online (Work Package 2); and (iii) counterterrorism watchlisting infrastructures (Work Package 3). The project contends that the most far-reaching changes to global security governance are not being written in the language of international law or created through the formal powers of states and IOs but built through new socio-technical infrastructures and the data-driven security expertise they are enabling. I use the concept of 'infra-legalities' (or, the co-productive effects of data infrastructure, law and regulation) to analyse these shifts and develop a novel approach for studying international law and regulation in the age of algorithmic global governance. Infrastructure is often disregarded as an invisible substrate on which powerful actors act, but it helps create and shape power, knowledge and governance. Drawing from Science and Technology Studies, computer science, critical data studies and critical security studies, this project performs what Bowker and Star (1999) call an 'infrastructural inversion' by mapping the seemingly mundane governance work of AI-driven global security infrastructures. By 'following the data' - and tracing the socio-technical relations, norms, knowledge practices and power asymmetries that security infrastructures are enacting - a different method of studying global governance can emerge. 
Studying the infra-legalities of global security opens space for addressing key challenges and shaping policy debates on security, responsibility and accountability in the age of AI and automation.

  • Funder: UK Research and Innovation Project Code: EP/Y028732/1
    Funder Contribution: 7,691,560 GBP

    Artificial intelligence (AI) is on the verge of widespread deployment in ways that will impact our everyday lives. It might do so in the form of self-driving cars or of navigation systems optimising routes on the basis of real-time traffic information. It might do so through smart homes, in which usage of high-power devices is timed intelligently based on real-time forecasts of renewable generation. It might do so by automatically coordinating emergency vehicles in the event of a major incident, natural or man-made, or by coordinating swarms of small robots collectively engaged in some task, such as search-and-rescue. Much of the research on AI to date has focused on optimising the performance of a single agent carrying out a single well-specified task. There has been little work so far on emergent properties of systems in which large numbers of such agents are deployed, and the resulting interactions. Such interactions could end up disturbing the environments for which the agents have been optimised. For instance, if a large number of self-driving cars simultaneously choose the same route based on real-time information, it could overload roads on that route. If a large number of smart homes simultaneously switch devices on in response to an increase in wind energy generation, it could destabilise the power grid. If a large number of stock-trading algorithmic agents respond similarly to new information, it could destabilise financial markets. Thus, the emergent effects of interactions between autonomous agents inevitably modify their operating environment, raising significant concerns about the predictability and robustness of critical infrastructure networks. At the same time, they offer the prospect of optimising distributed AI systems to take advantage of cooperation, information sharing, and collective learning.
The key future challenge is therefore to design distributed systems of interacting AIs that can exploit synergies in collective behaviour, while being resilient to unwanted emergent effects. Biological evolution has addressed many such challenges, with social insects such as ants and bees being an example of highly complex and well-adapted responses emerging at the colony level from the actions of very simple individual agents. The goal of this project is to develop the mathematical foundations for understanding and exploiting the emergent features of complex systems composed of relatively simple agents. While there has already been considerable research on such problems, the novelty of this project is in the use of information theory to study fundamental mathematical limits on learning and optimisation in such systems. Information theory is a branch of mathematics that is ideally suited to address such questions. Insights from this study will be used to inform the development of new algorithms for artificial agents operating in environments composed of large numbers of interacting agents. The project will bring together mathematicians working in information theory, network science and complex systems with engineers and computer scientists working on machine learning, AI and robotics. The aim is to translate theoretical insights into algorithms that are deployed in real systems; lessons learned from deploying and testing the algorithms in interacting systems will be used to refine models and algorithms in a virtuous circle.
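The congestion examples above can be sketched as a toy simulation. This is an illustrative sketch, not part of the project: all names and parameters here are hypothetical. When every agent reacts identically to the same shared signal (the previously less-loaded of two routes), the entire load flips onto one route each step; independent random choices keep the loads roughly balanced.

```python
# Toy illustration (hypothetical, not from the project): emergent
# congestion when many agents act on the same shared signal.
import random

random.seed(0)

def simulate(n_agents=1000, steps=20, follow_signal=True):
    """Each step, agents choose route A or B. With follow_signal=True,
    every agent picks the route that was less loaded last step; otherwise
    each agent chooses independently at random. Returns the largest
    load imbalance |A - B| observed over the run."""
    load = {"A": n_agents // 2, "B": n_agents - n_agents // 2}
    max_imbalance = 0
    for _ in range(steps):
        if follow_signal:
            best = "A" if load["A"] < load["B"] else "B"
            choices = [best] * n_agents  # everyone reacts identically
        else:
            choices = [random.choice("AB") for _ in range(n_agents)]
        load = {"A": choices.count("A"), "B": choices.count("B")}
        max_imbalance = max(max_imbalance, abs(load["A"] - load["B"]))
    return max_imbalance

herd = simulate(follow_signal=True)    # imbalance == n_agents: total overload
noisy = simulate(follow_signal=False)  # imbalance stays comparatively small
```

The lockstep case is the degenerate extreme of the feedback the abstract describes: the agents' collective response to the signal invalidates the signal itself, which is why designing for resilience to such emergent effects is non-trivial.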

  • Funder: UK Research and Innovation Project Code: EP/Y034813/1
    Funder Contribution: 7,873,680 GBP

    The EPSRC Centre for Doctoral Training in Statistics and Machine Learning (StatML) will address the EPSRC research priority of the 'physical and mathematical sciences powerhouse' through an innovative cohort-based training program. StatML harnesses the combined strengths of Imperial and Oxford, two world-leading institutions in statistics and machine learning, in collaboration with a broad spectrum of industry partners, to nurture the next generation of leaders in this field. Our students will be at the forefront of advancing the core methodologies of data science and AI, crucial for unlocking the value inherent in data to benefit industry and society. They will be equipped with advanced research, technical, and practical skills, enabling them to make tangible real-world impacts. Our students will be ethical and responsible innovators, championing reproducible research and open science. Collaborating with students, charities and equality experts, StatML will also pioneer a comprehensive strategy to promote inclusivity, attract individuals from diverse backgrounds and eliminate biases. This will help diversify the UK's future statistics and machine learning workforce, essential for ensuring data science is used for public good. Data science and AI are now part of our everyday lives, transforming all sectors of the economy. To future-proof the UK's prosperity and security, it is essential to develop new methodology, specifically tailored to meet the big societal challenges of the future. The techniques underpinning such methods are founded in statistics and machine learning. Through close collaboration with a broad range of industry partners, our cohort-based training will support the UK in producing a critical mass of world-leading researchers with expertise in developing cutting-edge, impactful statistical and machine learning methodology and theory. 
It is well documented in government and learned society reports that the UK economy has an urgent need for these people. The significant level of industry support for our proposal also highlights the necessity of filling this gap in the UK data science ecosystem. StatML will learn from and build upon our previous successful experiences in cohort training of doctoral students (our existing StatML CDT funded in 2018, as well as other CDTs at Imperial and Oxford). Our students will continue to produce impactful, internationally leading research in statistics and machine learning (as evidenced by our students' impressive publication record and our world-leading research environment, as rated by the REF 2021 evaluation), while complementing this with a bespoke cohort-based Advanced Training program in Statistics and Machine Learning (StatML-AT). StatML-AT has been developed from our experience and in partnership with industry. It will be responsive to emerging technologies and equip our students with the practical skills required to transform how data is used. It will be delivered by our outstanding academics from both institutions alongside industry leaders to ensure that students receive training in cutting-edge technologies, along with the latest ideas in ethics, responsible innovation, sustainability and entrepreneurship. This will be complemented by industrial and academic placements to allow the students to develop their own international network and produce high-impact research. Together, StatML and its partners will train 90+ students over 5 cohorts. More than half of these will be funded from external sources, including 25+ by industry, representing excellent value for money. Our diverse cohorts will benefit from a unique and responsive training program combining academic excellence, industry engagement, and interdisciplinary culture.
This will make StatML a vibrant research environment inspiring the next methodological advancements to transform the use of data and AI across industry and society.
