Powered by OpenAIRE graph

CGI IT UK Ltd

4 Projects
  • Funder: UK Research and Innovation Project Code: EP/S012079/1
    Funder Contribution: 395,301 GBP

    Female academics, particularly in STEM subjects, score consistently lower than male academics in metrics measuring international [1] and industrial collaborations [2]. These two related assessment criteria are key at all stages of academic careers and particularly important at senior levels for securing the highest-value research grants and promotions. While several barriers to academic career advancement for women have been identified and have led to strategic interventions at national and institutional levels, there remains a lack of data and action specifically targeting networking and collaboration - the focus of this VisNET programme. Our vision is twofold: 1) to identify key barriers to international collaboration for female engineering academics; and 2) to design and demonstrate interventions and new best practices in networking and collaboration to define a new and more effective normal. The emergence and rapid development of technologies that support geographically remote working relationships presents a timely opportunity. Effective use of such tools could help to correct the disadvantages experienced by women in international collaboration. We propose an intervention to determine and remodel the implicit 'rules' of networking and collaboration. This pilot project is aimed at a cohort of female post-doctoral research associates (PDRAs). The transition from post-doc to academic is a key attrition point for women in engineering. Success relies on demonstrating the means to develop academic independence, and possession of a strong network can be crucial. At the same time, this group has relative freedom to trial new approaches to working and represents a critical mass to demonstrate and embed novel methods, including a route to involve more established academics.
    Thus, the interdisciplinary academic and industrial consortium we have brought together will lead the way in developing, integrating and advocating a new approach in which networking and collaboration are conducted predominantly in situ (i.e. from home institutions). We believe that at this critical postdoctoral stage, implementation of strategic networking and collaboration can be career-defining, providing crucial routes to build confidence, establish future academic independence and secure funding success. Furthermore, it has the potential to mitigate the impact of future career breaks and parenthood. By demonstrating that networks can be built without frequent travel, it will also address the perception that an academic career is incompatible with work-life balance or family responsibilities, factors identified by junior researchers when consulted about their choice to leave academia [3]. While we see here an opportunity to have a rapid, tangible impact on the academic careers of a finite group of women, VisNET will also act as an effective route to embed our approaches into the working practices of our universities. Effective in situ networking has the potential to directly tackle negative perceptions of work-life balance in academia, contribute to the promotion of flexible working patterns and advance inclusivity for other minority academic communities, such as academics with disabilities or those remotely located. The coordinated outcome of this programme fits directly into EPSRC's and our universities' strategic plans to build leadership, accelerate impact and balance capabilities, ensuring the continued progression of UK emerging research leaders by enhancing their experiences and embedding career robustness.
    [1] Larivière et al., "Bibliometrics: Global gender disparities in science," Nat. News, vol. 504, no. 7479, p. 211, 2013.
    [2] Tartari & Salter, "The engagement gap: Exploring gender differences in university-industry collaboration activities," Res. Policy, vol. 44, no. 6, pp. 1176-1191, 2015.
    [3] Shaw & Stanton, "Leaks in the pipeline: separating demographic inertia from ongoing gender differences in academia," Proc. R. Soc. B Biol. Sci., vol. 279, no. 1743, p. 3736, 2012.

  • Funder: UK Research and Innovation Project Code: EP/V025295/2
    Funder Contribution: 1,301,720 GBP

    The Office for Artificial Intelligence (AI) estimates that AI could add £232 billion to the UK economy by 2030, increasing productivity in some industries by 30%. However, to be truly transformational, the integration of AI throughout the global economy requires understanding of, and trust in, the AI systems deployed. The super-human decision-making ability of new AI systems depends on huge volumes of data with thousands of variables, dependencies and uncertainties. Unregulated application of uncertified data-driven AI, limited by data bias and a lack of transparency, brings huge risks and necessitates a community-wide change. AI systems of the future must also be able to learn on-the-job to avoid becoming the high-interest credit card of huge technical debt. There is thus a timely and unmet need for a new theory and framework enabling the creation and analysis of data-driven AI systems that are adaptive, resilient, robust, explainable and certifiable, with provable and practically relevant performance guarantees. This ambitious fellowship, ARaISE, will deliver a radically new framework for the creation of beneficial data-driven AI systems, advancing far beyond classical theories by including certifiable robustness and learning in the problem setting. These new theories will enable a formal understanding of the fundamental limits of large-scale data-driven AI, independent of the application area and learning algorithms. Through understanding such limitations, AI practitioners will be able to influence policy and prevent incidents before they occur.
    By connecting different and disparate areas of AI and Machine Learning, working with a world-class team of experts, and engaging with stakeholders across strategic UK industries and sectors (Healthcare, Manufacturing, Space and Earth Observation, Smart Materials, and Security), ARaISE will create high-value, trustworthy, transformative and responsible AI, capable of reliably 'learning on-the-job' from humans to guarantee capability and trust. Novel human-centric AI, designed to function for the benefit of society, will complement and connect to existing work in the AI research arena, enabling co-development with project partners and a focus on strategic industry challenges so that real-world relevance is built into the research programme and its outputs, facilitating capacity and capability growth. ARaISE will generate gold-standard tools for tasks that are currently heavily reliant upon human input and will support long-term global transformation. Impact and knowledge exchange activities, embedded throughout this programme of work, will support uptake of the novel AI systems developed and, through leadership and ambassadorial activities, will support a step-change in how AI systems are built and maintained to ensure resilient, robust, adaptive and trustworthy operation. The inclusive research programme has been designed to support the career development of the project team and the wider stakeholder group, maximising the potential for flexible career paths whilst retaining the flexibility to creatively support the team to develop exciting new technology with real-world relevance and guide future AI research. The issues of AI and ethics underpin the programme, with responsible research and innovation embedded throughout its activities.
    Raising public and AI practitioners' awareness, and ultimately influencing policy through active engagement with UK AI ethics expertise and policymakers, will ensure that the outcomes are socially beneficial, ethical, trusted and deployable in real-world situations. Planned engagement with the ATI, CDTs, partners and their networks, together with the development of new partnerships, methodologies and applications, will encourage links between these organisations, build UK expertise, skills and capacity in AI, contribute to realising government investment in UK Societal Challenges and ensure that the UK remains at the forefront of the AI revolution.

  • Funder: UK Research and Innovation Project Code: EP/V025295/1
    Funder Contribution: 1,463,400 GBP

    The Office for Artificial Intelligence (AI) estimates that AI could add £232 billion to the UK economy by 2030, increasing productivity in some industries by 30%. However, to be truly transformational, the integration of AI throughout the global economy requires understanding of, and trust in, the AI systems deployed. The super-human decision-making ability of new AI systems depends on huge volumes of data with thousands of variables, dependencies and uncertainties. Unregulated application of uncertified data-driven AI, limited by data bias and a lack of transparency, brings huge risks and necessitates a community-wide change. AI systems of the future must also be able to learn on-the-job to avoid becoming the high-interest credit card of huge technical debt. There is thus a timely and unmet need for a new theory and framework enabling the creation and analysis of data-driven AI systems that are adaptive, resilient, robust, explainable and certifiable, with provable and practically relevant performance guarantees. This ambitious fellowship, ARaISE, will deliver a radically new framework for the creation of beneficial data-driven AI systems, advancing far beyond classical theories by including certifiable robustness and learning in the problem setting. These new theories will enable a formal understanding of the fundamental limits of large-scale data-driven AI, independent of the application area and learning algorithms. Through understanding such limitations, AI practitioners will be able to influence policy and prevent incidents before they occur.
    By connecting different and disparate areas of AI and Machine Learning, working with a world-class team of experts, and engaging with stakeholders across strategic UK industries and sectors (Healthcare, Manufacturing, Space and Earth Observation, Smart Materials, and Security), ARaISE will create high-value, trustworthy, transformative and responsible AI, capable of reliably 'learning on-the-job' from humans to guarantee capability and trust. Novel human-centric AI, designed to function for the benefit of society, will complement and connect to existing work in the AI research arena, enabling co-development with project partners and a focus on strategic industry challenges so that real-world relevance is built into the research programme and its outputs, facilitating capacity and capability growth. ARaISE will generate gold-standard tools for tasks that are currently heavily reliant upon human input and will support long-term global transformation. Impact and knowledge exchange activities, embedded throughout this programme of work, will support uptake of the novel AI systems developed and, through leadership and ambassadorial activities, will support a step-change in how AI systems are built and maintained to ensure resilient, robust, adaptive and trustworthy operation. The inclusive research programme has been designed to support the career development of the project team and the wider stakeholder group, maximising the potential for flexible career paths whilst retaining the flexibility to creatively support the team to develop exciting new technology with real-world relevance and guide future AI research. The issues of AI and ethics underpin the programme, with responsible research and innovation embedded throughout its activities.
    Raising public and AI practitioners' awareness, and ultimately influencing policy through active engagement with UK AI ethics expertise and policymakers, will ensure that the outcomes are socially beneficial, ethical, trusted and deployable in real-world situations. Planned engagement with the ATI, CDTs, partners and their networks, together with the development of new partnerships, methodologies and applications, will encourage links between these organisations, build UK expertise, skills and capacity in AI, contribute to realising government investment in UK Societal Challenges and ensure that the UK remains at the forefront of the AI revolution.

  • Funder: UK Research and Innovation Project Code: EP/W032473/1
    Funder Contribution: 2,794,280 GBP

    AP4L is a 3-year programme of interdisciplinary research centring on the online privacy & vulnerability challenges that people face when going through major life transitions. Our central goal is to develop privacy-by-design technologies to protect & empower people during these transitions. Our work is driven by a narrative that will be familiar to most people. Life often "just happens", leading people to overlook their core privacy and online safety needs. For instance, somebody undergoing cancer treatment may be less likely to finesse their privacy settings on social media when discussing the topic. Similarly, an individual undergoing gender transition may be unaware of how their past online activities may shape their treatment in the future. This project will build the scientific and theoretical foundations to explore these challenges, as well as design and evaluate three core innovations to address them. AP4L will introduce a step-change, making online safety and privacy as painless and seamless as possible during life transitions. To ensure a breadth of understanding, we will apply these concepts to four very different transitions through a series of carefully designed co-creation activities, devised as part of a stakeholder workshop held in Oct '21. These are: relationship breakdowns; LGBT+ transitions or transitioning gender; entering/leaving employment in the Armed Forces; and developing a serious illness or becoming terminally ill. Such transitions can significantly change privacy considerations in unanticipated or counter-intuitive ways. For example, previously enabled location-sharing with a partner may lead to stalking after a breakup; 'coming out' may need careful management across diverse audiences (e.g., friends, grandparents) on social media.
    We will study these transitions following a creative security approach, bringing together interdisciplinary expertise in Computer Science, Law, Business, Psychology and Criminology. We will systematise this knowledge and develop fundamental models of the nature of transitions and their interplay with online lives. These models will inform the development of a suite of technologies and solutions that will help people navigate significant life transitions through adaptive, personalised, privacy-enhancing interventions that meet the needs of each individual and bolster their resilience, autonomy, competence and connection. The suite will comprise: (1) "Risk Playgrounds", which will build resilience by helping users to explore, in safe ways, potentially risky interactions of life transitions with privacy settings across their digital footprint; (2) "Transition Guardians", which will provide real-time protection for users during life transitions; and (3) "Security Bubbles", which will promote connection by bringing together people who can help each other (or who need to work together) during one person's life transition, whilst providing additional guarantees to safeguard everyone involved. In achieving this vision, and as evidenced by £686K of in-kind contributions, we will work with 26 core partners spanning law enforcement agencies (e.g., Surrey Police), tech companies (e.g., Facebook, IBM), support networks (e.g., LGBT Foundation, Revenge Porn Helpline) and associated organisations (e.g., Ofcom, Mastercard, BBC). Impact will be delivered through various activities, including a specially commissioned BBC series on online life transitions to share knowledge with the public; use of our project outputs by companies & social platforms (e.g., by incorporating them into their products & by designing products that take our findings into consideration); & targeted workshops to enable knowledge exchange with partners & stakeholders.

