Atlas Elektronik UK Ltd

4 Projects
  • Funder: UK Research and Innovation
    Project Code: EP/R002665/1
    Funder Contribution: 420,710 GBP

    In recent years, there has been immense interest in developing underwater acoustic communication (UAC) systems for remote control and telemetry applications in the off-shore oil and gas industry. In practice, the only feasible method of achieving sub-sea communications is by means of acoustic signals. However, due to the limited bandwidth of the UAC channel, past research concentrated on the half-duplex (HD) mode of operation using time-division duplexing (TDD). Recently, full-duplex (FD) transmission has attracted attention in wireless communications due to its potential to nearly double the throughput of single-hop wireless links. However, there is an evident absence of equivalent in-depth research into FD for UAC systems, despite the severe bandwidth limitations of the UAC channel. Hence, we outline three crucial challenges to be addressed in this research project:

    Challenge 1 - Understanding self-interference (SI) in FD UAC systems: FD comes with the promise of theoretically doubling the throughput. In practice, however, the SI induced by the large power difference between the distant and local transmissions causes a loss in signal-to-interference-plus-noise ratio and, in turn, degrades throughput. For acoustic waveforms and UAC modems, little is known about the statistical properties of SI or about the impact of non-ideal/non-linear characteristics of hardware components operating in FD mode. In order to design effective self-interference cancellation (SIC) methods, a comprehensive understanding and accurate models of SI are required.

    Challenge 2 - SIC methods: To fully exploit the potential of FD transmission, effective SIC methods are required, capable of providing cancellation of up to approximately 100 dB. Passive and active SIC methods have been proposed for wireless communications; however, they have not been investigated at all for UAC waveforms, and we believe there is significant potential in their utilisation, as well as in developing new and improved approaches.

    Challenge 3 - Realising the benefits of FD in UAC networks: The enhanced physical-layer capability offered by FD links can only be fully realised if the medium access control (MAC) layer is suitably designed for simultaneous transmission and reception on the same frequency channel. This calls for highly adaptive scheduling based on varying traffic demands, channel conditions and local interference. The long propagation delays demand efficient assignment of capacity using methods adopted from satellite systems, including free and predictive assignment of capacity, and FD-enabled physical-layer network coding.

    To address these challenges, we propose five work packages (WPs) at Newcastle University (NU) and the University of York (UoY), with the aim of designing an FD-enabled UAC system that nearly doubles the throughput of equivalent HD systems under the same power and bandwidth constraints. WP A (NU) will study the effects of SI for UAC waveforms and hardware, and provide analytical models capturing the characteristics of SI. WP B (UoY) will study the performance of joint analogue and digital SIC and beamforming methods to enable FD operation of acoustic modems. WP C (NU) and WP D (UoY) will investigate the design and performance of FD single- and multi-hop relaying methods at the physical layer, together with efficient MAC protocols. WP E (NU) will be used for experimental validation, refinement and integration of the proposed FD system. Experiments will be carried out in the anechoic water tank at NU and in full-scale sea trials conducted in the North Sea, in realistic shallow-water channels, using NU's research vessel. The research in this proposal is potentially transformative and will contribute to the development of the FD-based underwater networking and communication capabilities required by applications such as oil and gas exploration, oceanographic data collection, pollution monitoring, disaster prevention, and security.
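
    To make the digital SIC idea above concrete, here is a minimal, illustrative sketch in Python/NumPy of one standard technique: an LMS adaptive filter that learns the SI channel from the known locally transmitted signal and subtracts its SI estimate from the received signal. This is not the project's method; the signal model, filter length and step size are assumptions for illustration only.

        import numpy as np

        def lms_sic(tx, rx, num_taps=32, mu=0.01):
            """Illustrative digital self-interference canceller (LMS).
            tx: known locally transmitted samples (the SI reference).
            rx: received samples = distant signal + SI + noise.
            Returns the residual after subtracting the adaptive SI estimate."""
            w = np.zeros(num_taps)        # adaptive FIR estimate of the SI channel
            buf = np.zeros(num_taps)      # most recent tx samples
            residual = np.zeros(len(rx))
            for n in range(len(rx)):
                buf = np.roll(buf, 1)
                buf[0] = tx[n]
                si_est = w @ buf          # predicted self-interference
                e = rx[n] - si_est        # residual = distant signal + SI error
                w += mu * e * buf         # LMS weight update
                residual[n] = e
            return residual

        # Hypothetical check: a strong SI channel and a much weaker distant signal.
        rng = np.random.default_rng(0)
        tx = rng.standard_normal(20000)
        si = np.convolve(tx, [0.9, 0.4, -0.2, 0.1])[:len(tx)]   # assumed SI channel
        distant = 0.01 * rng.standard_normal(len(tx))
        res = lms_sic(tx, si + distant)
        sup = 10 * np.log10(np.mean(si**2) / np.mean((res - distant)[10000:]**2))
        print(f"SI suppression after convergence: {sup:.1f} dB")

    In a real acoustic modem the SI path is non-linear and time-varying, which is precisely why the proposal calls for accurate SI models (Challenge 1) before SIC design.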

  • Funder: UK Research and Innovation
    Project Code: EP/R02572X/1
    Funder Contribution: 12,256,900 GBP

    Nuclear facilities require a wide variety of robotics capabilities, engendering a variety of extreme robotics and artificial intelligence (RAI) challenges. NCNR, the National Centre for Nuclear Robotics, brings together a diverse consortium of experts in robotics, AI, sensors, radiation and resilient embedded systems to address these complex problems. In high-gamma environments, human entries are not possible at all. In alpha-contaminated environments, air-fed suited human entries are possible, but they generate significant secondary waste (contaminated suits) and reduce worker capability. We have a duty to eliminate the need for humans to enter such hazardous environments wherever technologically possible. Hence, nuclear robots will typically be remote from human controllers, creating significant opportunities for advanced telepresence. However, limited bandwidth and situational awareness demand increased intelligence and autonomous control capabilities on the robot, especially for performing complex manipulations. Shared control, where both human and AI collaboratively control the robot, will be critical because i) safety-critical environments demand a human in the loop, yet ii) complex remote actions are too difficult for a human to perform reliably and efficiently.

    Before decommissioning can begin, and while it is progressing, characterisation is needed. This can include 3D modelling of scenes, detection and recognition of objects and materials, detection of contaminants, measurement of types and levels of radiation, and other sensing modalities such as thermal imaging. This will necessitate novel sensor design, advanced algorithms for robotic perception, and new kinds of robots to deploy sensors into hard-to-reach locations. To carry out remote interventions, both situational awareness for the remote human operator and guidance of autonomous/semi-autonomous robotic actions will need to be informed by real-time multi-modal vision and sensing, including real-time 3D modelling and semantic understanding of objects and scenes, active vision in dynamic scenes, and vision-guided navigation and manipulation. The nuclear industry is high-consequence, safety-critical and conservative. It is therefore critically important to rigorously evaluate how well human operators can control remote technology to safely and efficiently perform the tasks that industry requires.

    All NCNR research will be driven by a set of industry-defined use-cases (WP1). Each use-case is linked to industry-defined testing environments and acceptance criteria for performance evaluation in WP11. WPs 2-9 deliver a variety of fundamental RAI research, including radiation-resilient hardware, novel design of both robots and radiation sensors, advanced vision and perception algorithms, mobility and navigation, grasping and manipulation, and multi-modal telepresence and shared control. The project is based on modular design principles: WP10 develops standards for modularisation and module interfaces, which will be met by the diverse range of robotics, sensing and AI modules delivered by WPs 2-9. WP10 will then integrate multiple modules onto a set of pre-commercial robot platforms, which will be evaluated against end-user acceptance criteria in WP11. WP12 is devoted to technology transfer, in collaboration with numerous industry partners and the Shield Investment Fund, which specialises in venture-capital investment in RAI technologies, taking novel ideas through to fully fledged commercial deployments. Shield has ring-fenced £10 million of capital to run alongside all NCNR Hub research, to fund spin-out companies and the industrialisation of Hub IP. We have rich international involvement, including NASA's Jet Propulsion Laboratory and the Carnegie Mellon National Robotics Engineering Center as collaborators in the USA, and collaboration with the Japan Atomic Energy Agency to help us carry out test deployments of NCNR robots in the unique Fukushima mock-up testing facilities at the Naraha Remote Technology Development Center.
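
    As a toy illustration of the shared-control concept described above, and emphatically not the Hub's actual controller, one simple scheme blends the operator's velocity command with an autonomy-generated command according to the autonomy's self-assessed confidence. All names and the blending rule are assumptions.

        import numpy as np

        def blend_commands(human_cmd, auto_cmd, confidence):
            """Illustrative shared control: linearly blend the operator's and the
            autonomy's velocity commands. `confidence` in [0, 1] is the autonomy's
            self-assessed reliability for the current step (an assumed input)."""
            alpha = float(np.clip(confidence, 0.0, 1.0))
            return ((1.0 - alpha) * np.asarray(human_cmd, float)
                    + alpha * np.asarray(auto_cmd, float))

        # e.g. the operator steers roughly toward a valve while the autonomy
        # refines the alignment; with confidence 0.7 the autonomy dominates.
        cmd = blend_commands([0.20, 0.00, -0.10], [0.15, 0.05, -0.12], confidence=0.7)
        print(cmd)    # -> [ 0.165  0.035 -0.114]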

  • Funder: UK Research and Innovation
    Project Code: EP/S000631/1
    Funder Contribution: 4,092,210 GBP

    Persistent real-time, multi-sensor, multi-modal surveillance capabilities will be at the core of the future operating environment for the Ministry of Defence; such techniques will also be a core technology in modern society. In addition to traditional physics-based sensors, such as radar, sonar and electro-optics, 'human sensors', e.g. phones, analyst reports and social media, will provide valuable new signals and information that could advance situational awareness, information superiority and autonomy. Transforming and processing this broad range of data into actionable information that meets these requirements presents many new challenges to existing sensor signal processing techniques. In a future where large-scale deployments of multi-modal, multi-source sensors will be distributed across a range of environments, new signal processing techniques are required. It is therefore timely to consider the fundamental questions of scalability, adaptability and resource management of multi-source data, when dealing with data that is high-volume, high-velocity, from non-traditional sources, and highly uncertain.

    The UDRC Phase 3 project, Signal Processing in an Information Age, is an ambitious initiative that brings together internationally leading experts from five leading centres for signal processing, data science and machine learning with ten industry partners. It is led by the Institute of Digital Communications at the University of Edinburgh, in collaboration with the School of Informatics at Edinburgh, Heriot-Watt University, the University of Strathclyde and Queen's University Belfast. This multi-disciplinary consortium brings together unique expertise in sensing, processing and machine learning from across these research centres. The consortium has been involved in defence signal processing research through UDRC Phases 1 and 2, the MOD's Centre for Defence Enterprise, and the US Office of Naval Research. The team has significant experience in technology transfer, including tracking and surveillance (Dstl), advanced radar processing (Leonardo, SEA), broadband beamforming (Thales), automotive lidar and radar systems (ST Microelectronics, Jaguar Land Rover), and deep-learning face recognition for security (AnyVision).

    This project will investigate the fundamental mathematical signal and data processing techniques that will underpin the technologies required in the future operating environment. We will develop underpinning inference algorithms that are computationally efficient, scalable and multi-dimensional, that incorporate non-conventional and heterogeneous information sources, and that provide actionable information. We will investigate multi-objective resource management of dynamic sensor networks that include both physical and human sensors. We will also use powerful machine learning techniques, including deep learning, to enable faster and more robust learning of new tasks, anomalies, threats and opportunities relevant to operational security.
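
    As one concrete example of the kind of scalable Bayesian inference alluded to above, though not necessarily a technique this project will adopt, here is a minimal bootstrap particle filter in Python/NumPy that fuses a stream of noisy scalar sensor observations into a state estimate. The random-walk motion model and noise levels are assumptions for illustration.

        import numpy as np

        def particle_filter(obs, n_particles=500, q=0.1, r=0.5, seed=0):
            """Illustrative bootstrap particle filter for a scalar target state.
            obs: sequence of noisy observations of the state.
            q, r: assumed process and measurement noise standard deviations."""
            rng = np.random.default_rng(seed)
            x = rng.standard_normal(n_particles)        # initial particle cloud
            estimates = []
            for z in obs:
                x = x + q * rng.standard_normal(n_particles)      # predict (random walk)
                w = np.exp(-0.5 * ((z - x) / r) ** 2)             # Gaussian likelihood
                w /= w.sum()
                x = x[rng.choice(n_particles, n_particles, p=w)]  # resample
                estimates.append(x.mean())              # posterior-mean estimate
            return np.array(estimates)

        # Hypothetical usage: track a slowly drifting target from noisy readings.
        truth = np.cumsum(0.05 * np.random.default_rng(1).standard_normal(200))
        est = particle_filter(truth + 0.5 * np.random.default_rng(2).standard_normal(200))

    The same recursive predict-weight-resample structure extends to multiple sensors by multiplying their likelihoods, which is one reason sequential Monte Carlo methods are a staple of multi-sensor tracking.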

  • Funder: UK Research and Innovation
    Project Code: ST/R005265/1
    Funder Contribution: 301,348 GBP

    Machine learning is a computational data analysis technique that offers tremendous benefits over traditional methods. In particular, algorithms can be developed that automatically identify objects or features of interest in digital imaging. Some machine learning algorithms require extensive training through labelled examples, but unsupervised algorithms can learn from the data itself, requiring no pre-labelled training set. This makes such algorithms incredibly versatile: they can easily be applied to many different types of imaging. In principle, an algorithm's performance should improve over time as it 'experiences' more examples of input data. We are developing just such an unsupervised machine learning algorithm for use in large-scale astronomical surveys, one that can also be applied to other 'remote sensing' data, such as underwater sonar imaging of the sea bed and aerial/satellite imagery. Such an algorithm can, for example, help determine the local terrain and identify hazards in complex, changing environments that could be missed by a human inspector. This could feed into AI-assisted navigation units in autonomous vehicles, for example. Our goal in this project is to develop a versatile, robust algorithm that can be deployed in a variety of practical areas, with a view to performing real-time image classification and analysis on input data from both astrophysics and 'real-world' industrial sectors.
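
    As a minimal sketch of what unsupervised image classification can look like, and not the project's actual algorithm, here is a k-means clustering of image patches in Python/NumPy: patches are grouped by appearance alone, with no labelled training set. Patch size, cluster count and the raw-pixel features are all assumptions.

        import numpy as np

        def cluster_patches(image, patch=8, k=5, iters=20, seed=0):
            """Illustrative unsupervised classification: split a 2-D image into
            non-overlapping patches and group them with k-means, using raw pixel
            values as features. Returns a per-patch class map."""
            h, w = image.shape
            rows, cols = h // patch, w // patch
            feats = (image[:rows * patch, :cols * patch]
                     .reshape(rows, patch, cols, patch)
                     .swapaxes(1, 2)
                     .reshape(rows * cols, patch * patch)
                     .astype(float))
            rng = np.random.default_rng(seed)
            centres = feats[rng.choice(len(feats), k, replace=False)]
            for _ in range(iters):
                d = ((feats[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
                labels = d.argmin(1)                    # nearest centre per patch
                for j in range(k):
                    if np.any(labels == j):
                        centres[j] = feats[labels == j].mean(0)
            return labels.reshape(rows, cols)

        # Hypothetical usage on a synthetic sea-bed-like image: smooth background
        # plus a brighter square 'object' the clustering should separate out.
        img = np.random.default_rng(3).normal(0.0, 0.1, (128, 128))
        img[40:72, 40:72] += 1.0
        class_map = cluster_patches(img)    # 16 x 16 map of patch classes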

