Powered by OpenAIRE graph

DYNNIQ UK LTD

Country: United Kingdom
6 Projects, page 1 of 2
  • Funder: UK Research and Innovation Project Code: EP/H012710/1
    Funder Contribution: 103,457 GBP

    Visual analysis by human operators or service personnel is widely acknowledged to benefit from a fused representation, where image or video information from different spectral bands is combined into a single representation. To provide maximum utility, fused data, or its constituent components, must be delivered in a timely manner, must facilitate simple and flexible processing, and must be robust to loss and network congestion.

    Non-infrastructure-based Mobile Ad-Hoc Networks are emerging as suitable platforms for exchanging and fusing real-time multi-sensor content. Such networks are characterised by the highly dynamic behaviour of the transmission routes and high path-outage probabilities. They exemplify the type of complex, heterogeneous end-to-end transmission environments that will be commonly encountered in future military scenarios. The low-bandwidth, noisy nature of the physical channel in many sensor networks represents the most serious challenge to implementation of the digital battlefield of the future. One of the key challenges in such complex networking environments is the need to reliably transport and fuse real-time video. Video is acknowledged to be inherently difficult to transmit, and this is compounded by the need to support multiple sources to aid fusion and situational awareness while maintaining data security. We will focus our work on embedded video bitstreams (MPEG-4 SVC), which offer scalability and enhanced flexibility for adaptation to varying channel types, interference levels, network structures and content types. These avoid the need for highly inefficient video transrating processes and instead present a more tractable requirement in the form of dynamic bitstream management.

    A multi-source approach to streaming is proposed which will support video fusion in a bandwidth-efficient manner while having the potential to significantly increase the robustness of real-time transmission in complex heterogeneous networks.
    Source coding and fusion will be based on the concept of scalability using an embedded bitstream. This means that the source need only be encoded once and that the coded representation can be truncated to support multiple diverse terminal types and to provide inherent congestion management without feedback. Such a system must be designed to maintain optimum fusion performance, and hence intelligibility, in the presence of bitstream truncation. The potential advantages of this scheme include:
    - A joint framework for scalable fusion and compression supporting both lossless and lossy representations.
    - Flexibility for optimisation depending on content type and application.
    - Graceful degradation: the capability of the fused video bitstream to adapt to differing terminal types and dynamic network conditions.
    - Error resilience: the structure of the code stream can aid subsequent error-correction systems, alleviating catastrophic decoding failures.
    - Secure delivery: the ability to design encryption schemes which support truncation.
    - Region-of-interest coding: supporting definition of ROIs for priority transmission.
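The congestion-management idea above — encode once, then truncate the embedded bitstream rather than transrate — can be sketched as follows. This is an illustrative toy, not project code: the layer names and sizes and the byte budget are hypothetical, standing in for the quality/spatial/temporal layers of a real SVC stream.

```python
# Sketch of feedback-free congestion management on an embedded bitstream:
# any prefix of ordered layers decodes to a valid (lower-quality) video,
# so adaptation is just truncation, never re-encoding.

def truncate_embedded_stream(layers, byte_budget):
    """Keep whole layers, in order, until the byte budget is exhausted.

    `layers` is an ordered list of (name, size_in_bytes); the base layer
    comes first, enhancement layers follow in refinement order.
    """
    kept, used = [], 0
    for name, size in layers:
        if used + size > byte_budget:
            break  # drop this and all finer layers: graceful degradation
        kept.append(name)
        used += size
    return kept, used

# A hypothetical frame: one base layer plus three refinement layers.
frame = [("base", 800), ("enh1", 400), ("enh2", 400), ("enh3", 600)]
print(truncate_embedded_stream(frame, 1500))  # (['base', 'enh1'], 1200)
```

Because truncation always removes a suffix, differing terminal types and fluctuating channel capacities are served from the same coded representation, which is the property the abstract's "graceful degradation" advantage relies on.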

  • Funder: UK Research and Innovation Project Code: EP/I003061/1
    Funder Contribution: 101,894 GBP

    This proposal aims to advance the state of the art in 3D face recognition by means of a novel, non-intrusive and highly efficient skin-reflectance capture technology. The techniques developed will, in turn, enable rapid facial-geometry analysis and enhanced recognition rates.

    Face recognition is currently a rapidly growing area of research within industry and academia. Indeed, 2D face recognition is now at a stage where a few industrial applications are possible. However, these methods, which use just a single 2D image of a face to perform the recognition, are severely limited by the fact that the face becomes unrecognisable when variations such as pose, illumination, make-up or expression are present. The 3D shape of the face, by contrast, does not change at all with many of these variations, and changes only minimally with expression. Consequently, an increasing amount of face-recognition research is focussing on ways to use the 3D shape of the face for identification.

    Here, we propose to use a Photometric Stereo (PS) method for 3D shape estimation. The main advantages of the proposed method compared to other 3D face-shape capture devices will be: (1) cheaper hardware to construct, (2) fast acquisition and processing, (3) largely unaffected by ambient illumination, (4) person-specific reflectance considered, (5) more accurate than standard PS, (6) the possibility of using the reflectance properties to aid recognition, and (7) minimal calibration required.

    A large number of methods for using the 3D facial geometry have been proposed in the scientific literature and very promising results have been attained. However, the question of how to capture a subject's 3D face shape prior to recognition remains open. Existing approaches use technology that is too expensive and too slow for most applications.
    This proposal is motivated by the need to address this question. The main contributions of the proposed work will be in two areas: photometric stereo (PS) and reflectance analysis. Photometric stereo is a method of estimating the 3D geometry of an object by imaging it under three or more illumination directions. For this project, we will be using five light sources, and aim to acquire both shape and reflectance information simultaneously. We will be using a high-speed light-camera synchronisation device developed here at UWE for this task. This will allow us to deduce a mapping between the orientations of the recovered surface and the measured pixel intensities, which will form a quantitative measure of the skin's reflectance properties. An iterative method will then be used to update the surface estimate and the reflectance properties until convergence. Thus, we will arrive at a lookup-table set of reflectance measurements and an optimal shape estimate, which will allow for improved face recognition. This is a novel approach to PS and should allow us to relax some of the strong assumptions on surface orientation that most current methods impose. The main challenge here will be in forming the relationships between the image-based skin-reflectance measurements and the skin orientation for the whole face, in order to acquire the optimal 3D shape estimate.

    The final stage of the project will involve applying face-recognition methods developed previously, both at the MVL and at other institutions, for a comparative analysis. This will demonstrate improvements in recognition rates compared to 3D methods using standard PS and other techniques.
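For readers unfamiliar with photometric stereo, the core per-pixel computation can be sketched in a few lines. This is a minimal three-light, Lambertian version for illustration only (the proposal uses five lights and iteratively refines a person-specific reflectance model); light directions, normal and albedo values below are made up.

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        xs.append(det(Ai) / d)
    return xs

def photometric_stereo_pixel(lights, intensities):
    """Recover albedo and unit surface normal at one pixel.

    Lambertian model: I_k = albedo * dot(l_k, n). With three known,
    non-coplanar unit light directions L, solve L g = I; then
    albedo = |g| and n = g / |g|.
    """
    g = solve3(lights, intensities)
    albedo = math.sqrt(sum(x * x for x in g))
    n = [x / albedo for x in g]
    return albedo, n

# Hypothetical setup: three axis-aligned lights, a known normal and albedo,
# from which we synthesise the three observed intensities.
L = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
true_n, true_albedo = [0.0, 0.6, 0.8], 0.5
I = [true_albedo * sum(l * nc for l, nc in zip(row, true_n)) for row in L]
albedo, n = photometric_stereo_pixel(L, I)
print(round(albedo, 3), [round(x, 3) for x in n])  # 0.5 [0.0, 0.6, 0.8]
```

With five lights, as proposed, the system is overdetermined and is solved by least squares, which is what makes the method more accurate than standard three-light PS and lets the skin's measured reflectance feed back into the shape estimate.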

  • Funder: European Commission Project Code: 723390
    Overall Budget: 3,836,350 EUR
    Funder Contribution: 3,836,350 EUR

    As the introduction of automated vehicles becomes feasible, even in urban areas, it will be necessary to investigate their impacts on traffic safety and efficiency. This is particularly true during the early stages of market introduction, when automated vehicles of all SAE levels, connected vehicles (able to communicate via V2X) and conventional vehicles will share the same roads with varying penetration rates. There will be zones and situations on the roads where high automation can be granted, and others where it is not allowed or not possible due to missing sensor inputs, highly complex situations, etc. In the areas where those zones merge, many automated vehicles will change their activated level of automation. We therefore refer to these areas as “Transition Areas”.

    TransAID will develop and demonstrate traffic-management procedures and protocols to enable the smooth coexistence of automated, connected and conventional vehicles, especially at Transition Areas. A hierarchical approach will be followed, where control actions are implemented at different layers, including centralised traffic management, infrastructure and vehicles. First, simulations will be performed to find optimal infrastructure-assisted management solutions for controlling connected, automated and conventional vehicles at Transition Areas, taking into account traffic-safety and efficiency metrics. Then, communication protocols for cooperation between connected/automated vehicles and the road infrastructure will be developed. Measures to detect and inform conventional vehicles will also be addressed. The most promising solutions will be implemented as real-world prototypes and demonstrated under real urban conditions. Finally, guidelines for advanced infrastructure-assisted driving will be formulated. The guidelines will also include a roadmap defining activities and needed upgrades of road infrastructure over the coming 15 years, in order to guarantee a smooth coexistence of conventional, connected and automated vehicles.

  • Funder: UK Research and Innovation Project Code: EP/E028845/1
    Funder Contribution: 268,478 GBP

    We propose to construct a system for 3D face recognition. We propose to use photometric stereo for face reconstruction in order to bypass the problems of conventional stereo (which needs to solve the matching problem first), structured light (which does not supply colour information) and photometric stereo with spectrally distinct light sources (which relies on the assumption of uniformly coloured imaged objects). Photometric stereo (PS) can reproduce structural details and colour on a per-pixel basis in a way that no other 3D system can. The proposed scheme will be appropriate for use in a controlled environment for authentication purposes, but also in a general environment, e.g. the entrance of a public event.

    We shall use two routes: surface reconstruction from the data, and direct extraction of facial characteristics from the PS set. In the first approach, once the surface normals and albedo are recovered, images of the face may be synthetically rendered under arbitrary new pose and illumination conditions to allow novel viewing conditions. We also aim to use a new multi-scale facial-feature matching approach in the recognition process, where facial features range from overall face and head shape to fine skin dermal topography, reflectance and texture. The latter may be thought of as a form of detailed surface bump map forming a unique skin-print or signature, and represents a new approach. Hence both the 3D shape and the 2D intensity data will be used in recognition or authentication tasks. We propose to use scalable methods for matching, so that we can cope with large databases. 3D matching will be done with the newly proposed invaders algorithm, which is based on FFT cross-correlation, and more detailed matching will be done using features and classifier combination.
    The novelty of our approach lies in the use of PS to extract 3D information, the use of detailed facial characteristics such as moles, scratches and skin texture, and in the design of the system so that it can operate while the person is moving, with minimum intrusion and maximum efficiency. We have two industrial collaborators who will contribute to system design, data gathering and exploitation, with support from the Home Office. We shall evaluate our system under three possible scenarios: a face is searched for in a crowd (real-time face recognition), a person has to be identified (off-line face recognition), and a person has to be checked against a claimed identity (face authentication). We shall install the first prototype system in the offices of one of our industrial partners in month 12, so that data can be collected. We envisage a door-like structure with lights flashing in succession as a person walks through, while a camera captures images. We propose to investigate the optimal number of lights in terms of efficiency and accuracy of the reconstruction, and the option of using non-visible light to avoid problems with people sensitive to flashes. We shall also investigate the relationship between the detail that has to be captured and the geometry of the construction.
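The abstract's 3D matching step rests on cross-correlation: slide a template over a signal and take the offset with the highest score. The sketch below computes that correlation directly on a hypothetical 1D feature profile; the proposal's invaders algorithm evaluates the same quantity in the frequency domain via the convolution theorem (corr = IFFT(FFT(a) · conj(FFT(b)))), which is what makes it scale to large databases. The signals here are made up, and no detail of the invaders algorithm itself is implied.

```python
# Direct (spatial-domain) cross-correlation matching. The FFT route gives
# the same scores in O(n log n) instead of O(n*m), per the convolution theorem.

def cross_correlation(signal, template):
    """Score the template against the signal at every valid offset."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

def best_offset(signal, template):
    """Return the offset where the template matches best."""
    scores = cross_correlation(signal, template)
    return max(range(len(scores)), key=lambda i: scores[i])

# Hypothetical feature profile: the template pattern occurs at offset 3.
sig = [0, 0, 1, 5, 9, 5, 1, 0, 0]
tpl = [5, 9, 5]
print(best_offset(sig, tpl))  # 3
```

In practice the raw score would be normalised (e.g. zero-mean, unit-energy patches) so that bright regions do not dominate, and the same idea extends to 2D by correlating image patches instead of 1D profiles.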

  • Funder: UK Research and Innovation Project Code: EP/E028659/1
    Funder Contribution: 257,046 GBP

    We propose to construct a system for 3D face recognition. We propose to use photometric stereo for face reconstruction in order to bypass the problems of conventional stereo (which needs to solve the matching problem first), structured light (which does not supply colour information) and photometric stereo with spectrally distinct light sources (which relies on the assumption of uniformly coloured imaged objects). Photometric stereo (PS) can reproduce structural details and colour on a per-pixel basis in a way that no other 3D system can. The proposed scheme will be appropriate for use in a controlled environment for authentication purposes, but also in a general environment, e.g. the entrance of a public event.

    We shall use two routes: surface reconstruction from the data, and direct extraction of facial characteristics from the PS set. In the first approach, once the surface normals and albedo are recovered, images of the face may be synthetically rendered under arbitrary new pose and illumination conditions to allow novel viewing conditions. We also aim to use a new multi-scale facial-feature matching approach in the recognition process, where facial features range from overall face and head shape to fine skin dermal topography, reflectance and texture. The latter may be thought of as a form of detailed surface bump map forming a unique skin-print or signature, and represents a new approach. Hence both the 3D shape and the 2D intensity data will be used in recognition or authentication tasks. We propose to use scalable methods for matching, so that we can cope with large databases. 3D matching will be done with the newly proposed invaders algorithm, which is based on FFT cross-correlation, and more detailed matching will be done using features and classifier combination.
    The novelty of our approach lies in the use of PS to extract 3D information, the use of detailed facial characteristics such as moles, scratches and skin texture, and in the design of the system so that it can operate while the person is moving, with minimum intrusion and maximum efficiency. We have two industrial collaborators who will contribute to system design, data gathering and exploitation, with support from the Home Office. We shall evaluate our system under three possible scenarios: a face is searched for in a crowd (real-time face recognition), a person has to be identified (off-line face recognition), and a person has to be checked against a claimed identity (face authentication). We shall install the first prototype system in the offices of one of our industrial partners in month 12, so that data can be collected. We envisage a door-like structure with lights flashing in succession as a person walks through, while a camera captures images. We propose to investigate the optimal number of lights in terms of efficiency and accuracy of the reconstruction, and the option of using non-visible light to avoid problems with people sensitive to flashes. We shall also investigate the relationship between the detail that has to be captured and the geometry of the construction.


