Foundry (United Kingdom)

9 Projects, page 1 of 2
  • Funder: UK Research and Innovation
    Project Code: EP/M021793/1
    Funder Contribution: 99,139 GBP

    Scene modelling is central to many applications in our society, including quality control in manufacturing, robotics, medical imaging, visual effects production, cultural heritage and computer games. It requires accurate estimation of the scene's shape (its 3D surface geometry) and reflectance (how its surface reflects light). However, there is currently no method capable of capturing the shape and reflectance of dynamic scenes with complex surface reflectance (e.g. glossy surfaces). This lack of generic methods is problematic as it limits the applicability of existing techniques to scene categories which are not representative of the complexity of natural scenes and materials. This project will introduce a general framework to enable the capture of shape and reflectance of complex dynamic scenes, thereby addressing an important gap in the field.

    Current image- or video-based shape estimation techniques rely on the assumption that the scene's surface reflectance is diffuse (it reflects light uniformly in all directions) or assume it is known a priori, thus limiting their applicability to simple scenes. Reflectance estimation requires estimation of a 6-dimensional function (the BRDF), which describes how light is reflected at each surface point as a function of the incident light direction and the viewing direction (two dimensions for the position on the surface and two angles for each of the two directions). Due to this high dimensionality, reflectance estimation remains limited to static scenes or requires expensive specialist equipment. At present, there is no method capable of accurately capturing both shape and reflectance of general dynamic scenes, yet scenes with complex unknown reflectance properties are omnipresent in our daily lives.

    The proposed research will address this gap by introducing a novel framework which enables estimation of shape and reflectance for arbitrary dynamic scenes. The approach is based on two key scientific advances which tackle the high dimensionality of shape and reflectance estimation. First, a general methodology for decoupling shape estimation from reflectance estimation will be proposed; this will allow decomposition of the original high-dimensional, ill-posed problem into smaller sub-problems that are tractable. Second, a space-time formulation of reflectance estimation will be introduced; this will use dense surface tracking to extend reflectance estimation to the temporal domain and thereby increase the number of observations beyond the inherently small number available at a single time instant. This builds on the PI's pioneering research in 3D reconstruction of scenes with arbitrary unknown reflectance properties and his expertise in dynamic scene reconstruction, surface tracking/animation and reflectance estimation. This research represents a radical shift in scene modelling which will result in several major technical contributions: 1) a reflectance-independent shape estimation methodology for dynamic scenes, 2) a non-rigid surface tracking method suitable for general scenes with complex and unknown reflectance, and 3) a general and scalable reflectance estimation method for dynamic scenes.

    This will benefit all areas that require accurate acquisition of the shape and reflectance of complex dynamic real-world scenes without the need for complex and restrictive hardware setups. Such scenes are a common occurrence in natural environments, manufacturing (metallic surfaces) and medical imaging (human tissue), but accurate capture of shape is not possible with existing approaches, which assume diffuse reflectance and fail dramatically in such cases. This work will achieve, for the first time, accurate modelling of dynamic scenes with arbitrary surface reflectance properties, thus opening up novel avenues in scene modelling. The application of this technology will be demonstrated in digital cinema, in collaboration with industrial partners, to support the development of the next generation of visual effects.
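    To make the dimensionality concrete, the following is a minimal Python sketch of a spatially varying BRDF: its value depends on a 2D position on the surface plus two angles each for the incident and viewing directions, six dimensions in total. The Lambertian-plus-Blinn-Phong model and every name below are illustrative assumptions, not the project's formulation.

        import numpy as np

        def normalize(v):
            return v / np.linalg.norm(v)

        def toy_svbrdf(uv, w_i, w_o, albedo_map, shininess_map,
                       normal=np.array([0.0, 0.0, 1.0])):
            """Toy spatially varying BRDF.
            uv  : 2D position on the surface       (2 dimensions)
            w_i : direction of the incident light  (2 angular dimensions)
            w_o : direction towards the viewer     (2 angular dimensions)
            The diffuse albedo and specular shininess vary over the surface,
            which is what makes the full function six-dimensional."""
            kd = albedo_map(uv)                    # per-point diffuse reflectance
            alpha = shininess_map(uv)              # per-point specular exponent
            h = normalize(w_i + w_o)               # Blinn-Phong half vector
            return kd / np.pi + max(float(np.dot(normal, h)), 0.0) ** alpha

        # Hypothetical usage: a surface that is shinier towards its centre.
        albedo_map = lambda uv: 0.7
        shininess_map = lambda uv: 10.0 + 90.0 * np.exp(-np.sum((uv - 0.5) ** 2))
        w_i = normalize(np.array([0.3, 0.0, 1.0]))    # light direction
        w_o = normalize(np.array([-0.2, 0.1, 1.0]))   # viewing direction
        print(toy_svbrdf(np.array([0.5, 0.5]), w_i, w_o, albedo_map, shininess_map))

    Recovering such a function for every point of a moving surface from ordinary imagery is what makes the joint estimation problem so ill-posed, and is what the decoupling and space-time strategies above are designed to tame.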

  • Funder: UK Research and Innovation
    Project Code: EP/S001050/1
    Funder Contribution: 555,408 GBP

    The goal of my Innovation Fellowship is to create a new form of immersive 360-degree VR video. We are massive consumers of visual information, and as new forms of visual media and immersive technologies emerge, I want to work towards my vision of making people feel truly immersed in this new form of video content. Imagine, for instance, what it would be like to experience the International Space Station as if you were there, without leaving the comfort of your own home.

    The Problem: To feel truly immersed in virtual reality, one needs to be able to freely look around within a virtual environment and see it from the viewpoints of one's own eyes. Immersion requires 'freedom of motion' in six degrees of freedom ('6-DoF'), so that viewers see the correct views of an environment. As viewers move their heads, the objects they see should move relative to each other, at different speeds depending on their distance to the viewer; this is called motion parallax. Viewers need to perceive correct motion parallax regardless of where they are (3 DoF) and where they are looking (+3 DoF). Currently, only computer-generated imagery (CGI) fully supports 6-DoF content with motion parallax, but it remains extremely challenging to match the visual realism of the real world with computer graphics models. Viewers therefore either lose photorealism (with CGI) or immersion (with existing VR video). To date, it is not possible to capture or view high-quality 6-DoF VR video of the real world.

    My Goal: Virtual reality is a new kind of medium that requires new ways to author content. My goal is therefore to create a new form of immersive 360-degree VR video that overcomes the limitations of existing 360-degree VR video. This new form of VR content, 6-DoF VR video, will achieve unparalleled realism and immersion by providing freedom of head motion and motion parallax, which is a vital depth cue for the human visual system and entirely missing from existing 360-degree VR video. Specifically, the aim of this Fellowship is to accurately and comprehensively capture real-world environments, including visual dynamics such as people and moving animals or plants, and to reproduce the captured environments and their dynamics in VR with photographic realism, correct motion parallax and overall depth perception. 6-DoF VR video is a new virtual reality capability that will be a significant step forward for overall immersion, realism and quality of experience.

    My Approach: To achieve 6-DoF VR video that enables photorealistic exploration of dynamic real environments in 360-degree virtual reality, my group and I will develop novel video-based capture, 3D reconstruction and rendering techniques. We will first explore different approaches for capturing static and dynamic 360-degree environments (the latter being more challenging), including 360-degree cameras and multi-camera rigs. We will then reconstruct the 3D geometry of the environments from the captured imagery by extending multi-view geometry/photogrammetry techniques to handle dynamic 360-degree environments. Extending image-based rendering to 360-degree environments will enable 6-DoF motion within a photorealistic 360-degree environment with high visual fidelity, and will result in detailed 360-degree environments covering all possible viewing directions. We will first target 6-DoF 360-degree VR photographs (i.e. static scenes) and then extend our approach to 6-DoF VR videos.

    Project partners: This Fellowship is supported by the following project partners in the UK and abroad. Foundry (London) is a leading developer of visual effects software for film, video and VR post-production, and is ideally suited to advise on industrial impact. REWIND (St Albans) is a leading creative VR production company that is keen to experiment with 6-DoF VR video. Reality7 (Hamburg, Germany) is a start-up working on cinematic VR video.
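    As a purely illustrative aside, the inverse-depth behaviour of motion parallax described under 'The Problem' can be sketched in a few lines of Python under a simple pinhole-camera assumption; the focal length, head offset and depths below are made-up values, not project data.

        def screen_shift(depth, head_offset, focal_length=1.0):
            """Horizontal image-space shift (in normalised image units) of a point
            at the given depth when the viewer's head translates sideways by
            head_offset. For a pinhole camera the shift is
            focal_length * head_offset / depth, so nearby objects appear to move
            faster than distant ones."""
            return focal_length * head_offset / depth

        head_offset = 0.05                  # the viewer moves 5 cm to the side
        for depth in (0.5, 2.0, 10.0):      # distance to the object in metres
            print(f"depth {depth:4.1f} m -> shift {screen_shift(depth, head_offset):.4f}")

    Nearby objects shift noticeably on screen while distant ones barely move; this is exactly the depth cue that existing 360-degree VR video cannot reproduce.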

  • Funder: UK Research and Innovation
    Project Code: EP/S001816/1
    Funder Contribution: 557,530 GBP

    Our bodies move as we speak. Evidently, movement of the jaw, lips and tongue is required to produce coherent speech. Furthermore, additional body gestures both synchronise with the voice and significantly contribute to speech comprehension. For example, a person's eyebrows raise when they are stressing a point, their head shakes when they disagree and a shrug might express doubt. The goal is to build a computational model that learns the relationship between speech and upper-body motion so that we can automatically predict face and body posture for any given speech audio. The predicted body pose can be transferred to computer graphics characters, or avatars, to automatically create character animation directly from speech, on the fly.

    A number of approaches have previously been used for mapping from audio to facial motion or head motion, but the limited amount of speech and body motion data available has hindered progress. Our research programme will use a field of machine learning called transfer learning to overcome this limitation. Our research will be used to automatically and realistically animate the face and upper body of a graphics character in sync with a user's voice in real time. This is valuable for a) controlling the body motion of avatars in multiplayer online gaming, b) driving a user's digital presence in virtual reality (VR) scenarios, and c) automating character animation in television and film production. The work will enhance the realism of avatars during live interaction between users in computer games and social VR without the need for full body tracking. Additionally, we will significantly reduce the time required to produce character animation by removing the need for expensive and time-consuming hand animation or motion capture.

    We will develop novel artificial intelligence approaches to build a robust speech-to-body-motion model. For this, we will design and collect a video and motion capture dataset of people speaking, which will be made publicly available. The project team comprises Dr. Taylor and a PDRA at the University of East Anglia, Norwich, UK.
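    As a rough illustration of the transfer-learning idea mentioned above (reuse a speech representation learned from plentiful audio-only data and fine-tune only a small pose decoder on the scarce paired speech-and-motion data), here is a hypothetical PyTorch sketch; the architecture, feature sizes and all names are assumptions, not the project's model.

        import torch
        import torch.nn as nn

        class SpeechToPose(nn.Module):
            """Hypothetical speech-to-upper-body-motion model."""
            def __init__(self, n_audio_feats=80, n_pose_params=30, hidden=256):
                super().__init__()
                # Audio encoder: would be pre-trained on a large speech-only corpus.
                self.encoder = nn.GRU(n_audio_feats, hidden, batch_first=True)
                # Pose decoder: the small part fine-tuned on limited paired data.
                self.decoder = nn.Linear(hidden, n_pose_params)

            def forward(self, audio_feats):           # (batch, time, n_audio_feats)
                h, _ = self.encoder(audio_feats)
                return self.decoder(h)                # (batch, time, n_pose_params)

        model = SpeechToPose()
        # Transfer-learning step: freeze the pre-trained encoder, train only the decoder.
        for p in model.encoder.parameters():
            p.requires_grad = False
        optimiser = torch.optim.Adam(model.decoder.parameters(), lr=1e-3)

        # Dummy batch standing in for mel-spectrogram frames and pose targets.
        audio = torch.randn(4, 100, 80)
        target_pose = torch.randn(4, 100, 30)
        loss = nn.functional.mse_loss(model(audio), target_pose)
        loss.backward()
        optimiser.step()

    Freezing the pre-trained encoder means the limited motion-capture data only has to constrain the small decoder, which is the main appeal of transfer learning when paired speech-and-motion data is scarce.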

  • Funder: UK Research and Innovation
    Project Code: EP/K02339X/1
    Funder Contribution: 959,780 GBP

    Imagine being able to take a camera out of doors and use it to capture 3D models of the world around you: the landscape at large, including valleys and hills replete with trees, rivers, waterfalls, fields of grass and clouds; seasides with waves rolling onto shore here and crashing onto rocks over there; urban environments complete with incidentals such as lampposts, balconies and the detritus of modern life. Imagine models that look and move like the real thing. Models that you can use to make up new scenes of your own, which you can control as you please and render however you like. You can zoom in to see details, and out to get a wide impression.

    This is an ambitious vision, and one that is well beyond current know-how. Our plan is to take a major step towards meeting it. We will enable users to use video and images to capture large-scale scenes of selected types and populate them with models of trees, fountains, street furniture and the like, again carefully selecting the types of objects. We will provide software that recognises the sort of environment the camera is in, and the objects in that environment, so that 3D moving models can be created automatically.

    This will prove very useful to our intended user group, the creative industries in the UK: film, games and broadcast. Modelling outdoor scenes is expensive and time consuming, and the industry recognises that video and images are excellent sources for making models they can use. To help them further, we will develop software that makes use of their current practice of acquiring survey shots of scenes, so that all data is used at many levels of detail. Finally, we will wrap all of our developments into a single system, showing that the acquisition, editing and control of complete outdoor environments is one step closer.
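    For readers unfamiliar with how 3D models are recovered from video and images, the following hypothetical Python sketch shows two-view linear (DLT) triangulation, one of the basic building blocks of such pipelines; the camera matrices and the point correspondence are made up for illustration and are not the project's software.

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """Linear (DLT) triangulation: recover a 3D point from its pixel
            coordinates x1, x2 in two images with projection matrices P1, P2."""
            A = np.stack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]                       # de-homogenise

        # Hypothetical cameras: identical intrinsics, second camera shifted along x.
        K = np.array([[500.0, 0.0, 320.0],
                      [0.0, 500.0, 240.0],
                      [0.0,   0.0,   1.0]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

        X_true = np.array([0.2, -0.1, 4.0, 1.0])      # ground-truth 3D point
        x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]     # its projection in view 1
        x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]     # its projection in view 2
        print(triangulate(P1, P2, x1, x2))            # ~ [0.2, -0.1, 4.0]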

  • Funder: UK Research and Innovation
    Project Code: EP/M023281/1
    Funder Contribution: 3,994,060 GBP

    The Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA) will build on and extend existing impactful relationships between leading researchers at the University of Bath, supported by investment from the University and from external partners, and with the close participation of Bath's EPSRC Doctoral Training Centre for Digital Entertainment (CDE). Building on existing expertise in Applied Visual Technology and closely linked with the CDE, CAMERA will draw on knowledge, skills and outputs across multi-disciplinary research areas, including Computer Vision, Graphics, Motion Capture, Human-Computer Interaction, Biomechanics and Healthcare, underpinned by a strong portfolio of DE research funding from RCUK and other funders. CAMERA will deliver Applied Visual Technology into our partner companies and their industries, to achieve high economic, societal and cultural impact.

    Bath leads the UK in innovative creative-industry research and training for postgraduates through our CDE, which is contractually partnered with 35 innovative UK companies. Growing from our established core strength in the area of Visual Technology (capturing, modelling and visualising the real world) and our strong historical foundation of entertainment-delivered research, CAMERA will focus on high-impact work in movies, TV visual effects (VFX) and video games with partners The Imaginarium and The Foundry, two of the world's leading visual entertainment companies. This focused collaboration will push the boundaries of technology in the areas of human motion capture, understanding and animation, and artist-driven visual effects production, feeding into our existing CDE partnerships.

    From this strong foundation, we will extend and apply visual technology to new areas of high economic, societal and cultural impact within the digital economy theme. These include Human Performance Enhancement, with partners in British Skeleton and BMT Defence Services, and Health, Rehabilitation and Assistive Technologies, with partners in the Ministry of Defence. CAMERA is well placed to lead the application of Visual Technology in these new directions: Bath researchers have helped athletes to win 15 Olympic and World Championship medals in the last 10 years and have contributed significantly to national efforts in integrating ex-soldiers with disabilities into civilian life.
