Sony Broadcast and Professional Europe

Country: United Kingdom

8 Projects
  • Funder: UK Research and Innovation
    Project Code: GR/T21387/01
    Funder Contribution: 42,946 GBP

    Spatial audio research is concerned with the presentation of music and audio material to an audience in a manner that is optimized according to the spatial properties of the human hearing system. Ultimately this work seeks to rise to the challenge of synthesizing a complex three-dimensional acoustic world that is indistinguishable from what we normally hear around us.

    Work in this area has reached a point where multichannel delivery systems are finally becoming accepted in the domestic environment, predominantly through applications in the entertainment industry such as DVD, home-theatre systems and multimedia computing technology. These recent developments precipitate the need to identify future directions for research in this increasingly demanding area.

    UK spatial audio research is currently a small field, although a number of unique areas of activity ensure its visibility in the wider international community. The proposed network aims to build on this work to create a community of researchers, practitioners and artists drawn from the fields of science, audio engineering and the arts. It is anticipated that, through this network and the interface it provides between theoretical, experimental and creative approaches, the effective coordination of these researchers and practitioners will lead to the identification, articulation and development of important directions for future spatial audio work.

  • Funder: UK Research and Innovation
    Project Code: EP/L00383X/1
    Funder Contribution: 704,220 GBP

    We have yet to experience a complete lifespan in the Digital Age, from conception to death in old age. Those who have grown up interacting with digital technology from a very early age are still young, whilst older technology adopters have identities that pre-date the Digital Age, populated with paper trails of memories. Many citizens have only a limited awareness of the permanency and consequence of posting to public and extended social circles. Digital posts from student or teenage years reflecting opinions or behaviour that seemed socially appropriate at the time may not reflect well on future professional life, and digitally mediated interactions produced in life may be seen in an undesirable light if they linger after physical death. The lifelong digital trails generated through our digitally mediated interactions, including online, echo our 'offline' lives; but unlike a physical life, the Digital Lifespan can persist indefinitely, and the rich personal context it provides can be harnessed in ways an individual might not expect or desire.

    In this EPSRC-funded research, we will produce unique insights into the digital lifespan of UK citizens, both now and in a future where today's young Digital Natives approach adulthood, become parents, retire and pass away. To generate these insights, we will first chart the unmapped territory of the "Digital Lifespan" as it is now in the UK, exploring the ways in which the virtual and physical aspects of our lives converge, diverge and clash. This chart will be grounded in a series of in-depth studies with UK citizens at four transition points in their lives: approaching adulthood, becoming parents, retiring, and bereavement. The chart will guide us as we look into a future where citizens increasingly live out their lives through digitally mediated interactions. We will explore the implications of this future with individuals, policymakers, legislators and industry representatives.

    The knowledge and insight developed into issues surrounding the ownership and management of citizens' Digital Lifespans will be used to raise digital literacy. New technologies will be designed and developed to bring personal digital content together in one place, creating a far richer picture than that afforded by currently available tools. These new technologies will automatically draw out the personal context of such content, making inferential links and distilling the impressions that citizens present of themselves through digital media. These distilled impressions will be reflected back to individuals, raising digital literacy by promoting awareness of how individuals' digital identities are (or will in future be) represented online over their entire lifespan. Further, these novel technologies will equip citizens with ways to manage the impression that they give. Beyond individual citizens, our work will inform educators, policymakers and legislators, providing a deeper understanding of what it means to live as a UK citizen in a Digital Age.

  • Funder: UK Research and Innovation
    Project Code: EP/P03456X/1
    Funder Contribution: 498,315 GBP

    Future wireless systems are expected to constitute an ultra-dense wireless network that supports billions of smart wireless devices (or machines), providing a wide variety of services for smart homes, smart cities, smart transportation systems, smart healthcare and smart environments, in addition to supporting conventional human-initiated mobile communications. The communication technologies employed in future wireless systems must therefore cope with highly diverse service requirements and communication environments, both of which are also time-varying. However, legacy wireless systems, such as LTE/LTE-A, have been designed primarily for human-initiated mobile communications and rely on strict synchronisation guaranteed by a substantial signalling overhead. Due to this overhead, legacy systems are inefficient for device-centric massive machine-type communication (mMTC); furthermore, they are unable to support the massive connectivity required by future mMTC networks, where devices heavily contend for the limited communication resources available. This project is proposed at a time when myriad smart wireless devices of different types are being deployed and connected via the Internet, which is expected to be the next revolution in the mobile ecosystem.

    To meet these demands, a new design paradigm is required to support a massive number of wireless devices with diverse service requirements and unique traffic characteristics. In this project, we propose to meet the challenges of future mMTC by investigating and designing novel non-orthogonal multiple access, flexible duplexing and adaptive coherent-noncoherent transmission schemes, as well as new waveforms tailored for future mMTC systems. We aim to alleviate the strict synchronism demanded by legacy wireless systems and to significantly improve their capabilities and network performance, as well as the lifetime of autonomous mMTC nodes. The novelties of this project are summarized as follows.

    1. New non-orthogonal sparse code multiple access (SCMA) schemes will be developed for mMTC systems in which the number of devices exceeds the number of available resource slots, resulting in an over-loaded, or generalized rank-deficient, condition (see the sketch after this description).
    2. Novel multicarrier waveforms will be designed for future mMTC to maximize spectrum efficiency by minimizing the synchronisation overhead and reducing out-of-band radiation.
    3. By jointly exploiting the resources available in the time, frequency and spatial domains, we will design noncoherent, partially-coherent and adaptive coherent-noncoherent transmission schemes that strike the best possible trade-off among overhead reduction, energy and spectral efficiency, latency and implementation complexity in practical mMTC scenarios.
    4. We will investigate the full potential of the multicarrier-division duplex (MDD) scheme and, especially, its applications to future mMTC by synergistically combining it with the novel multicarrier waveforms, non-orthogonal SCMA techniques and other high-efficiency transmission schemes developed within the project.
    5. The key techniques developed in the project will be prototyped and integrated into the 5G Innovation Centre (5GIC) test-bed facilities at the University of Surrey, allowing us to demonstrate the viability of our new design approaches and to accelerate knowledge transfer and commercialisation.

    The proposed research will be conducted jointly by the 5GIC at the University of Surrey and Southampton Wireless (SW) at the University of Southampton, led by Xiao, Tafazolli, Yang and Hanzo. The research and commercial exploitation of the project will be further consolidated by our partnership with experienced academic and industrial partners.
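
    As a rough illustration of the overloading idea behind point 1 above, here is a minimal sketch, not taken from the project: the 4-resource, 6-device mapping and the toy signal model are assumptions chosen for illustration. It shows how SCMA lets more devices share a frame than there are orthogonal resource slots, while sparsity keeps the number of colliding devices per resource small.

```python
import numpy as np

K, J = 4, 6  # K resource elements, J devices -> 150% overloading
# Indicator (factor-graph) matrix F: F[k, j] = 1 if device j transmits
# on resource k. Each device occupies 2 resources; each resource is
# shared by 3 devices -- a classic 4x6 SCMA mapping.
F = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])

print("overloading factor:", J / K)            # 1.5
print("devices per resource:", F.sum(axis=1))  # [3 3 3 3]
print("resources per device:", F.sum(axis=0))  # [2 2 2 2 2 2]

# The received vector has only K entries yet carries J devices'
# symbols: a deliberately rank-deficient (over-loaded) system that an
# orthogonal access scheme could not support.
rng = np.random.default_rng(0)
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=J)  # toy symbols
h = rng.standard_normal(J) + 1j * rng.standard_normal(J)    # flat fading
noise = 0.1 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
y = F @ (h * x) + noise
print("received y:", y)
```

    A real SCMA transmitter maps bits to multi-dimensional sparse codewords rather than single symbols, and the receiver exploits the sparsity of F to run a low-complexity message-passing detector over the factor graph.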

  • Funder: UK Research and Innovation
    Project Code: EP/L000539/1
    Funder Contribution: 5,415,200 GBP

    3D sound can offer listeners the experience of "being there" at a live event, such as the Proms or the Olympic 100m, but currently requires highly controlled listening spaces and loudspeaker setups. The goal of S3A is to realise practical 3D audio for the general public, enabling immersive experiences at home or on the move. Virtually the whole of the UK population consumes audio, and S3A aims to unlock the creative potential of 3D sound and deliver to listeners a step change in immersive experience. This requires a radically new, listener-centred approach to audio, enabling 3D sound production to adapt dynamically to the listener's environment.

    Achieving immersive audio experiences in uncontrolled living spaces presents a significant research challenge. It requires major advances in our understanding of the perception of spatial audio, together with new representations of audio and the signal processing that allows content creation and perceptually accurate reproduction. Existing audio production formats (stereo, 5.1) and those proposed for future cinema spatial audio (24 or 128 channels) are channel-based, requiring specific controlled loudspeaker arrangements that are simply not practical for the majority of home listeners.

    S3A will pioneer a novel object-based methodology for audio signal processing that allows flexible production and reproduction in real spaces, with reproduction adaptive to the loudspeaker configuration, room acoustics and listener locations (a minimal illustration follows this description). The fields of audio and visual 3D scene understanding will be brought together to identify and model audio-visual objects in complex real scenes. Audio-visual objects are sound sources or events with known spatial properties of shape and location over time, e.g. a football being kicked, a musical instrument being played or the crowd chanting at a football match. Object-based representation will transform audio production from existing channel-based signal mixing (stereo, 5.1, 22.2) to spatial control of isolated sound sources and events. This will realise the creative potential of 3D sound, enabling intelligent user-centred content production, transmission and reproduction of 3D audio content in platform-independent formats. Object-based audio will allow flexible delivery (broadcast, IP and mobile) and adaptive reproduction of 3D sound on existing and new digital devices.
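
    To make the object-based idea concrete, here is a minimal sketch using classic 2-D amplitude panning (VBAP, Pulkki 1997). This is an assumption chosen for illustration, not the project's renderer: the same audio object, described only by its azimuth metadata, is rendered to whatever loudspeaker layout is present, which is exactly the kind of layout-adaptive reproduction a fixed channel-based mix cannot offer.

```python
import numpy as np

def pan_gains(source_az_deg, speaker_az_deg):
    """2-D pairwise amplitude panning: distribute one audio object's
    signal over the two loudspeakers whose arc contains its azimuth."""
    src = np.radians(source_az_deg)
    spk = np.radians(np.asarray(speaker_az_deg, dtype=float))
    order = np.argsort(spk)          # walk the speakers around the circle
    gains = np.zeros(len(spk))
    for i in range(len(spk)):
        a = spk[order[i]]
        b = spk[order[(i + 1) % len(spk)]]
        arc = (b - a) % (2 * np.pi)
        off = (src - a) % (2 * np.pi)
        if off <= arc:               # source lies between speakers a and b
            # Solve L g = p, the columns of L being the speaker unit vectors.
            L = np.array([[np.cos(a), np.cos(b)],
                          [np.sin(a), np.sin(b)]])
            p = np.array([np.cos(src), np.sin(src)])
            g = np.clip(np.linalg.solve(L, p), 0.0, None)
            g /= np.linalg.norm(g)   # constant-power normalisation
            gains[order[i]] = g[0]
            gains[order[(i + 1) % len(spk)]] = g[1]
            break
    return gains

# One object at 30 degrees azimuth, rendered to two different layouts:
print(pan_gains(30.0, [-30, 30, -110, 110]))     # 4-speaker room
print(pan_gains(30.0, [-45, 0, 45, -135, 135]))  # 5-speaker room
```

    Only the object's metadata is fixed; the gains are recomputed for each layout at the reproduction end, which is the essence of transmitting objects rather than channels.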

  • Funder: UK Research and Innovation
    Project Code: AH/S003622/1
    Funder Contribution: 7,554,430 GBP

    StoryFutures Academy is a genuine HEI-industry collaboration between trainers and producers to develop the storytelling techniques and languages that will shape the future of immersive narrative. Led by the National Film & Television School (NFTS) and Royal Holloway, our bid is founded on research and training knowledge that places storytelling at its heart. We will provide core screen-sector talent with the tools, space, creative freedom and cross-sector work structures needed to unlock the creative and commercial potential of immersive production. Partnered with Sir Lenny Henry, Destiny Ekaragha, Alex Garland, Georgina Campbell and others, we will lead a charge of UK creative talent into immersive media that embeds diversity into the development of the medium across writing, directing, producing, performance, cinematography, editing, animation and VFX. We will deliver training in action, providing opportunities for creatives to learn by taking part in immersive productions that tackle key creative and technical challenges. We link this to R&D in business-model innovation and audience insight that combines electronic engineering, neuro- and cognitive psychology with long-sighted ethnography to provide a catalyst for the growth of the creative industries.

    We de-risk immersive production through four workstreams that provide £1.25m for collaborative projects with immediate impact:

    1. Embedded Placements: promoting talent development and commercial vitality by enabling placements of screen-sector talent on immersive productions for cross-sector innovation and work-based learning;
    2. Collaborative Co-productions: co-producing immersive experiences that tackle sector-wide creative and technological barriers to growth, upskilling core screen-sector workers via hands-on learning on live productions that build a cross-sector talent pool;
    3. Experimental Labs: R&D-based productions that expose core screen-sector talent to immersive media and push technological and creative boundaries;
    4. Developmental Training: training the next generation of immersive storytellers and trainers, cascading knowledge to HEIs, FECs and industry across the country.

    We are unique in our industry credibility and relationships. The NFTS was awarded the BAFTA for Outstanding British Contribution to Cinema in 2018, and it is the only institution in the UK where industry already invests over £1.5m annually in CPD-level training courses, giving us access to, and partnerships with, internationally renowned on- and off-screen talent. Our partners are world leaders in how story and new technologies combine to produce compelling and novel immersive experiences, including immersive theatre (Punchdrunk), VR (Rewind), gaming (Sony IE), film (BFI), television (Sky VR), advertising (McCann), visual effects (The Third Floor, Double Negative) and performance capture (Imaginarium). We bring them together with advanced original equipment manufacturers (Microsoft, Plexus) and sector experts (Digital Catapult) to place story and technology in tandem and to explore, research, train and develop cross-sector storytelling talent and business models.

    SFA will create over 60 ICE productions and generate nearly 1,000 direct beneficiaries. It will cascade benefits, insights and opportunities via collaborations with regional partners, including the NFTS's base in Scotland alongside TRC Media and the UK Games Fund, as well as access to nationwide labs via Digital Catapult and co-production bases in Manchester (McCann) and Yorkshire (BFI). It also gains significant advantage from the economies of scale and access to talent achievable from our Gateway Cluster base, with its easy flows of talent and work in and out of London. StoryFutures Academy can make the UK a world leader in immersive media because it has unmatched access to mainstream creative screen-sector talent, companies and technologies, allowing it to translate experimentation, training and R&D into tangible economic and creative ROI for the whole of UK plc.
