Electronic Arts (United Kingdom)

6 Projects
  • Funder: UK Research and Innovation. Project Code: EP/G062412/1
    Funder Contribution: 24,385 GBP

    We propose to organise and administer a Computer Animation competition open to all UK schoolchildren aged 7-19. The competition will run January-May 2009. Entrants, who must be registered by their teacher, will be invited to create and submit a short computer graphics animation. The animation must have some link to one or more topics from the National Curriculum. There will be four age categories (KS2, KS3, KS4, and 16-plus), and entries may be submitted by individuals or teams.

    In the 2008 competition, we specified that animations must be created using Alice, freely available software from Carnegie Mellon University. Its creators describe it as "educational software that teaches students computer programming in a 3D environment". Alice features a simple drag-and-drop interface which allows the rapid creation of animations of 3D figures in landscapes. We found that Alice is more suitable for younger children, so in order to attract more entries from the upper age groups we propose to introduce two further software packages which may be used to create the animations: Flash and Scratch. Flash is popular professional multimedia software from Adobe Systems. Software to view Flash animations is freely available for all web browsers, but software to author Flash animations must be purchased from Adobe Systems (there are several free Flash authoring systems from third parties, but in our experience these are not robust). Scratch is freely available interactive multimedia educational software from the Massachusetts Institute of Technology, intended to teach basic computer programming concepts through simple manipulations of graphics, images and shapes.

    Before the closing date, entrants submit their animations directly to The University of Manchester using the web-based submission system already in place following the 2008 competition. The project team will then pre-judge entries and create a shortlist to be passed on to a panel of six external judges, who will select overall award winners in the different age and team/individual categories. There will also be a number of special awards, such as "Best use of music" and "Best link to the National Curriculum", which serve to spread awards around a larger number of entrants. The judges will be selected from contacts we already have in the fields of Education, Communication, Art, Computer Graphics, Computer Game Development, and Public Engagement.

    Prizes (laptops, software, tokens and certificates) will be awarded at a public Film Show and Awards Ceremony, to be held at MOSI in June 2009. Attendance at this event will be free for all interested school parties (subject to the venue's capacity), and prize winners will attend a special Winners' Lunch. Following the Awards Ceremony, winning entries will be posted on a public website, and press releases will be circulated to the media and to relevant schools contacts across the UK.

  • Funder: UK Research and Innovation. Project Code: EP/V002554/1
    Funder Contribution: 407,334 GBP

    We spend the majority of our lives indoors. Within enclosed spaces, sound is reflected numerous times, leading to reverberation. We are accustomed to perceiving reverberation: we unconsciously use it to navigate the space, and we notice when it is absent. Similarly, our electronic devices, such as laptops, TVs or smart home devices, are exposed to reverberation and need to take its presence into account. Being able to predict, synthesise, and control reverberation is therefore important. This is done using room acoustic models.

    Existing room acoustic models suffer from two main limitations. First, they were originally developed from very different starting points and for very different purposes, which has led to a highly fragmented research field where advances in one area do not translate to advances in other areas, slowing down research. Second, each model has a specific accuracy and a specific computational complexity: some very accurate models take several days to run (physical models), while others run in real time but with low accuracy and only aim to create a pleasing reverberant sound (perceptual models). Thus, no single model can scale continuously from one extreme to the other.

    This project will overcome both limitations by defining a novel, unifying room acoustic model that combines the appealing properties of all main types of models and can scale on demand from a lightweight perceptual model to a full-scale physical model. Such a SCalable Room Acoustic Model (SCReAM) will bring benefits in many applications, ranging from consumer electronics and communications to computer games, immersive media, and architectural acoustics. The model will be able to adapt in real time, enabling end users to get the best possible auditory experience allowed by the available computing resources. Audio software developers will not need to update their development chains once more powerful machines become available, thus reducing costs. Electronic equipment, such as hands-free devices, smart loudspeakers, and sound reinforcement systems, will be able to build a more flexible internal representation of room acoustics, allowing it to reduce unwanted echoes, remove acoustic feedback, and/or improve the tonal balance of reproduced sound.

    The main hypothesis of the project is that a connection exists between physical models and perceptual models based on so-called delay networks, and that this connection can be leveraged to develop the sought-after unifying and scalable model. The research will be conducted at the University of Surrey with industrial support from Sonos (audio consumer electronics), Electronic Arts (computer games), Audio Software Development Limited (computer games audio consultancy), and Adrian James Acoustics (acoustics consultancy).
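
    The hypothesis above refers to perceptual reverberators built from delay networks. As a point of reference only (this is not the SCReAM model, and every delay length and gain below is an illustrative assumption), the sketch below shows one common form of such a model, a feedback delay network (FDN), in Python/NumPy.

```python
# Minimal feedback delay network (FDN) reverberator sketch.
# Illustrative only: the delay lengths and feedback gain g are
# assumptions, not values taken from the SCReAM project.
import numpy as np

def fdn_reverb(x, sr=48000, delays_ms=(29.7, 37.1, 41.1, 43.7), g=0.85):
    """Run a mono signal x through a 4-line FDN and return the wet output."""
    delays = [int(sr * d / 1000) for d in delays_ms]
    n = len(delays)
    # Householder matrix: a lossless mix of the delay-line outputs.
    feedback = np.eye(n) - (2.0 / n) * np.ones((n, n))
    lines = [np.zeros(d) for d in delays]   # circular delay lines
    idx = [0] * n                           # read/write positions
    y = np.zeros(len(x))
    for t in range(len(x)):
        outs = np.array([lines[i][idx[i]] for i in range(n)])  # oldest samples
        y[t] = outs.sum()                   # wet output at time t
        fb = g * (feedback @ outs)          # mixed, attenuated feedback
        for i in range(n):
            lines[i][idx[i]] = x[t] + fb[i]        # write input + feedback
            idx[i] = (idx[i] + 1) % delays[i]      # advance circular pointer
    return y
```

    A structure like this runs in real time but only approximates a room perceptually, whereas the physical models mentioned above are far more accurate and far more expensive; bridging that gap is what the project's scalable model is intended to do.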

  • Funder: UK Research and Innovation. Project Code: EP/L000539/1
    Funder Contribution: 5,415,200 GBP

    3D sound can offer listeners the experience of "being there" at a live event, such as the Proms or the Olympic 100m, but currently requires highly controlled listening spaces and loudspeaker setups. The goal of S3A is to realise practical 3D audio for the general public, enabling immersive experiences at home or on the move. Virtually the whole of the UK population consumes audio. S3A aims to unlock the creative potential of 3D sound and deliver to listeners a step change in immersive experiences. This requires a radically new, listener-centred approach to audio that enables 3D sound production to adapt dynamically to the listeners' environment.

    Achieving immersive audio experiences in uncontrolled living spaces presents a significant research challenge. It requires major advances in our understanding of the perception of spatial audio, together with new representations of audio and the signal processing that allows content creation and perceptually accurate reproduction. Existing audio production formats (stereo, 5.1) and those proposed for future cinema spatial audio (24, 128) are channel-based, requiring specific controlled loudspeaker arrangements that are simply not practical for the majority of home listeners.

    S3A will pioneer a novel object-based methodology for audio signal processing that allows flexible production and reproduction in real spaces. The reproduction will be adaptive to loudspeaker configuration, room acoustics and listener locations. The fields of audio and visual 3D scene understanding will be brought together to identify and model audio-visual objects in complex real scenes. Audio-visual objects are sound sources or events with known spatial properties of shape and location over time, e.g. a football being kicked, a musical instrument being played, or the crowd chanting at a football match. Object-based representation will transform audio production from existing channel-based signal mixing (stereo, 5.1, 22.2) to spatial control of isolated sound sources and events. This will realise the creative potential of 3D sound, enabling intelligent user-centred content production, transmission and reproduction of 3D audio content in platform-independent formats. Object-based audio will allow flexible delivery (broadcast, IP and mobile) and adaptive reproduction of 3D sound on existing and new digital devices.
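
    To make the contrast with channel-based mixing concrete, here is a minimal sketch (an illustration only, not the S3A production chain; the object fields, loudspeaker layouts and panning law are all assumptions): each object carries its own signal plus spatial metadata, and the renderer maps it onto whatever loudspeakers are actually present at playback time.

```python
# Object-based audio sketch: objects carry signal + spatial metadata,
# and rendering to a concrete loudspeaker layout happens at playback.
# Illustrative assumptions: static azimuth per object, naive cosine panning.
import numpy as np
from dataclasses import dataclass

@dataclass
class AudioObject:
    signal: np.ndarray   # mono samples for this source or event
    azimuth_deg: float   # where the object should appear in the horizontal plane

def render(objects, speaker_azimuths_deg):
    """Render a list of AudioObjects to an arbitrary horizontal layout."""
    n_smp = max(len(o.signal) for o in objects)
    out = np.zeros((len(speaker_azimuths_deg), n_smp))
    for obj in objects:
        # Each speaker's gain falls off with angular distance from the object.
        diff = np.radians(np.asarray(speaker_azimuths_deg, float) - obj.azimuth_deg)
        gains = np.clip(np.cos(diff), 0.0, None)
        gains /= np.linalg.norm(gains) + 1e-12   # keep overall level roughly constant
        out[:, :len(obj.signal)] += np.outer(gains, obj.signal)
    return out

# The same objects render to stereo or to an ad-hoc five-speaker layout:
#   render(objs, [-30, 30])
#   render(objs, [-110, -30, 0, 30, 110])
```

    The point is that nothing about the content is tied to a loudspeaker count: the adaptation to loudspeaker configuration, room acoustics and listener position described above happens in the renderer, not in the mix.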

  • Funder: UK Research and Innovation. Project Code: EP/L016540/1
    Funder Contribution: 4,641,600 GBP

    EPSRC Centre for Doctoral Training in Digital Entertainment (University of Bath and Bournemouth University). The Centre for Digital Entertainment (CDE) supports innovative research projects in digital media for the games, animation, visual effects, simulation, cultural and healthcare industries. Because the CDE is an Industrial Doctorate Centre, its students spend one year being trained at the university and then complete three years of research embedded in a company. To reflect the practical nature of their research, they submit for an Engineering Doctorate degree.

    Digital media companies are major contributors to the UK economy. They are highly respected internationally and find their services in great demand. To meet this demand they need to employ people with the highest technical skills and the imagination to use those skills to a practical end. The sector has become so successful that the shortage of such people now prevents it from expanding further. Our Doctoral Training Centre is already addressing that shortage and has become the national focus for this kind of training. We do this by combining core taught material with an exciting and unusual range of activities designed to challenge and extend the students' knowledge beyond the usual boundaries. By working closely with companies we can offer practical challenges which really push the limits of what can be done with digital media and devices, and by the people using them.

    We work with many companies and 40-50 students at any one time. As a result we are able to support the group in ways which would not be possible for individual students: we can place several students in one company, we can send teams to compete in programming competitions, and we can send groups to international training sessions.

    This proposal is to extend and expand this successful Centre. Major enhancements will include the use of internationally leading industry experts to teach Master Classes, closer cooperation between company and university researchers, business training led by businesses, and options for international placements in an international industry. We will replace the entire first-year teaching with a Digital Media programme aimed specifically at these students as a group. The graduates from this Centre will be the technical leaders of the next-generation revolution in this fast-moving, demanding and exciting industry.

  • Funder: UK Research and Innovation. Project Code: EP/G037159/1
    Funder Contribution: 5,649,580 GBP

    In redeveloping the EngD VEIV centre, we will be focussing on three themes:

    - Vision & Imaging, covering computer-based interpretation of images: for example, object tracking in real-time video, or face detection and surface appearance capture. UCL now has broad expertise in medical imaging (see the description of CMIC) and also in tracking and interpretation of images (e.g. the expertise of Julier and Prince, who are on the management team). We have previously supported several EngD projects in this area, e.g. Philips (structure from MRI), Sortex (object detection) and Bodymetrics (body measurement from scanning data), where the innovation has been in higher-level interpretation of imaging data and the automatic derivation of measurements. Two other projects highlight the rapidly developing imaging technology, with high-density sensors and high dynamic range imagery (e.g. BBC and Framestore). We have outline support from several companies for continuing in this area.

    - Media & Interfaces, covering real-time graphics and interactive interfaces: for example, the use of spatially immersive interfaces, or computer games technology. We have a growing relationship with a number of key games companies (EA, Sony, Eidos, Rebellion), whose concern or interest lies in the management of large sets of assets for complex games software. There is interest in tools for developing imagery (e.g. Arthropics, Geomerics). We also have interest from IBM and BT in online 3D social spaces. A relatively recent development that we plan to exploit is the combination of real-time tracking, real-time graphics and ubiquitous sensing to create augmented reality systems; interest has been expressed in this area by Selex and BAe. There is also growing use of these technologies in the digital heritage area, in which we have expertise and want to expand.

    - Visualisation & Design, covering the generation and visualisation of computer models in support of decision-making processes: for example, the visualisation of geographic models, or generative modelling for architectural design. Great advances have been made in this area recently, with the popularity of online GIS tools such as Google Earth tied in to web services, and the acceptance of the role of IT in complex design processes. We would highlight the areas of parameterised geometry (e.g. with Fosters and the ComplexMatters spin-out), the study of pedestrian movements (with Buro Happold, Node Architects), visualisation of GIS data (e.g. ThinkLondon, Arup Geotechnical), and medical visualisation.

    These themes will be supported by broadening engagement with other centres around UCL, including the UCL Interaction Centre, the Centre for Medical Image Computing, the Chorley Institute and the Centre for Computational Science.

    The main value of the centre is that visual engineering requires cross-disciplinary training. This is possible within a normal PhD, but the centre model allows inter-disciplinary training to embed each student's focussed research in a larger context. The centre model provides a programme structure and forums to ensure that opportunities and mechanisms for cross-disciplinary working are available. The centre also plays an essential role in providing some core training, though by its nature the programme must incorporate modules of teaching from a wide variety of departments that would otherwise be difficult to justify.
