AES 61st Conference: Papers

Audio for Games

 


 

Papers Session 1 - Spatial Audio Rendering
Preliminary Investigations into Binaural Cue Enhancement for Height Perception in Transaural Systems
Thomas McKenzie and Gavin Kearney
In this paper, we investigate the perception of height cues in motion-tracked transaural reproduction. Ten subjects were asked to localise sound sources presented at height in a two-loudspeaker head-tracked transaural reproduction setup. Height-related features of the head-related transfer function (HRTF) were tested using generic HRTFs from a KEMAR binaural mannequin, with additional spatial filtering applied by modelling and then exaggerating the HRTF cues associated with height perception. Results illustrate the applicability of HRTF cue exaggeration in transaural systems for height perception with non-individualised HRTFs.
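The cue-exaggeration step lends itself to a brief illustration. The following is a minimal sketch, not the authors' implementation: deviations of an HRIR's magnitude spectrum from a smoothed baseline, which include the pinna notches associated with height perception, are scaled up while the phase is preserved. The smoothing width and exaggeration factor are illustrative.

```python
# Hypothetical sketch of HRTF spectral-cue exaggeration (not the paper's exact
# method): spectral peaks/notches are boosted relative to a smoothed baseline.
import numpy as np
from scipy.ndimage import uniform_filter1d

def exaggerate_hrtf_cues(hrir, exaggeration=2.0, smooth_bins=32):
    """Scale spectral peaks/notches of an HRIR by `exaggeration`."""
    H = np.fft.rfft(hrir)
    mag_db = 20 * np.log10(np.abs(H) + 1e-12)
    baseline = uniform_filter1d(mag_db, size=smooth_bins)   # smoothed spectrum
    boosted = baseline + exaggeration * (mag_db - baseline) # stretch deviations
    H_new = 10 ** (boosted / 20) * np.exp(1j * np.angle(H)) # keep original phase
    return np.fft.irfft(H_new, n=len(hrir))
```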
An Algorithmic Approach to the Manipulation of B-Format Impulse Responses for Sound Source Rotation
Michael Lovedee-Turner, Jude Brereton, and Damian Murphy
In many video games, sound sources and the player are constantly moving through the environment, establishing the requirement for dynamic reproduction of the acoustic conditions in an enclosed or semi-enclosed space. This paper presents an algorithm for the rotational movement of a sound source at a fixed position in space. The method developed alters the amplitude of individual discrete reflections based on their point of origin and the directivity pattern of the source. Initial analysis of a B-Format impulse response of an enclosed or semi-enclosed space is required in order to locate individual reflections, their directions of arrival, and their points of origin. Intensity vector analysis is used to calculate the direction of arrival, circular variance and local maxima to locate individual discrete reflections, and ray tracing to retrace the detected reflections. The room acoustic parameters of the manipulated impulse responses have been objectively measured and compared against reference impulse responses. Initial testing of the manipulated impulse responses indicates that the algorithm shows promise but requires refinement, namely smaller intervals for the source directivity measurements and further investigation into processing of the diffuse field.
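The intensity-vector analysis mentioned in the abstract can be sketched compactly. The snippet below is a minimal illustration (assuming FuMa-style B-format channels W, X, Y, Z), not the authors' full reflection-detection pipeline; temporal smoothing and the circular-variance test are omitted.

```python
# Minimal direction-of-arrival sketch from a B-format impulse response: the
# instantaneous intensity vector is the product of the pressure signal W with
# the particle-velocity components X, Y, Z; its direction gives the arrival angle.
import numpy as np

def doa_from_bformat(W, X, Y, Z):
    """Per-sample azimuth/elevation (radians) from B-format channels."""
    Ix, Iy, Iz = W * X, W * Y, W * Z          # instantaneous intensity vector
    azimuth = np.arctan2(Iy, Ix)
    elevation = np.arctan2(Iz, np.sqrt(Ix**2 + Iy**2))
    return azimuth, elevation
```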
Papers Session 2 - Audio Content and Serious Games
Audio Commons: Bringing Creative Commons Audio Content to the Creative Industries
Frederic Font, Tim Brookes, George Fazekas, Martin Guerber, Amaury La Burthe, David Plans, Mark D. Plumbley, Meir Shaashua, Wenwu Wang, and Xavier Serra
Significant amounts of user-generated audio content, such as sound effects, musical samples and music pieces, are uploaded to online repositories and made available under open licenses. Moreover, a constantly increasing amount of multimedia content, originally released under traditional licenses, is becoming public domain as its copyright expires. Nevertheless, the creative industries make little use of this content in their media productions. There is still a lack of familiarity with and understanding of the legal context of this open content, but there are also problems related to its accessibility. A large proportion of this content remains unreachable, either because it is not published online or because it is not well organised and annotated. In this paper we present the Audio Commons Initiative, which is aimed at promoting the use of open audio content and at developing technologies to support the ecosystem composed of content repositories, production tools and users. These technologies should enable the reuse of this audio material, facilitating its integration into the production workflows used by the creative industries. This is a position paper in which we describe the core ideas behind this initiative and outline the ways in which we plan to address the challenges it poses.
Safe and Sound Drive: Sound-Based Gamification of User Interfaces in Cars
Arne Nykänen, André Lundkvist, Stefan Lindberg, and Mariana Lopez
The Safe and Sound Drive project concerns the design of an audio-only serious game for cars that will help drivers improve their eco-driving skills, lower fuel consumption, and encourage safe and environmentally friendly approaches to driving. Methods and procedures for the design of sounds for audio-only user interfaces are reviewed and discussed, and design work and preliminary results from user studies of prototypes of the audio interface are presented. Contextual Inquiry interviews with three participants using the audio interface in a car while driving on a test track showed that opinions about beeps and audio signals vary among subjects. Music- and podcast-based content was generally well received. Alteration of media content, e.g. by actively adjusting BPM, volume, spectral balance, or the music mix, could provide working mechanisms for delivering game-related cues to the driver.
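By way of illustration only, the final point might be realised as a simple mapping from a driving score to playback parameters; the function name, parameter names, and ranges below are hypothetical and not taken from the project.

```python
# Illustrative only (not from the Safe and Sound Drive project): one way
# game-related cues could be fed back through the media player, scaling
# playback volume and tempo with an eco-driving score in [0, 1].
def media_cues(eco_score):
    volume = 0.5 + 0.5 * eco_score        # quieter playback for poor driving
    tempo_scale = 0.9 + 0.2 * eco_score   # slightly slower BPM as a cue
    return {"volume": volume, "tempo_scale": tempo_scale}
```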
Papers Session 3 - Binaural Sound for VR
Lateral Listener Movement on the Horizontal Plane: Sensing Motion Through Binaural Simulation
Matthew Boerum, Bryan Martin, Richard King, George Massenburg, Dave Benson, and Will Howie
An experiment was conducted to better understand first-person motion as perceived by a listener moving between two virtual sound sources in an auditory virtual environment (AVE). It was hypothesized that audio simulations using binaural cross-fading between two separate sound source locations could produce a sensation of motion for the listener equivalent to that of real-world motion. To test the hypothesis, a motion apparatus was designed to move a head and torso simulator (HATS) between two matched loudspeaker locations while recording various stimulus signals (music, pink noise, and speech) within a semi-anechoic chamber. Synchronized simulations were then created and referenced to video. In two separate, double-blind MUSHRA-style listening tests (with and without visual reference), 61 trained binaural listeners evaluated the sensation of motion among real and simulated conditions. Results showed that listeners rated the simulation as presenting the greatest sensation of motion among all test conditions.
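The cross-fading technique under test can be illustrated briefly. The sketch below is one plausible reading of the method, not the authors' renderer: the dry signal is convolved with head-related impulse response (HRIR) pairs for the two positions, and an equal-power cross-fade between the two binaural renders simulates the movement.

```python
# Minimal sketch of binaural cross-fading between two source positions
# (assumed HRIR pairs; the paper's actual rendering chain is not specified).
import numpy as np
from scipy.signal import fftconvolve

def crossfade_motion(dry, hrir_a, hrir_b):
    """dry: mono signal; hrir_a/hrir_b: (taps, 2) HRIR pairs for two positions."""
    render = lambda h: np.stack([fftconvolve(dry, h[:, c]) for c in (0, 1)], axis=1)
    a, b = render(hrir_a), render(hrir_b)
    t = np.linspace(0.0, 1.0, len(a))[:, None]           # motion progress 0 -> 1
    return np.cos(t * np.pi / 2) * a + np.sin(t * np.pi / 2) * b  # equal power
```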
Ear Shape Modeling for 3D Audio and Acoustic Virtual Reality: The Shape-Based Average HRTF
Shoken Kaneko, Tsukasa Suenaga, Mai Fujiwara, Kazuya Kumehara, Futoshi Shirakihara, and Satoshi Sekine
In this paper, we present a method for modeling human ear shapes and, in particular, for obtaining a generic non-individualized head-related transfer function (HRTF) based on the arithmetic mean of human ear shapes. The shape-based average HRTF is calculated from this average human ear shape with the boundary element method (BEM). The obtained average HRTF is evaluated in subjective experiments, revealing improved localization precision over an HRTF calculated from the shape of a mannequin head. Our approach does not require any measurement of the listener's HRTFs or selection of a fitting HRTF from a predefined database, and thus it can be practically utilized in any 3D audio or acoustic virtual reality application that makes use of HRTFs, such as virtual auditory displays or virtual 3D audio rendering in 3D gaming.
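The shape-averaging step itself is simple once correspondence is established. Below is a minimal sketch, assuming the ear meshes have already been registered into vertex-wise correspondence; the paper's registration and BEM simulation stages are not reproduced here.

```python
# Hedged sketch of the arithmetic-mean ear shape: assumes vertex i corresponds
# across all subjects' meshes (registration is done beforehand, not shown).
import numpy as np

def average_ear_shape(meshes):
    """meshes: array of shape (n_subjects, n_vertices, 3), in correspondence."""
    return np.mean(np.asarray(meshes), axis=0)  # arithmetic mean ear shape
```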
Papers Session 4 - Synthesis and Sound Design
Modal Synthesis of Weapon Sounds
Lucas Mengual, David Moffat, and Joshua D. Reiss
Sound synthesis can be used as an effective tool in sound design. This paper presents an interactive model that synthesizes high-quality, impact-based combat weapon and gunfire sound effects. A procedural audio approach was taken in building the model. The model was devised by extracting the frequency peaks of the sound source. Sound variations were then created in real time using additive synthesis and amplitude envelope generation. A subtractive method was implemented to recreate the signal envelope and residual background noise. Existing work is improved upon through the use of procedural audio methodologies and the application of audio effects. Finally, a perceptual evaluation was undertaken by comparing the synthesis engine to some of the analyzed recorded samples. In 4 out of 7 cases, the synthesis engine generated sounds that were indistinguishable, in terms of perceived realism, from recorded samples.
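The analysis-resynthesis idea can be illustrated with a short sketch. This is not the paper's engine; it shows the general modal approach, with extracted frequency peaks resynthesised as exponentially decaying sinusoids and filtered noise standing in for the residual. All parameter values are illustrative.

```python
# Illustrative modal/additive impact synthesis: decaying sinusoids for the
# analysed frequency peaks plus a low-passed noise burst for the residual.
import numpy as np
from scipy.signal import butter, lfilter

def synth_impact(freqs, amps, decays, dur=0.5, fs=44100, noise_amp=0.05):
    t = np.arange(int(dur * fs)) / fs
    modes = sum(a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
                for f, a, d in zip(freqs, amps, decays))
    b, a = butter(2, 4000 / (fs / 2))                # subtractive noise residual
    noise = lfilter(b, a, np.random.randn(len(t))) * noise_amp * np.exp(-20 * t)
    return modes + noise
```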
Feature-Based Impact Sound Synthesis of Rigid Bodies Using Linear Modal Analysis for Virtual Reality Applications
Muhammad Imran and Jin Yong Jeon
This paper investigates an approach for synthesizing the sounds of rigid-body interactions using linear modal synthesis (LMS). We propose a technique based on feature extraction from a single recorded audio clip to estimate perceptually satisfactory material parameters of virtual objects for real-time sound rendering. In this study, the significant features of the recorded audio are extracted by computing a power spectrogram based on short-time Fourier transform analysis with an optimal window function. Based on these reference features, material parameters of intrinsic quality are computed for interactive virtual objects in graphical environments. A tetrahedral finite element method (FEM) is employed to perform eigenvalue decomposition during the modal analysis process. Residual compensation is also implemented to optimize the perceptual differences between the synthesized and the real sounds, and to include the non-harmonic components in the synthesized audio in order to achieve perceptually high-quality sound. Furthermore, the parameters computed for objects of one geometry can be transferred to different geometries and shapes of the same material, with the synthesized sound varying as the shape of the object changes. The results of the estimated parameters as well as a comparison of the real and synthesized sounds are presented. Potential applications of our methodology include the synthesis of real-time contact sound events for games and interactive virtual graphical animations, and the provision of extended authoring capabilities.
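The feature-extraction step can be sketched as follows. This is a hedged illustration rather than the authors' method: modal frequencies are taken as peaks of the time-averaged power spectrogram, and a per-mode decay rate is fitted from the log-energy slope of each peak bin. Window and peak-picking choices are arbitrary.

```python
# Illustrative modal-parameter estimation from one recorded clip: pick spectral
# peaks of the STFT power spectrogram, then fit a decay rate per peak bin.
import numpy as np
from scipy.signal import stft, find_peaks

def estimate_modes(clip, fs, n_modes=10):
    f, t, Z = stft(clip, fs, nperseg=2048)
    power = np.abs(Z) ** 2
    spectrum = power.mean(axis=1)                     # time-averaged spectrum
    peaks, _ = find_peaks(spectrum, distance=8)
    top = peaks[np.argsort(spectrum[peaks])[-n_modes:]]
    # decay rate = negative slope of log energy over time in each peak bin
    decays = [-np.polyfit(t, np.log(power[k] + 1e-12), 1)[0] for k in top]
    return f[top], decays
```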
A Synthesis Model for Mammalian Vocalisation Sound Effects
William Wilkinson and Joshua D. Reiss
In this paper, potential synthesis techniques for mammalian vocalisation sound effects are analysed. Physically-inspired synthesis models are devised based on human speech synthesis techniques and research into the biology of the mammalian vocal system. The benefits and challenges of physically-inspired synthesis models are assessed alongside a signal-based alternative which recreates the perceptual aspects of the signal through subtractive synthesis. Nonlinear aspects of mammalian vocalisation are recreated using frequency modulation techniques, and linear prediction is used to map mammalian vocal tract configurations to waveguide filter coefficients. It is shown through the use of subjective listening tests that such models can be effective in reproducing harsh, spectrally dense sounds such as a lion's roar, and can result in life-like articulation.
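The frequency-modulation idea can be shown in a few lines. The sketch below is illustrative only: it produces the sideband-rich roughness associated with nonlinear vocalisation by modulating a carrier at a subharmonic rate, and does not include the LPC-to-waveguide vocal tract stage described in the abstract.

```python
# Illustrative FM "growl" source (parameter values are not from the paper):
# a subharmonic modulator adds dense sidebands around the fundamental.
import numpy as np

def fm_growl(f0=110.0, mod_ratio=0.5, index=3.0, dur=1.0, fs=44100):
    t = np.arange(int(dur * fs)) / fs
    modulator = np.sin(2 * np.pi * mod_ratio * f0 * t)   # subharmonic modulator
    return np.sin(2 * np.pi * f0 * t + index * modulator)
```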