
Programme

Friday 13th February

 

Paper Session 7: Real Time Synthesis

Session Chair:
Josh Reiss, Centre for Digital Music, Queen Mary University of London, UK

 

7-1: Design and Evaluation of Physically Inspired Models of Sound Effects in Computer Games

Niels Böttcher, Stefania Serafin, Aalborg University, Ballerup, Denmark

Historically, one of the biggest problems facing game audio has been the endless repetition of sounds. From sound bites that play constantly, to repeated sound effects, to a limited music selection that loops endlessly, players have had every reason to be annoyed at game audio. Despite increased memory budgets on modern consoles, this problem is still relevant. This paper examines the pros and cons of various approaches used in game audio, as well as the technologies and research that might eventually be applied to the field.
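Although this listing cannot reproduce the paper's own models, a minimal sketch of modal synthesis, the standard physically inspired approach to impact sounds, shows how procedural generation avoids sample repetition: each strike excites a set of damped sinusoidal modes, and varying the strike parameters makes every rendering slightly different. The mode frequencies, dampings, and gains below are illustrative placeholders, not values from the paper.

    import numpy as np

    SR = 44100  # sample rate in Hz

    def modal_impact(freqs, dampings, gains, strike_velocity=1.0, dur=0.5):
        """Render an impact as a sum of exponentially decaying sinusoids."""
        t = np.arange(int(SR * dur)) / SR
        out = np.zeros_like(t)
        for f, d, g in zip(freqs, dampings, gains):
            out += g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
        return strike_velocity * out  # harder strikes excite the modes more

    # Three modes of a small metallic object (illustrative values only).
    sound = modal_impact(freqs=[523.0, 1307.0, 2874.0],
                         dampings=[8.0, 14.0, 25.0],
                         gains=[1.0, 0.5, 0.25])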

 

7-2: Game Audio Lab - An Architectural Framework for Nonlinear Audio in Games

Sander Huiberts, Richard van Tol, Kees Went, Utrecht School of the Arts, Utrecht, The Netherlands

Nonlinear and adaptive systems for sound and music in games are gaining popularity due to their potential to enhance the game experience. This paper presents the Game Audio Lab: a framework for academic purposes that enables research on and rapid prototyping of nonlinear sound for games. It allows researchers and designers to map composite variables and adapt sound and music design in real time during active gameplay.
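As a rough illustration of the composite-variable mapping described above (a sketch under assumed names, not the Game Audio Lab API), raw game-state values can be folded into one composite variable each frame and remapped onto audio controls:

    class MusicBus:
        """Stand-in for an adaptive music system (hypothetical API)."""
        def set_layer_gain(self, layer, gain):
            print(f"layer {layer} -> gain {gain:.2f}")
        def set_filter_cutoff(self, hz):
            print(f"filter cutoff -> {hz:.0f} Hz")

    def tension(enemy_distance, player_health):
        """Fold two raw game-state values into one composite in [0, 1]."""
        proximity = max(0.0, 1.0 - enemy_distance / 50.0)  # closer -> higher
        vulnerability = 1.0 - player_health                # hurt -> higher
        return min(1.0, 0.6 * proximity + 0.4 * vulnerability)

    def update_audio(music, state):
        """Called once per frame during active gameplay."""
        t = tension(state["enemy_distance"], state["player_health"])
        music.set_layer_gain("percussion", t)        # adaptive music layer
        music.set_filter_cutoff(400.0 + 8000.0 * t)  # brighter under tension

    update_audio(MusicBus(), {"enemy_distance": 12.0, "player_health": 0.4})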

 

7-3: Retargeting Example Sounds to Interactive Physics-Driven Animations

Cécile Picard, Nicolas Tsingos, INRIA Sophia-Antipolis, Sophia-Antipolis, France; François Faure, INRIA Rhône-Alpes, Grenoble, France and Université de Grenoble and CNRS, Grenoble, France

This paper proposes a new method to generate audio in the context of interactive animations driven by a physics engine. Our approach aims at bridging the gap between direct playback of audio recordings and physically based synthesis by retargeting audio grains extracted from the recordings according to the output of a physics engine. In an off-line analysis task, we automatically segment audio recordings into atomic grains. The segmentation depends on the type of contact event, and we distinguish between impulsive events, e.g., impacts or breaking sounds, and continuous events, e.g., rolling or sliding sounds. We segment recordings of continuous events into sinusoidal and transient components, which we encode separately. A technique similar to matching pursuit is used to represent each original recording as a compact series of audio grains. During interactive animations, the grains are triggered individually or in sequence according to parameters reported from the physics engine and/or user-defined procedures. A first application is simply to reduce the size of the original audio assets. Above all, our technique allows us to synthesize non-repetitive sounding events and provides extended authoring capabilities.
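A minimal sketch of the retargeting idea (the grain bank, annotations, and event fields below are hypothetical stand-ins for the paper's pipeline): impulsive events select the grain whose recorded contact energy best matches the physics engine's report, while continuous events sequence grains at a rate tied to the reported speed.

    import random

    class GrainBank:
        """Grains segmented off-line from recordings, annotated with the
        contact parameters that produced them (placeholder data)."""
        def __init__(self, impact_grains, rolling_grains):
            self.impact_grains = impact_grains    # list of (energy, samples)
            self.rolling_grains = rolling_grains  # grains for continuous events

        def on_impact(self, impact_energy):
            """Pick the grain whose energy best matches the reported event,
            choosing randomly among near ties to avoid repetition."""
            ranked = sorted(self.impact_grains,
                            key=lambda g: abs(g[0] - impact_energy))
            return random.choice(ranked[:3])[1]

        def on_rolling(self, speed):
            """Sequence grains at a rate tied to the sliding speed."""
            count = max(1, int(speed * 10))  # grains per tick (ad hoc)
            return [random.choice(self.rolling_grains) for _ in range(count)]

    bank = GrainBank(impact_grains=[(0.2, "soft_hit"), (0.8, "hard_hit")],
                     rolling_grains=["roll_a", "roll_b", "roll_c"])
    print(bank.on_impact(0.7))  # physics engine reported a strong contact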

 

Sponsored by

Dolby

audiokinetic     Binari Sonori     Genelec     SCEE Research and Development     SpACE-Net

Please contact [email protected] for sponsorship opportunities or general enquiries.