
56th Conference Program

Wednesday 11th February – Tutorial Day

Fish Science
09:00 - 10:00   Registration and Coffee
10:00 - 10:15   Welcome and Opening
10:15 - 11:15   Game Audio: Past, Present and Future
11:15 - 11:45   Break
11:45 - 12:45   Dynamic Music in Games
12:45 - 14:15   Lunch
14:15 - 15:15   Building a Brand with Audio: Lessons from Rovio
15:15 - 15:45   Break
15:45 - 16:45   Object Audio in Games (Audiokinetic)
16:45 - 17:45   Smart Sound Design Using Modularity and Data Inheritance
17:45 - 21:00   Dolby Reception (offsite - details in conference packs)

Thursday 12th February – Day 2

Fish Science Council
09:00 - 09:30   Registration
09:30 - 10:00   Framework for Enabling Fast Development of Procedural Audio Models
10:00 - 10:30   Paper Session: Education and Training
10:30 - 11:00   Break
11:00 - 12:30   Keynote
12:30 - 14:00   Lunch
14:00 - 15:00   Hearing Ida's Journey - The Sound of Monument Valley / Paper Session: Generative Music Systems
15:00 - 15:30
15:30 - 16:00   Break
16:00 - 17:00   Making an Audio Only Game (Somethin' Else) / Paper Session: Game Music Systems
17:00 - 17:30
17:30 - 21:00   Conference Social Event (Central London pub)

Friday 13th February – Day 3

Fish Science Council
09:00 - 09:30   Registration
09:30 - 10:30   Speech Loudness in Multilingual Games / Fabric Hands-on
10:30 - 11:00   Break
11:00 - 12:30   Paper Session: Spatial Audio
12:30 - 13:30   Lunch
13:30 - 14:00   FMOD Hands-on
14:00 - 15:00   3D Rendering Tutorial
15:00 - 15:30   Break
15:30 - 16:30   Building Audio for VR

 

Game Audio Implementation: Past, Present and Future
Speaker: Scott Selfon
The current game consoles offer incredible technical capabilities in both hardware and software processing, and mobile devices are evolving just as rapidly. But are these platforms really just offering the same thing, only more of it? What are the audio frontiers and barriers now that so many of the restrictions of the past have been eliminated? And with maturing and increasingly sophisticated audio engine solutions, is game sound programming and implementation now a "solved" problem? This talk reflects on the current state of the art in areas ranging from spatial simulation and acoustic modeling to evolving and dynamic mixing, audio as a feedback mechanism, and highly personalized audio experiences. Examples from past and present titles highlight the technical and creative achievements that implementers strive for in making audio not only compelling and realistic but an equal-footing contributor to immersive, engaging, and rewarding gameplay experiences.
Exploring the possibilities of Dynamic Music
Stephan Schütze - The Sound Librarian
"This session uses a game currently in development as the focus to discuss the current possibilities of dynamic music and what the future may hold. Defect SDK is an indy game that has allowed Stephan Schütze to push the potential of modern game audio tools to an extreme level in an attempt to create a deep and responsive dynamic music system.

Using both the game and the audio project, Stephan will explore just what our modern tool sets allow us to create, demonstrate the possibilities for all game projects, and challenge the audience to make more use of the tools that have been created for us.

The Defect music system blurs the line between composition and implementation, allowing for rapid prototyping of musical ideas, easy adaptation and evolution of design changes, and a data-driven system that can change on a beat.
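
As a rough illustration of what "change on a beat" involves, the sketch below quantizes a music-state switch to the next beat boundary. It is a minimal sketch only; the names (MusicState, schedule_transition) are hypothetical, and the Defect SDK's actual system is not public.

```python
from dataclasses import dataclass

@dataclass
class MusicState:
    name: str
    bpm: float

def next_beat_time(now: float, state: MusicState, start_time: float) -> float:
    """Timestamp of the first beat boundary of `state` after `now`."""
    beat_len = 60.0 / state.bpm
    beats_done = int((now - start_time) // beat_len) + 1
    return start_time + beats_done * beat_len

def schedule_transition(now: float, current: MusicState,
                        target: MusicState, start_time: float) -> float:
    """Queue `target` to start on the next beat of `current`. A real
    engine would hand this timestamp to its audio scheduler."""
    t = next_beat_time(now, current, start_time)
    print(f"switch {current.name} -> {target.name} at t={t:.3f}s")
    return t

calm = MusicState("calm", bpm=90.0)
combat = MusicState("combat", bpm=140.0)
schedule_transition(now=12.34, current=calm, target=combat, start_time=0.0)
```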
 
Building an audio brand - case studies from Angry Birds Sounds & Music
Rovio: Ilmari Hakkola, Head of Audio, Rovio Entertainment
A journey through the creation of sounds and music for Angry Birds, seen from the perspective of audio branding. The session traces how the sound has evolved since the first Angry Birds game, as deeper characters, richer storytelling and more ambitious productions were required to take the world of Angry Birds to various media channels and platforms.
 
Object-Based Panning: Rethinking your Audio Mixing Graph
Xavier Buffoni Director, R&D Audiokinetic
"Object-based panning technologies, implemented using an external rendering system in the kernel and/or on a hardware receiver, are perfectly suited for games since games naturally handle objects in virtual 3D worlds. On the other hand, these technologies imply placing audio and positional data end-points upstream from mixing; closer to sound sources than before. This new paradigm has consequences for sound designers and audio programmers, since submixes and routing are often heavily used for multiple purposes.

The lecture will explore these consequences through various examples and scenarios, and will address sound design and system aspects such as dynamic mixing, metering, dynamics, DSP effects and performance. New techniques will be suggested when applicable. Other methods that involve the use of metadata-enriched signals will also be discussed.

This session is intended for sound designers and programmers. Attendees are expected to be familiar with most aspects of professional game audio production and integration.
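
To make the routing difference concrete, here is a minimal sketch contrasting a channel-based submix, which consumes each source's position early and emits a fixed stereo bed, with an object-based stage that forwards position metadata downstream for an external renderer to resolve. All names are hypothetical; this is not Audiokinetic's API.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class AudioObject:
    name: str
    gain: float
    position: Tuple[float, float, float]  # metadata, not baked into audio

def submix_channel_based(objects: List[AudioObject]) -> Dict[str, float]:
    """Traditional graph: positions are consumed here, and only a fixed
    stereo bed leaves the submix (crude linear pan from x for brevity)."""
    bed = {"L": 0.0, "R": 0.0}
    for o in objects:
        x = max(-1.0, min(1.0, o.position[0]))
        bed["L"] += o.gain * (1.0 - x) * 0.5
        bed["R"] += o.gain * (1.0 + x) * 0.5
    return bed  # positional information is gone

def submix_object_based(objects: List[AudioObject]) -> List[AudioObject]:
    """Object-based graph: the 'submix' may still apply gain or DSP, but
    each object and its position survive to the external renderer."""
    return [AudioObject(o.name, o.gain * 0.8, o.position) for o in objects]

objs = [AudioObject("footstep", 1.0, (0.5, 0.0, 1.0)),
        AudioObject("gunshot", 0.9, (-0.7, 0.0, 2.0))]
print(submix_channel_based(objs))
print(submix_object_based(objs))
```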
 
Smart Sound Design Using Modularity and Data Inheritance
Frostbite Team: Martin Loxton, Audio Programmer, Frostbite EA Stockholm Sweden
As the quantity of content required by AAA games continues to grow, new approaches to structuring this content must be considered in order to build your game quickly without compromising on quality due to memory and performance constraints. This presentation will discuss the technologies developed by Frostbite to support rapid content creation using modular sound design coupled with data inheritance, allowing for a clean separation between the modelled behavior and the rendered result. This technique was used extensively to deliver the latest games built on Frostbite, including Battlefield 4, Need for Speed Rivals and, most recently, Dragon Age: Inquisition. Use cases from these games will be discussed to provide concrete examples of this approach.
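
A minimal sketch of the data-inheritance idea, assuming a simple parent-chain merge in which child fields override parent fields. The asset names and fields are invented for illustration and do not reflect Frostbite's actual data model.

```python
def resolve(asset: dict, assets: dict) -> dict:
    """Merge fields along the parent chain; child values win."""
    chain = []
    while asset is not None:
        chain.append(asset)
        asset = assets.get(asset.get("parent"))
    merged = {}
    for a in reversed(chain):  # base first, leaf last
        merged.update({k: v for k, v in a.items() if k != "parent"})
    return merged

assets = {
    "footstep_base":  {"volume_db": -6.0, "max_voices": 4, "bank": "steps/dirt"},
    "footstep_metal": {"parent": "footstep_base", "bank": "steps/metal"},
}
print(resolve(assets["footstep_metal"], assets))
# -> {'volume_db': -6.0, 'max_voices': 4, 'bank': 'steps/metal'}
```

Because only the overridden fields are stored per variant, one base template can fan out into many sounds while behaviour changes to the base propagate automatically.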
 
Creating and using a framework for enabling fast development of Procedural Audio models
Amaury La Burthe - Audiogaming
This presentation focuses on our experiments in creating a procedural audio framework. After various experiences in audio research, interactive audio and post-production, we began exploring the creation of procedural audio models more actively.

The challenges are twofold: on the one hand, we wanted an efficient framework for generating low-level optimized code; on the other, we wanted to develop an approach for designing procedural audio models.

From real-time versions of Pure Data to Python generators, we have experimented with different frameworks and methods, and we'll give some feedback about what worked and what didn't. By creating "industry"-quality audio models, we have also started to standardize an internal way of tackling the difficult problem of creating realistic-sounding procedural audio models. We'll demonstrate the work with as many examples as possible, including real-time audio plug-ins and interactive audio prototypes.
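
For a flavour of what such a model looks like, here is a minimal, self-contained sketch of a classic procedural wind model: white noise through a one-pole low-pass filter whose cutoff is slowly modulated by a gust LFO. It is illustrative only and is not AudioGaming's framework or code.

```python
import math
import random

def wind_model(duration_s: float = 2.0, rate: int = 48000,
               gustiness: float = 0.3) -> list:
    """White noise through a one-pole low-pass whose cutoff is slowly
    modulated by a 0.5 Hz 'gust' LFO."""
    out, y = [], 0.0
    for n in range(int(duration_s * rate)):
        t = n / rate
        gust = 0.5 + 0.5 * math.sin(2.0 * math.pi * 0.5 * t)  # gust LFO
        cutoff = 200.0 + 800.0 * gustiness * gust              # Hz
        a = math.exp(-2.0 * math.pi * cutoff / rate)           # pole coeff
        y = (1.0 - a) * random.uniform(-1.0, 1.0) + a * y      # one-pole LPF
        out.append(y)
    return out

samples = wind_model(duration_s=0.5)
print(f"{len(samples)} samples, peak {max(abs(s) for s in samples):.3f}")
```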
 
Keynote - Giving Your Game an Audio Identity
Joanna Orland - SCEE
As sound designers, our objective is to create a unique and cohesive audio experience integrated into the wider game. This keynote presentation will explore ways to achieve this goal.

Only by having an audio vision can you realize it in game. Methods of conducting research and analysis to define an audio style will be demonstrated using the Develop Award-winning Wonderbook: Book of Spells as an example. Techniques for communicating and collaborating across the various audio disciplines will be discussed, showing how they were put to use in the Moonbot/SCEE collaboration Diggs Nightcrawler. Ideas on using technology for creative purposes, pushing the boundaries of imagination and innovation, will also be examined. We need to think about what we are trying to achieve creatively before we know how technology can help us accomplish it.

Hearing Ida's Journey - The Sound of Monument Valley
Stafford Bawler
Monument Valley's sound designer and composer Stafford Bawler takes an in-depth look at what went into creating the sound of Monument Valley, covering both the creative process and how integration was achieved in Unity using Fabric Audio. The session will also look at some of the lessons learned that were carried forward when creating the audio for Forgotten Shores and Ida's Red Dream.
 
How do you create a fully interactive world with just audio?
Dan Finnegan, Research Engineer, Somethin' Else Sound Directions
"This is exactly what the team at Somethin’ Else have been doing since 2010. In this talk, I will give a developer’s diary on Audio Defence: Zombie Arena, the latest game published by SE. I’ll touch on the challenges we faced and how we designed an action FPS using binaural audio.

This talk will interest engineers, game designers, and researchers in the area of audio engineering and applications. I’ll discuss some current research undertaken at SE Labs, as well as future applications of binaural audio."
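
As background to the binaural approach, the sketch below derives the two core localization cues, interaural time and level differences, for a source at a given azimuth, using the Woodworth ITD approximation. Production titles such as Audio Defence typically use full HRTF convolution; this simplified model and its parameter values are illustrative only.

```python
import math

def itd_ild(azimuth_deg: float, rate: int = 48000):
    """Return ((left_delay, left_gain), (right_delay, right_gain)) for a
    source at `azimuth_deg` (0 = front, +90 = hard right). Delays are in
    samples; the far ear is delayed and attenuated."""
    az = abs(math.radians(azimuth_deg))
    head_radius = 0.0875                               # metres, average head
    itd_s = head_radius / 343.0 * (az + math.sin(az))  # Woodworth formula
    delay = int(itd_s * rate)
    far_gain = 1.0 - 0.4 * math.sin(az)                # crude ILD
    near, far = (0, 1.0), (delay, far_gain)
    return (far, near) if azimuth_deg >= 0 else (near, far)

print(itd_ild(90))   # left ear: delayed ~31 samples and quieter
```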
 
Opportunities and challenges of using EBU-R128 for file-based speech loudness measurement in large multilingual games
Binari Sonori: Roberto Pomoni; Francesco Zambon
In recent years, loudness algorithms have set new standards for level measurement across the audio industry: broadcast, music, movies and, recently, games. EBU R128 defines three parameters (Integrated Loudness, Loudness Range, Max True Peak) that are widely used to monitor the levels of real-time final mixes, including voice, sound effects and music. Is this approach valid when dealing with large sets of voice-only audio files? Can we obtain an affordable and reliable measurement using the EBU R128 recommendation, even in non-linear, non-real-time scenarios? We explore the opportunities and challenges with the goal of drawing up workflow guidelines for reducing speech integration effort in large games.
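
As a concrete starting point for file-based measurement, the sketch below batch-measures Integrated Loudness with the open-source pyloudnorm library, an implementation of ITU-R BS.1770, the measurement underlying EBU R128. The folder path is a placeholder, and this is not Binari Sonori's tooling.

```python
import glob

import pyloudnorm as pyln   # pip install pyloudnorm
import soundfile as sf      # pip install soundfile

for path in sorted(glob.glob("vo/en/*.wav")):   # placeholder VO folder
    data, rate = sf.read(path)                  # samples as float array
    meter = pyln.Meter(rate)                    # BS.1770 meter
    lufs = meter.integrated_loudness(data)      # Integrated Loudness, LUFS
    print(f"{path}: {lufs:+.1f} LUFS")
```

Note that on very short voice files the measurement's gating can make the integrated figure less stable than for a long-form mix, which is part of what makes the voice-only scenario the talk examines non-trivial.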
 
Rendering Spatially-Extended 3D Audio in Games
David Thall - DTS
"In the game audio pipeline, sound designers and audio programmers would like not only simulate point sources, but sources with spatial extension. This is most useful in cases where a sound should emit from a collection of points defining a shape, or when a sound should emit from a point or shape that is closer than the physical speaker distance. Pairing a super-sampled convex hull and a configurable down-mixer, a system has been created that can accurately render spatially-extended audio to the target speaker layout.

In this session, we will begin with an overview of the system, including generating the convex hull and down-mix coefficients. Next, we will see (and hear) how this system is used to efficiently render point sources with improved spatial accuracy over the current offerings. Following this, we will examine the continuous and smooth modulation of a source from point to line to polygon, showcasing some of the capabilities of the system. Finally, we will explore the realm of expression in games, with some examples of possible implementations.

An audio plugin with a full 3D modeled display will be used to demonstrate the effects in real time.
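
One building block mentioned above, the convex hull, is easy to demonstrate: the sketch below samples a cloud of emitter points, keeps the hull surface, and splits the source gain across the resulting sub-emitters with an equal-power rule. It uses the stock scipy.spatial.ConvexHull and is only a sketch; the actual DTS renderer and down-mixer are not public.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.uniform(-2.0, 2.0, size=(30, 3))   # cloud of emitter points
hull = ConvexHull(points)
emitters = points[hull.vertices]                # points on the hull surface

# Equal-power split: N sub-emitters at 1/sqrt(N) of the source gain, so
# the summed power matches that of a single point source.
gain = 1.0 / np.sqrt(len(emitters))
print(f"{len(emitters)} sub-emitters, per-emitter gain {gain:.3f}")
```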
 
Audio for VR/AR — is designing with spatial audio any different?
Varun Nair & Abesh Thakur - Two Big Ears
The presentation will cover the technicalities behind spatial audio and the opportunities it offers for designing compelling audio for VR and AR. Simulating real-world physics isn't enough; we're often designing for reality on steroids! Careful sculpting of attenuation curves, pitch changes, audio cones, 3D audio parameters and geometry-based reverberation can help the brain suspend disbelief and respond to virtual reality.
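
For illustration, the sketch below implements two of the knobs mentioned above, a distance attenuation curve and a sound cone, with invented parameter values; these are not Two Big Ears presets.

```python
def attenuate(distance: float, ref: float = 1.0, rolloff: float = 1.5,
              max_dist: float = 50.0) -> float:
    """Inverse-power distance falloff, clamped to silence at max_dist."""
    if distance >= max_dist:
        return 0.0
    return min(1.0, (ref / max(distance, ref)) ** rolloff)

def cone_gain(angle_deg: float, inner: float = 45.0, outer: float = 120.0,
              outer_gain: float = 0.25) -> float:
    """Full gain inside the inner cone, `outer_gain` beyond the outer
    cone, and a linear blend in between."""
    a = abs(angle_deg)
    if a <= inner:
        return 1.0
    if a >= outer:
        return outer_gain
    t = (a - inner) / (outer - inner)
    return 1.0 + t * (outer_gain - 1.0)

print(attenuate(10.0), cone_gain(90.0))
```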
 
 