AES San Francisco 2010
Game Audio Track Event Details

 

Thursday, November 4, 9:30 am — 10:30 am (Room 132)

Tutorial: T1 - iPhone Sound Design—Lessons Learned

Presenter:
Jeff Essex

Abstract:
This one-hour session will be a comprehensive review of the factors to consider when creating audio for the iPhone, as well as strategies and tools for getting the best audio performance. Topics to be covered include:

* Audio architecture overview

* Using the software and hardware audio channels (compressed music and .caf SFX); a brief code sketch follows this list

* Designing for the speaker as well as headphones

* Frequency characteristics compared: iPod touch, iPad, iPhone 3x, iPhone 4

* Handy tools: TTW, SoundConverter, real-time analyzers

* Case study: iRingPro, and using GarageBand to create ringtones

* 3rd-party audio middleware: FMOD

* Product demos
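
As background for the software/hardware channel topic above, here is a minimal sketch (not part of the session materials) of playing a short .caf effect through the software path with System Sound Services, a plain C API that is also usable from C++. Compressed background music would normally be handed to a player object such as AVAudioPlayer so it can use the hardware decoder. The file path and function name below are placeholders.

    #include <AudioToolbox/AudioToolbox.h>
    #include <CoreFoundation/CoreFoundation.h>
    #include <cstring>

    // Play a short .caf sound effect via System Sound Services (software mixer,
    // intended for UI-style sounds of roughly 30 seconds or less).
    void PlayCafEffect(const char *path)            // e.g. "click.caf" (placeholder)
    {
        CFURLRef url = CFURLCreateFromFileSystemRepresentation(
            kCFAllocatorDefault, (const UInt8 *)path, strlen(path), false);
        SystemSoundID soundID = 0;
        if (AudioServicesCreateSystemSoundID(url, &soundID) == kAudioServicesNoError) {
            AudioServicesPlaySystemSound(soundID);  // fire-and-forget playback
            // A real app would keep soundID and call AudioServicesDisposeSystemSoundID
            // once the sound is no longer needed.
        }
        CFRelease(url);
    }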

Thursday, November 4, 12:00 pm — 1:00 pm (Room 120)

Game Audio: G1 - Code Monkey Part 1: What Game Audio Content Providers Need to Know About C++ Programming

Presenter:
Peter "pdx" Drescher, Sound Designer, Twittering Machine

Abstract:
It's like Jane Goodall and the chimps—learn the ways of programmers and they will let you into their group. Being able to speak "the language" is helpful not only for communicating your ideas to them, but also for tracking down implementation bugs and understanding what *exactly* your programmer has been saying all these years. The author has written an application in C++ / Objective-C for Mac OS X that plays FMOD Designer's Interactive Music examples and will use it to illustrate basic programming concepts. The source code and Xcode project will be available for download.
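
As a taste of the territory, here is a minimal sketch of driving an FMOD Designer project from C++ with the FMOD Ex event API; it is not the presenter's source code, and the project and event names are placeholders.

    #include "fmod_event.hpp"

    int main()
    {
        FMOD::EventSystem *eventSystem = 0;
        FMOD::Event       *music       = 0;

        FMOD::EventSystem_Create(&eventSystem);
        eventSystem->init(64, FMOD_INIT_NORMAL, 0, FMOD_EVENT_INIT_NORMAL);
        eventSystem->load("examples.fev", 0, 0);        // .fev built by FMOD Designer
        eventSystem->getEvent("examples/music/theme", FMOD_EVENT_DEFAULT, &music);
        music->start();

        for (int i = 0; i < 600; ++i) {  // ~10 s at a 60 Hz tick (a real app would sleep)
            eventSystem->update();       // must be called regularly
        }

        eventSystem->release();
        return 0;
    }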

Thursday, November 4, 2:30 pm — 4:30 pm (Room 206)

Master Class: M2 - High Resolution Computer Audio

Presenter:
Keith O. Johnson

Abstract:
Computers, televisions, and mobile devices are functionally merging and integrating to become easy-to-set-up, fun-to-run systems. Right now, server-like versions can play high-resolution multichannel files from workstations, function as a control center, create loudspeaker crossovers and equalization, and perform room correction using all loudspeakers. Data management and processing create these advanced features, but such systems can present issues with processing activity, sample rate conversion, jitter, noise propagation, digital conversion, and interfaces. One encounters discussion about perceptual differences arising from technical changes that should neither affect accuracy nor produce large differences in timing spectra. The class will briefly introduce systems and components, then show potential behavioral artifacts within system parts and describe test methods that might reveal explanations. We will then explore process monitoring, buffers, quick-locking low-jitter clocks, floating conversion environments, jitter from op-amps, and a test methodology that targets process-related intrinsic behavioral problems. Cluster, subtraction, and DSP overload tests will be described, along with the projected perceptual and mental load that might be audible or hinder playback involvement. This background presentation should encourage more detailed studies and should be helpful in the creation and development of very high-quality systems.

Thursday, November 4, 2:30 pm — 4:30 pm (Room 120)

Game Audio: G2 - Rock On! – The Rock Band Network Demystified

Panelist:
Jeff Marshall, Executive Producer, Rock Band Network, Harmonix

Abstract:
With the popularity of music games like Rock Band comes opportunity for musicians, engineers, and producers to provide musical content. The Rock Band Network was created to broaden the spectrum of available content and to allow your music to be experienced by a unique and untapped market. This in-depth how-to session will give you plenty to chew on, straight from the horse's mouth!

Thursday, November 4, 2:30 pm — 4:15 pm (Room 130)

Workshop: W3 - The Future Is Tactile: How Does Whole Body Vibration Affect Perception of Binaural Audio Over Headphones?

Chair:
Todd Welti, Harman International
Panelists:
Clemeth L. Abercrombie, Artec Consultants Inc.
M. Ercan Altinsoy, Technische Universität Dresden
Sungyoung Kim, Yamaha Corporation
Sean Olive, Harman International

Abstract:
Audio playback at moderate to high levels over headphones alone does not recreate the tactile sensations of the recorded or simulated environment. When striving for an accurate and compelling experience, including tactile stimuli can make the playback system more physically accurate; however, including accurate tactile stimulation is not trivial. For example, how do you reproduce tactile stimuli accurately, and does it significantly enhance the experience? More specifically, what is the effect of tactile stimuli on perceived spectral response and loudness? What is the effect of timing asynchrony between aural and tactile channels? How do tactile and aural stimuli interact perceptually? Some of the psychoacoustic issues for haptic feedback systems may also be relevant to binaural playback.

Thursday, November 4, 5:00 pm — 6:30 pm (Room 206)

Game Audio: G3 - The Wide Wonderful World of 5.1 Orchestral Recordings

Panelists:
Richard Dekkard, Director, Orphic Media LLC
Tim Gedemer, Owner/Supervising Sound Editor, Source Sound Inc.

Abstract:
When recording an orchestra was a simple affair using two or three microphones, the performance, the choice and placement of said microphones, and the quality of the recording medium were all that factored into the result. These days, orchestral recording takes almost as many forms as pop recording. Spot mics, multichannel arrays, postproduction, and editing are all used in the production process. In this panel, experts in both game and film audio will discuss the means by which producers and engineers arrive at their final goals for different formats and deal with the challenges of 5.1 orchestral recording for their mediums. Topics will include the different footprint limits of the 5.1 format used for games versus the film format, as well as those involved in streaming bandwidth for both games and movies. Panelists will go over editing in 5.1 for games to accommodate player-driven music, as compared to the linear progression standard in film editing, and will also discuss the lack of standards for 5.1 in games versus the established process and standards for film.

Friday, November 5, 9:00 am — 10:00 am (Room 120)

Game Audio: G4 - Code Monkey Part 2: LUA is not a Hawaiian Picnic—The Basics of Scripting for Dynamic Audio Implementation

Presenter:
Kristoffer Larson, Audio Manager, WB Games - Seattle, WA, USA

Abstract:
Isn't a script what you use for your VO sessions? Why, yes, little Billy, but in the world of dynamic audio it means something different. Scripting can be the cheese in your excellent sound sandwich. This session will teach you scripting basics and how scripting can enhance your dynamic audio implementation. (Hold the mayo.) Lua is not universally used, but it is common enough that you'll benefit from having a basic understanding of it. Take-home examples will be provided.
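
To make the idea concrete, here is a hedged sketch (not taken from the session) of a C++ engine exposing a single audio hook to Lua so a script can decide what to play; the function and event names are hypothetical.

    extern "C" {
    #include "lua.h"
    #include "lualib.h"
    #include "lauxlib.h"
    }
    #include <cstdio>

    static int l_play_sound(lua_State *L)            // C function callable from Lua
    {
        const char *eventName = luaL_checkstring(L, 1);
        std::printf("engine: play sound event '%s'\n", eventName);
        return 0;                                    // no values returned to Lua
    }

    int main()
    {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);
        lua_register(L, "play_sound", l_play_sound); // expose the engine hook

        // In practice this script would live in a .lua file owned by the sound team.
        const char *script =
            "health = 15\n"
            "if health < 25 then\n"
            "  play_sound('player_heartbeat_loop')\n"
            "else\n"
            "  play_sound('ambience_normal')\n"
            "end\n";
        luaL_dostring(L, script);

        lua_close(L);
        return 0;
    }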

Friday, November 5, 11:30 am — 1:00 pm (Room 120)

Game Audio: G5 - Developing Sensible Reference Level Standards

Chair:
Steve Martz, Sr. Design Engineer, THX Ltd.
Panelists:
Lance Brown, Cinematic Game Audio Consultant
Charles Deenen, Senior Creative Director, Audio, Electronic Arts
Ken Felton, Sound Design Manager, SCEA
Tom Hays, Director of Audio Services, Technicolor
Francesco Zambon, Audio Project Lead, Binari Sonori s.r.l.

Abstract:
A continuing challenge for game developers, particularly in environments where the mix is dynamic and constantly changing, is devising (and abiding by) guidelines for appropriate playback levels. While the ever-louder, heavily dynamic-range-compressed strategies of the music industry may be appropriate in that world, games can use multiple alternative techniques to 'feel' louder and maintain a wide dynamic range without forcing the player to scramble for the remote. This panel will cover the findings of an ongoing multi-platform, multi-studio conversation about what such a set of guidelines would look like and how they can be applied.

Friday, November 5, 2:30 pm — 4:00 pm (Room 120)

Game Audio: G6 - Mobile Game Audio for Headphones and Micro-Speakers

Chair:
Steve Martz, Sr. Design Engineer, THX Ltd.
Panelists:
Peter "pdx" Drescher, Sound Designer, Twittering Machine
Greg Klas, Sr. Manager, Audio Engineering, Fisher-Price, Inc.
Jeffrey Xia, Sr. Acoustics Engineer, Ole Wolf Electronics

Abstract:
Mobile platforms (phones, toys, portable gaming devices) typically must rely on small speakers to recreate an immersive environment for games. These playback devices have certain attributes that require unique approaches when creating content for mobile entertainment.

A panel consisting of speaker manufacturers and mobile game creators will discuss the performance characteristics and limitations of headphones and other micro-speakers as they pertain to playback of game audio on those devices, as well as considerations for designing game content.

Friday, November 5, 4:15 pm — 5:15 pm (Room 120)

Game Audio: G7 - Audio Shorts—Sound Design

Presenters:
Randy Buck, Principal, The Sound Department - Austin, TX, USA
Charles Deenen, Senior Creative Director, Audio, Electronic Arts
Kristoffer Larson, Audio Manager, WB Games, Seattle, WA, USA
Marc Schaefgen, Principal/Owner, The Sound Department - Austin, TX, USA
Jay Weinland, Senior Audio Lead, Bungie Studios

Abstract:
Three mini-sessions presented by game audio dudes, guaranteed to send you away with cool new techniques. Each presenter gets twenty minutes to serve up an in-depth look at a sound design topic that matters most to them. Q&A to follow.

Shorty #1:
Game Audio Sound Sourcing - What is special about gathering sonic source material for games as opposed to other media? Games need longer ambiences (streams can run 5 minutes or more), more variation, more mic perspectives, and components rather than complex events.

Shorty #2:
The Loop Trick and More - How to create seamless loops of any length, and the best way to approach sound design for looping material. And, to loop or not to loop? That is always the question when it comes to rapid-fire weapons. Learn a few techniques that go beyond the loop. (A generic crossfade sketch follows Shorty #3 below.)

Shorty #3:
My Favorite Plugin! - Three speakers from other sessions will talk about their current favorite plug-in, why they love it so much, and how they use (and abuse) it.
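
As promised in Shorty #2, here is one generic form of the loop trick (a sketch of the common technique, not necessarily any panelist's method): crossfade the tail of a recording back into its head so the loop point becomes inaudible. The buffer is mono float samples and the fade length is a tuning choice.

    #include <vector>
    #include <cmath>

    std::vector<float> MakeSeamlessLoop(const std::vector<float> &src, size_t fadeLen)
    {
        if (src.size() <= fadeLen) return src;       // not enough material to fold back

        // The loop body is everything except the tail we fold onto the head.
        size_t loopLen = src.size() - fadeLen;
        std::vector<float> loop(src.begin(), src.begin() + loopLen);

        for (size_t i = 0; i < fadeLen; ++i) {
            float t = static_cast<float>(i) / fadeLen;   // 0 -> 1 across the fade
            float gainIn  = std::sqrt(t);                // equal-power crossfade
            float gainOut = std::sqrt(1.0f - t);
            // Mix the fading-out tail under the fading-in head samples.
            loop[i] = gainIn * src[i] + gainOut * src[loopLen + i];
        }
        return loop;   // repeating 'loop' end-to-end now plays without a click
    }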

Friday, November 5, 5:30 pm — 6:30 pm (Room 120)

Game Audio: G8 - Code Monkey Part 3: <learn>XML</learn>

Presenter:
Michael Kelly

Abstract:
This session provides an overview of XML, the eXtensible Markup Language, and explains how it is used within the game-audio production pipeline. It is a chance to pick up some pointers on how to read and write XML. This introductory-level session is for those who work with XML (whether you know it or not!) and want to know more. The session may also be beneficial to those outside the games industry, and it shows how some popular game-audio tools use XML. The session ends with some pointers to more advanced topics for the adventurous.
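
To show the flavor of XML in an audio pipeline, here is a hedged example: the file format is invented for illustration (real tools each define their own schema), and it is read with the TinyXML-2 library.

    #include "tinyxml2.h"
    #include <cstdio>

    int main()
    {
        // events.xml might contain (hypothetical schema):
        //   <AudioEvents>
        //     <Event name="footstep_dirt" file="fs_dirt_01.wav" volume="0.8" loop="false"/>
        //     <Event name="rain_loop"     file="rain.wav"       volume="0.5" loop="true"/>
        //   </AudioEvents>
        tinyxml2::XMLDocument doc;
        if (doc.LoadFile("events.xml") != tinyxml2::XML_SUCCESS) return 1;

        tinyxml2::XMLElement *root = doc.FirstChildElement("AudioEvents");
        if (!root) return 1;

        for (tinyxml2::XMLElement *e = root->FirstChildElement("Event");
             e != 0; e = e->NextSiblingElement("Event")) {
            const char *name   = e->Attribute("name");
            float       volume = e->FloatAttribute("volume");
            bool        loop   = e->BoolAttribute("loop");
            std::printf("event %s: volume=%.2f loop=%d\n", name, volume, (int)loop);
        }
        return 0;
    }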

Saturday, November 6, 9:00 am — 11:00 am (Room 120)

Game Audio: G9 - Game Industry Overview

Chair:
Marc Schaefgen, Principal/Owner, The Sound Department - Austin, TX, USA
Panelists:
Lance Brown, Cinematic Game Audio Consultant
Charles Deenen, Senior Creative Director, Audio, Electronic Arts
Adam Levenson, Senior Director, Central Audio and Talent, Activision/Blizzard

Abstract:
How does the game industry work compared to other media industries? What is the game development process? Unlike other media industries, the deployment platforms for games are constantly evolving, as are the tools used to create content for said platforms. What if the TV industry experienced an evolution like HD every five years? How do game developers sail the seas of technical change? How does the technology affect creativity? Where can I fit in if I'm coming from a related media industry? Lots of good questions; these and more will be answered by a panel of top game audio professionals.

Saturday, November 6, 11:00 am — 1:00 pm (Room 206)

Workshop: W13 - Progress in Computer-Based Playback of High Resolution Audio

Chair:
Vicki R. Melchior, Audio DSP Consultant - Boston, MA, USA
Panelists:
Bob Bauman, Lynx Studio Technology - Costa Mesa, CA, USA
James Johnston, DTS Inc. - Calabasas, CA, USA
Andy McHarg, dCS Ltd. - Cambridge, UK
Daniel Weiss, Weiss Engineering Ltd. - Zurich, Switzerland

Abstract:
With the continuing decline of discs as music sources and the concurrent growth of electronic distribution, computers and network-attached storage (NAS) are now rapidly evolving as front-end components in place of traditional transports and players. Computers have long been useful within mastering workflows, though not always loved, and their introduction into high quality music systems raises a new range of engineering challenges.

Intrinsic to computers are problems of EMC, switching noise, dirty power, jittered clocks, crosstalk, driver and operating system variability, protocol incompatibilities, and software errors, to name a few. These may directly influence audio quality. Of special importance, for example, are the design as well as system configuration of digital audio interfaces (USB, Firewire, S/PDIF, WiFi, Ethernet etc), D/A conversion, and data processing, along with clocks and power sourcing.

The panelists in this workshop are active in the design of these systems and will discuss some of their results and thoughts regarding the most salient factors for optimizing sonic performance in this area.

Saturday, November 6, 11:15 am — 12:45 pm (Room 120)

Game Audio: G10 - Audio Cage Match!

Referee:
Steve Horowitz, Composer/Producer/Referee, President, The Code International, Inc.
Panelists:
Peter "pdx" Drescher, Sound Designer, Twittering Machine
Larry the O

Abstract:
A spirited "cage match" discussion about sound, music, interactive audio, game soundtracks, hardware, software, and audio production by two experienced industry veterans. Peter Drescher and Larry the O met at Berklee College of Music in 1976 and have been arguing both sides of any audio issue ever since. A series of questions will be asked by the moderator; each will be discussed for a specific amount of time. Topics might include:

• “I promise never to program a computer to play something I can’t” (aaaand ... fight!)
• How much does audio quality really matter?
• Why does MIDI sound bad, and what's it good for anymore?
• Is music supposed to be easy to learn, or difficult?
• Why do game soundtracks have to SUCK so bad!?
• Who needs interactive audio anyway?
• The Myth of Music Ownership: music as “a service you listen to,” not “a thing that you buy.”
• Have advances in technology actually made audio production better?
• (Stay tuned ... more to come)

Referee will be provided. Trained medical personnel standing by. Remember, this is not a competition, it is only an exhibition; please, no wagering. After all, there is always the remote chance these guys might agree on something!

Saturday, November 6, 2:30 pm — 4:30 pm (Room 120)

Game Audio: G11 - Careers in Game Audio

Chair:
Steve Horowitz, Composer/Producer, President, The Code International, Inc.
Panelists:
Tim Duncan, DMA; Associate Professor; Director, Digital Audio Technology, Cogswell Polytechnical College
Shiloh Hobel, Sr. Director, Industry and Career Services - Sound Arts, Ex'pression College for Digital Arts
David Javelosa, Professor of Game Development, Santa Monica College
Lennie Moore, Composer
Michael Sweet, Associate Professor, Berklee College of Music

Abstract:
You already know that you want a career in game audio. You even know which position is best for you. The big question is: how do you get the training and experience necessary to land that first job? This workshop will present the latest work of the IASIG (Interactive Audio Special Interest Group) and local educational institutions to develop standardized school curricula for games and interactive audio. Programs are springing up all over the world, and this panel will provide a big-picture overview of what training is available now and what is coming in the future. From single overview classes and associate degree programs to full four-year university study, we will preview the desired skill sets and templates and the recommended path for getting that position on the audio team.

Saturday, November 6, 4:30 pm — 6:00 pm (Room 206)

Game Audio: G12 - Mixing the DICE Way—Battlefield, HDR Audio, and Instantiated Mixing

Presenter:
David Mollerstedt, EA/DICE

Abstract:
In this session David Mollerstedt presents the detailed concepts behind how High Dynamic Range (HDR) Audio performs adaptive runtime level balancing. He further explains the Instantiated Mixer System in the Frostbite engine, which allows for elaborate manipulation of individual sounds and asset groups.
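
For readers new to the concept, here is a rough sketch of the HDR-audio idea as it has been described publicly, not DICE's actual Frostbite implementation: each frame, the loudest active sound defines the top of a sliding loudness window, sounds falling below the window are culled, and the remainder are mapped into the available output range.

    #include <vector>
    #include <algorithm>
    #include <cmath>

    struct ActiveSound {
        float loudnessDb;   // estimated perceived loudness of this instance
        float gain;         // runtime gain applied by the mixer (linear)
        bool  audible;
    };

    void UpdateHdrWindow(std::vector<ActiveSound> &sounds, float windowSizeDb)
    {
        if (sounds.empty()) return;

        float windowTop = sounds[0].loudnessDb;
        for (const ActiveSound &s : sounds)
            windowTop = std::max(windowTop, s.loudnessDb);
        float windowBottom = windowTop - windowSizeDb;

        for (ActiveSound &s : sounds) {
            if (s.loudnessDb < windowBottom) {
                s.audible = false;          // masked: stop or virtualize this sound
                s.gain    = 0.0f;
            } else {
                s.audible = true;
                // Slide the window so its top sits at full scale, then convert
                // each sound's offset below the top into a linear gain.
                float offsetDb = s.loudnessDb - windowTop;
                s.gain = std::pow(10.0f, offsetDb / 20.0f);
            }
        }
    }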

Sunday, November 7, 9:00 am — 11:00 am (Room 120)

Game Audio: G13 - Takin' Care of Business

Chair:
Scott Selfon, Lead Program Manager, Microsoft Corporation
Panelists:
Rod Abernethy, Red Note Audio
Simon Amarasingham, CEO, dSonic Inc.
Alistair Hirst, CEO, OMNI Audio
Julien Kwasneski, President, Bay Area Sound, Inc.
Josh Rose, CEO and Co-founder, Flying Wisdom Studios

Abstract:
A panel representing five game audio production companies will give insight into how to run a successful business. Moderated by one of their peers, the discussion will cover a wide range of questions and will involve audience participation.

Sunday, November 7, 11:30 am — 1:00 pm (Room 120)

Game Audio: G14 - Physics Psychosis

Panelists:
Stephen Hodde, Associate Audio Designer, Volition, Inc. (THQ)
Damien Kastbauer, Technical Sound Designer, Bay Area Sound
Jay Weinland, Senior Audio Lead, Bungie Studios

Abstract:
This session will dig into what can become a very complex implementation problem: physics. Solutions range from simple to extremely complex depending on the demands of the game, the computational resources available, and the ambition of the audio team. Three developers will delve into the solutions they have used, discuss the pros and cons, and describe where they'd like to go in the future.
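
To give a sense of the simple end of that spectrum, here is a generic sketch (not any panelist's system): a physics collision callback picks an impact sound by material pair and scales its volume by collision strength. All names and thresholds are hypothetical.

    #include <algorithm>
    #include <cstdio>
    #include <string>

    void OnCollision(const std::string &materialA,
                     const std::string &materialB,
                     float impulse)                  // reported by the physics engine
    {
        // Build an event name like "impact_metal_wood"; a real system would look
        // this up in a table authored by the sound designers.
        std::string eventName = "impact_" + std::min(materialA, materialB) +
                                "_" + std::max(materialA, materialB);

        // Ignore tiny contacts, then map impulse into a 0..1 volume.
        const float minImpulse = 0.5f, maxImpulse = 50.0f;
        if (impulse < minImpulse) return;
        float volume = std::min(1.0f, (impulse - minImpulse) / (maxImpulse - minImpulse));

        std::printf("play %s at volume %.2f\n", eventName.c_str(), volume);
    }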

Sunday, November 7, 2:30 pm — 4:00 pm (Room 120)

Game Audio: G15 - Game Audio for Visually Impaired (CANCELLED)

Presenter:
Michelle Hinn

Abstract:
Good sound design is very important for placing a player in an interactive environment. It is even more critical for those who are visually impaired. Panelists will discuss methods and strategies for sound design that allow all players to participate in the action.

WE REGRET THAT DUE TO UNFORESEEN CIRCUMSTANCES THIS SESSION IS CANCELLED.

Sunday, November 7, 2:30 pm — 5:00 pm (Room 220)

Paper Session: P26 - Auditory Perception

Chair:
Poppy Crum

P26-1 Progress in Auditory Perception Research Laboratories—Multimodal Measurement Laboratory of Dresden University of Technology
M. Ercan Altinsoy, Ute Jekosch, Sebastian Merchel, Jürgen Landgraf, Dresden University of Technology - Dresden, Germany
This paper presents the general ideas and implementation details of the MultiModal Measurement Laboratory (MMM Lab) of Dresden University of Technology. The lab combines VR equipment for multiple modalities (auditory, tactile, vestibular, visual) and is capable of presenting high-performance, interactive simulations. The goals are to discuss the progress made in auditory perception research laboratories in recent years and the technical parameters that should be considered for the implementation of reproduction systems for different modalities.
Convention Paper 8305

P26-2 Families of Sound Attributes for the Assessment of Spatial Audio
Sarah Le Bagousse, Orange Labs - France Télécom R&D - Cesson Sévigné, France; Mathieu Paquier, LISyC - Université de Bretagne Occidentale - Brest, France; Catherine Colomes, Orange Labs - France Télécom R&D - Cesson Sévigné, France
In recent years, studies have highlighted many features liable to be used for the characterization of sounds through several elicitation methods. These experiments have resulted in a long list of sound attributes. But because their respective meanings and weights are not alike for assessors and listeners, analyzing the results of a listening test based on sound criteria remains complex and difficult. The experiments reported in this paper were aimed at shortening the list of attributes by clustering them into sound families, based on the results of two semantic tests using either (i) free categorization or (ii) a multi-dimensional scaling method.
Convention Paper 8306

P26-3 Listening Tests for the Effect of Loudspeaker Directivity and Positioning on Auditory Scene Perception
David Clark, DLC Design - Northville, MI, USA
Using stereo playback in a typical living room, subjects were exposed to six loudspeaker configurations under double-blind conditions and asked if the auditory scene was better or worse than that presented by a reference stereo system. For all configurations, the auditory scene was judged to be plausible, but mean scores were lower than those for the reference. The reference comprised symmetrically-placed conventional box loudspeakers with subwoofers.
Convention Paper 8307

P26-4 Modeling Tempo of Human Response to a Sudden Tempo Change Using Damped Harmonic Oscillators
Nima Darabi, Peter Svensson, Jon Forbord, Norwegian University of Science and Technology - Trondheim, Norway
A human-computer interactive subjective test was conducted in which 12 users tapped along with a suddenly changing metronome by hand-clapping and finger-tapping. Up-sampled recordings of the trials, using different interpolation methods, were used to measure the internal timekeeper's tempo in response to each tempo step. An iterative prediction-error minimization method was applied to the step-response signals to identify the underlying tempo-changing system of the human users in this sensori-motor synchronization task. Experimental data indicated that the system is fairly LTI and would most likely resemble a second-order damped harmonic oscillator. Fit-ratio comparison showed that a delayed two-pole, one-zero underdamped oscillator (P2DUZ) could be the trade-off between complexity and efficiency of the model. The related parameters for each user (as a set of their memory-related built-in factors) were also extracted and shown to be slightly individual-dependent.
Convention Paper 8308
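
For orientation only (the paper's exact parameterization may differ), a delayed two-pole, one-zero underdamped oscillator of the kind named above has the generic continuous-time transfer function

    H(s) = \frac{K\,(s + z)\,e^{-sT}}{s^2 + 2\zeta\omega_n s + \omega_n^2}, \qquad 0 < \zeta < 1,

where T is the delay, z the zero location, \omega_n the natural frequency, and \zeta the damping ratio.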

P26-5 Increasing Intelligibility of Multiple Talkers by Selective Mixing
Piotr Kleczkowski, Magdalena Plewa, Marek Pluta, AGH University of Science and Technology - Kraków, Poland
Five tracks of speech signal were recorded. One of the tracks, the target track, consisted of spoken numbers, so that by counting the number of correctly heard words the degree of comprehension of the target talker could be quantified in each trial. Two types of mixes of all five tracks were performed: a simple mix and a selective mix. The latter mix is a development of the processing technique known as binary masking. A large group of subjects (54) listened to both types of mixes and it was found that selective mixing slightly increased the intelligibility of the target talker.
Convention Paper 8309

