AES New York 2013
Game Audio Track Event Details

Thursday, October 17, 11:15 am — 12:30 pm (Room 1E10)


Game Audio: G1 - Planes, Trains, and Automobiles: Creating & Implementing Vehicle Sounds for Games

Presenter:
Mike Caviezel, Microsoft Game Studios - Redmond, WA, USA

Abstract:
This session will discuss some of the basic vehicle audio design concepts commonly found in games today. We’ll talk about system design, recording and sound design methodology, and various implementation techniques and tricks for making vehicles sound great in games.

 
 

Thursday, October 17, 12:00 pm — 12:45 pm (Room 1E11)

Workshop: W5 - Height Channels: Theory, Practice, and "Ears-On" Experience

Chair:
David Bowles, Swineshead Productions LLC - Berkeley, CA, USA
Panelists:
Paul Geluso, New York University - New York, NY, USA
Agnieszka Roginska, New York University - New York, NY, USA

Abstract:
Over the past century, sound recording and reproduction have employed an increasing number of audio channels to simulate spatial dimensions, capturing the horizontal axes in stereo and surround sound. The next step in immersive audio is the vertical dimension: height-channel recording and reproduction. The members of this panel will discuss different recording techniques for capturing height channels and whether this audio information can be integrated into conventional stereo and surround-sound recordings. Vital to this dialog is a clearer psychoacoustic understanding of how we perceive sound outside the horizontal plane, and how that perception influences engineers' choices in microphone technique and loudspeaker placement. Finally, post-production methods for creating 3-D sonic imagery from recordings not originating in 3-D will be discussed. This workshop will be divided into two parts: a technical panel discussion at the Javits Center, followed by playback sessions at the James Dolan Studios at New York University.

 
 

Thursday, October 17, 2:30 pm — 4:30 pm (Room 1E11)

Game Audio: G2 - Diablo III—Post Mortem

Presenters:
Russell Brower, Blizzard Entertainment
Derek Duke, Blizzard Entertainment
Joseph Lawrence, Blizzard Entertainment

Abstract:
Look behind the curtain as the Audio Team behind Diablo III shows us the world of game audio development from multiple perspectives—the audio director, sound designer, and composer. Discover the tips, tricks, and techniques of a major AAA title’s audio design process from conception to completion in this postmortem.

 
 

Thursday, October 17, 2:30 pm — 4:00 pm (Room 1E13)

Workshop: W7 - Tools and Workflow for the Creation of Immersive Content

Chair:
Bert Van Daele, Auro Technologies NV - Mol, Belgium
Panelists:
Fred Maher, DTS Inc. - Calabasas, CA, USA
Jurgen Scharpf, Dolby - San Francisco, CA, USA
Wilfried Van Baelen, Auro Technologies - Mol, Belgium
Brian A. Vessa, Sony Pictures Entertainment - Culver City, CA, USA; Chair SMPTE 25CSS standards committee

Abstract:
In this workshop the requirements for new tools and workflows to create three-dimensional, immersive content are discussed. The panel members present their own solutions and discuss the creative possibilities as well as the requirements for compatibility between different systems and deliverables.

This session is presented in association with the AES Technical Committee on Sound for Digital Cinema and Television

 
 

Thursday, October 17, 5:00 pm — 6:30 pm (Room 1E14)

Workshop: W9 - Can We Measure Emotions?

Chair:
Judith Liebetrau, Ilmenau University of Technology - Ilmenau, Germany; Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany
Panelists:
Frederik Nagel, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; International Audio Laboratories - Erlangen, Germany
Mark B. Sandler, Queen Mary University of London - London, UK
Chia-Jung Tsay, University College London - London, UK

Abstract:
Music evokes and carries emotions. Music emotion recognition (MER), a part of music information retrieval (MIR), examines the question: which parts of music evoke which emotions, and how can they be automatically classified? Classification systems need to be trained in terms of feature selection and prediction. As training data, musical pieces whose average emotional impact is known must be used. Due to the subjectivity of emotions, generating such ground-truth data poses several challenges. In this workshop, obstacles in measuring and automatically predicting emotions evoked by music will be examined.

Among others, the workshop will address the following topics: What is an emotion and how can it be defined? Is there a difference between felt and perceived emotion? Can the emotional impact of a musical piece be subjectively measured? Can the emotional impact of a musical piece be predicted?

This session is presented in association with the AES Technical Committee on Perception and Subjective Evaluation of Audio Signals

 
 

Friday, October 18, 9:00 am — 11:00 am (Room 1E11)

Game Audio: G3 - Scoring "Tomb Raider": The Music of the Game

Presenter:
Alex Wilmer, Crystal Dynamics

Abstract:
"Tomb Raider's" score has been critically acclaimed as uniquely immersive and of a quality on par with film. It is a truly scored experience that has raised the bar for the industry. Achieving this required developing new techniques in almost every part of the music's production. This talk will focus on the process of scoring "Tomb Raider," covering everything from the music's creative direction and composition to its implementation and the technology behind it.

 
 

Friday, October 18, 11:30 am — 1:00 pm (Room 1E11)

Sound for Picture: SP1 - Creative Dimension of Immersive Sound—Sound in 3D

Chair:
Brian McCarty, Coral Sea Studios Pty. Ltd - Clifton Beach, QLD, Australia
Panelists:
Marti Humphrey CAS, The Dub Stage - Burbank, CA, USA
Branko Neskov, Loudness Films - Lisbon, Portugal

Abstract:
Audio for cinema has always struggled to replicate the motion shown on the screen, a fact that became more apparent with 3D films. Several methodologies for "immersive sound" are currently under evaluation by the industry; theater owners and film companies alike have made clear that they will not tolerate a format war and that a common format is a commercial requirement.

The two major methods of creating immersive sound and audio motion are referred to as "object-based" and "channel-based." Each has its strengths and limitations for retrofit into the current cinema market. With few sound mixers experienced in either of these techniques, we're pleased to welcome two of the pioneers, one with experience at Auro3D and the other in Atmos, in a discussion of their experiences and comments on working with the two systems.

This session is presented in association with the AES Technical Committee on Sound for Digital Cinema and Television

 
 

Friday, October 18, 11:45 am — 12:45 pm (Room 1E09)


Game Audio: G4 - Loudness in Interactive Sound Roundup

Presenter:
Garry Taylor, Sony Computer Entertainment Europe - Cambridge, UK

Abstract:
Over the years there has been much talk of reining in the loudness problem in the games industry. It's not just talk anymore. Listen to those who have made progress in this field and learn how to apply their efforts to your title. Recently, Sony’s Audio Standards Working Group (ASWG) released loudness recommendations for their first-party titles. Garry Taylor, Audio Director at Sony Computer Entertainment, looks at the work of the ASWG, the data they collected, and how that data influenced their recommendations. He looks at their first loudness paper and how their titles are measured and tested at Quality Assurance.

 
 

Friday, October 18, 2:00 pm — 4:00 pm (Room 1E10)

Game Audio: G5 - Game Audio: A Primer and Educational Resources

Chair:
Stephen Harwood, Jr., Education Working Group Chair; IASIG - New York, NY, USA
Presenters:
Andrew Aversa, Drexel University; Impact Soundworks
Leonard J. Paul, School of Video Game Audio - Vancouver, Canada
Jean-Luc Sinclair, New York University - New York, NY, USA
Michael Sweet, Berklee College of Music - Boston, MA, USA

Abstract:
Game-curious? Interested in the video game industry but unsure of what exactly it is that we do here? Video game production values are improving rapidly, creating increased demand for top-notch, experienced audio professionals, but many composers, sound designers, and producers looking to bring their expertise from the world of film and TV into the video game industry are uncertain about what it is they’ll be getting themselves into. Fortunately, the field of game audio education is developing rapidly—more schools are offering related courses each year. Following a presentation of the differences between audio production for games and for film and television, this session will feature a discussion of best practices and suggestions for how to learn what it takes to succeed as an audio professional working in the game space. Come prepared to inquire, be inspired, and take notes.

 
 

Friday, October 18, 5:00 pm — 6:30 pm (Room 1E10)

Game Audio: G6 - In the Trenches

Chair:
Scott Selfon, Microsoft - Redmond, WA, USA
Presenters:
Russell Brower, Blizzard Entertainment
Jason Kanter, Avalanche Studios
D. Chadd Portwine, Vicarious Visions
Alex Wilmer, Crystal Dynamics

Abstract:
The guys doing the work know the most. Let's hear what they have to say about what bugs them, makes them smile, makes them drink. Tool sets both commercial and proprietary are how we get the job done. What works, what needs improvement? Who do these people rely upon for tech help, production info, direction, physical therapy? What goes on behind closed doors?

 
 

Saturday, October 19, 9:00 am — 11:00 am (Room 1E14)


Student / Career: SPARS Speed Counseling with Experts—Mentoring Answers for Your Career

Moderator:
Kirk Imamura, Avatar Studios - New York, NY; SPARS Foundation

Abstract:
This event is especially suited for students, recent graduates, young professionals, and anyone interested in career advice. Hosted by SPARS in cooperation with the AES Education Committee and G.A.N.G., career-related Q&A sessions will be offered to participants in a speed group mentoring format: every 20 minutes, a dozen students will interact with 4–5 working professionals in specific audio engineering fields or categories. Audio engineering fields/categories include gaming, live sound/live recording, audio manufacturer, mastering, sound for picture, and studio production. Listed mentors are subject to change.

The List of Mentors
Studio Production: Mark Rubel, David Kahne, Craig Schumacher, Chris Mara, Glenn Lorbecki, Drew Waters, Barry Rudolph, Pat McMakin, Kevin Killen, Todd Whitelock
Sound for Picture: Leslie Mona-Mathus, Eric Johnson, Jamie Baker, Bill Higley, Jun Mizumachi, Rick Senechal
Gaming: Tom Salta, Gina Zdanowicz, Scott Selfon, Randy Coppinger
Live Sound/Live Recording: Erik Zobler, Jeri Palumbo, Rick Camp
Theatrical Sound: Peter Hylenski, Nevin Steinberg, Kai Harada, Simon Matthews

 
 

Saturday, October 19, 9:00 am — 10:30 am (Room 1E11)

Sound for Picture: SP3 - Dialog Editing and Mixing for Film (Sound for Pictures Master Class)

Presenters:
Brian McCarty, Coral Sea Studios Pty. Ltd - Clifton Beach, QLD, Australia
Fred Rosenberg

Abstract:
Film soundtracks contain three elements—dialog, music, and sound effects. Dialog is the heart of the process, with “telling the story” the primary goal of the dialog. With multiple sources of dialog available, the assessment and planning of the dialog and subsequent mixing is a critical element in the process. This Master Class with one of Hollywood's leading professionals puts the process under the microscope.

This session is presented in association with the AES Technical Committee on Sound for Digital Cinema and Television

 
 

Saturday, October 19, 9:00 am — 10:00 am (Room 1E10)


Game Audio: G7 - Code Monkey: Mapping Audio into a 3D Game World

Presenter:
Michael Kelly, DTS, Inc. - London, UK

Abstract:
In a game, audio lives not in isolation but as part of a rich and complex 3-D world. This code monkey session gives an overview of the links between the 3-D game world and the world of audio DSP, with particular emphasis on the representation of sound within a 3-D world. The tutorial is aimed at audio engineers looking to brush up on a little 3-D math and helper SDKs, as well as those who are new to the field. Attendees can expect coverage of converting from Cartesian to spherical coordinates, matrix transformations, vector-base amplitude panning, distance modeling, programming examples with XAudio2, and how it all fits together.
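To make the coordinate math concrete, here is a minimal sketch (not from the session materials) of the Cartesian-to-spherical conversion and a simple inverse-distance attenuation curve of the kind the abstract mentions. The axis convention (+z forward, +x right, +y up) and the `refDistance` parameter are illustrative assumptions, not a specific engine's API.

```typescript
interface Spherical {
  azimuth: number;   // radians, 0 = straight ahead, positive to the right
  elevation: number; // radians, positive above the horizontal plane
  distance: number;  // same units as the input coordinates
}

// Convert a listener-relative position from Cartesian (x, y, z) to
// spherical coordinates, assuming +z forward, +x right, +y up.
function cartesianToSpherical(x: number, y: number, z: number): Spherical {
  const distance = Math.sqrt(x * x + y * y + z * z);
  const azimuth = Math.atan2(x, z);
  const elevation = distance > 0 ? Math.asin(y / distance) : 0;
  return { azimuth, elevation, distance };
}

// Simple distance model: full gain inside refDistance, inverse-distance
// rolloff beyond it (similar in spirit to common game-audio attenuation curves).
function distanceGain(distance: number, refDistance = 1): number {
  return distance <= refDistance ? 1 : refDistance / distance;
}
```

For example, a source one unit ahead and one unit to the right yields an azimuth of 45 degrees and a distance of √2, and a source two units away is attenuated to half gain under this model.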

 
 

Saturday, October 19, 9:30 am — 11:00 am (Room 1E08)

Broadcast and Streaming Media: B10 - Technology and Storytelling: How Can We Best Use the Tools Available to Tell Our Stories?

Panelists:
Butch D'Ambrosio, Manual SFX
Robert Fass, Voice Talent
Bill Rogers, Voice Talent
David Shinn, SueMedia Productions - Carle Place, NY, USA
Sue Zizza, SueMedia Productions - Carle Place, NY, USA

Abstract:
This session will showcase three examples of how the choices we make around technology, and the way we use it, affect the storytelling process for all entertainment media, with on-site demonstrations by Sue Zizza and David Shinn of SueMedia Productions.

1) Microphones and the Voice in Storytelling. Whether producing an audiobook or narration for a film or game, you want your talent to sound right for the story. This session will begin by looking at how we select microphones for voice talent. Two voice actors will demonstrate how working with different microphones affects their performances.

2) Sound Effects: Studio vs. On-Location Recordings. Sound effects enhance the storytelling process by helping to create location, specific action, emotion, and more. Do you have to create every sound effect your project needs, or can you combine already-recorded elements with studio-produced sound effects (Foley) and on-location effects? And what are some tips and tricks for recording sound design elements?

3) Digital Editing and Mixing. How can you better manage multiple voice, sound effect, and music elements into "stems," or sub-mixes, for greater control over the final mix, and how can you integrate plug-ins for mastering?

 
 

Saturday, October 19, 10:15 am — 11:15 am (Room 1E10)

Game Audio: G8 - Audio Shorts

Presenters:
D. Chadd Portwine, Vicarious Visions
Stephen Harwood, Jr., Education Working Group Chair; IASIG - New York, NY, USA
Jason Kanter, Avalanche Studios

Abstract:
Three presenters enter. No presenters leave. 20 minutes each to serve up an in-depth look at topics in sound design that matter most to them. Q&A to follow.

Shorty #1: Follow the Sound of My Voice: A Localization Retrospective—Follow a VO line as it travels through the localization process for Skylanders: Swap Force. We see how a movie screenplay written in English becomes 150,000 .wav files in more than ten languages. Screenshots from lip-sync, special-effects, surround sound, and game-mix projects will be viewed and discussed.

Shorty #2: In-DAW Prototyping: WYSIWYG Approval and Delivery—Armed with video capture of gameplay, a content creator can develop sounds and music, as well as their in-game behavior, all without leaving the comfort of their favorite DAW. The workflow demonstrated will provide maximum protection against costly communication breakdowns, e.g., false-positive client approval and errant implementation.

Shorty #3: My Favorite Plugin!

 
 

Saturday, October 19, 11:30 am — 1:00 pm (Room 1E10)

Game Audio: G9 - Audio on Web—Overview and Application

Presenters:
Jan Linden, Google - Mountain View, CA, USA
Jory K. Prum, studio.jory.org - Fairfax, CA, USA

Abstract:
In the stampede to replace proprietary web browser plug-ins with a patchwork of open standards collectively known as HTML5, audio was once the largest gap in capability. In the past 18 months, however, great strides have been made to close this gap: browser support is nearly ubiquitous (with only one major hold-out), the standards body is marching toward completion of the first publication of the Web Audio API, and progress is being made in drafting and implementing the Web MIDI API, too. Developers are clearly excited, and interesting, advanced uses of the technology have been plentiful. This session will look at where we've come since last year's convention: discussing browser and codec support, spotlighting examples from developers across the globe (Infinite Jukebox, Step Daddy, BBC Radiophonic Workshop, Chrome Racer), showing how easy it is to implement sound in a web browser with the Web Audio API, and exploring a few of the many libraries developers have created to make implementation even easier (Gibberish, Tuna, component.fm).
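As a small taste of the arithmetic behind the Web Audio API, the sketch below computes the equal-power pan curve that the specification defines for mono sources in StereoPannerNode. The function itself is plain math (so it runs outside a browser); `equalPowerPan` is a hypothetical helper name, not part of the API, and in a real page its outputs would typically drive two GainNode.gain parameters.

```typescript
// Equal-power stereo pan gains for a mono source, matching the curve the
// Web Audio API spec defines for StereoPannerNode. pan is in [-1, 1].
function equalPowerPan(pan: number): { left: number; right: number } {
  // Map pan from [-1, 1] to an angle in [0, PI/2].
  const theta = ((pan + 1) / 2) * (Math.PI / 2);
  // cos/sin keep total power (left^2 + right^2) constant at 1.
  return { left: Math.cos(theta), right: Math.sin(theta) };
}
```

At center pan both gains are √2/2 rather than 0.5, which is what keeps perceived loudness steady as a sound sweeps across the stereo field.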

 
 

Saturday, October 19, 2:15 pm — 3:15 pm (Room 1E10)


Game Audio: G10 - Game Audio Breakthroughs for HTML5 and Mobile

Presenter:
Garrett Nantz, Luxurious Animals - New York, NY, USA

Abstract:
The common practice is to add sound design only once game development is almost complete. We will show you a better way: using sound design at the beginning of a project as an ideation tool to inform the design and development of games.

Using the award-winning game Lux Ahoy (www.luxahoy.com), we will take a behind-the-scenes look at the process of creating audio experiences for HTML5 and Android. Topics covered will include audio workflows, tricks and techniques to combat platform and browser sound issues, creating memorable sound effects, binaural 3-D sound, audio loop creation, music sourcing, and coding libraries.

 
 

Saturday, October 19, 3:00 pm — 4:30 pm (1EFoyer)

Poster: P15 - Applications in Audio—Part I

P15-1 An Audio Game App Using Interactive Movement Sonification for Targeted Posture Control
Daniel Avissar, University of Miami - Coral Gables, FL, USA; Colby N. Leider, University of Miami - Coral Gables, FL, USA; Christopher Bennett, University of Miami - Coral Gables, FL, USA; Oygo Sound LLC - Miami, FL, USA; Robert Gailey, University of Miami - Coral Gables, FL, USA
Interactive movement sonification has been gaining validity as a technique for biofeedback and auditory data mining in research and development for gaming, sports, and physiotherapy. Naturally, the harvesting of kinematic data in recent years has been driven by the increased availability of portable, high-precision sensing technologies, such as smartphones, and dynamic real-time programming environments, such as Max/MSP. Whereas the overlap of motor-skill coordination and acoustic events has been a staple of musical pedagogy, musicians and music engineers have been surprisingly less involved than biomechanical, electrical, and computer engineers in research in these fields. Thus, this paper proposes a prototype for an accessible virtual gaming interface that uses music and pitch training as positive reinforcement in the accomplishment of target postures.
Convention Paper 8995

P15-2 Evaluation of the SMPTE X-Curve Based on a Survey of Re-Recording Mixers
Linda A. Gedemer, University of Salford - Salford, UK; Harman International - Northridge, CA, USA
Cinema calibration methods, which include targeted equalization curves for both dub stages and cinemas, are currently used to ensure an accurate translation of a film's sound track from dub stage to cinema. In recent years, there has been an effort to reexamine how cinemas and dub-stages are calibrated with respect to preferred or standardized room response curves. Most notable is the work currently underway reviewing the SMPTE standard ST202:2010 "For Motion-Pictures - Dubbing Stages (Mixing Rooms), Screening Rooms and Indoor Theaters -B-Chain Electroacoustic Response." There are both scientific and anecdotal reasons to question the effectiveness of the SMPTE standard in its current form. A survey of re-recording mixers was undertaken in an effort to better understand the efficaciousness of the SMPTE standard from the users' point of view.
Convention Paper 8996

P15-3 An Objective Comparison of Stereo Recording Techniques through the Use of Subjective Listener Preference Ratings
Wei Lim, University of Michigan - Ann Arbor, MI, USA
Stereo microphone techniques offer audio engineers the ability to capture a soundscape that approximates natural hearing. To illustrate the differences between six common stereo microphone techniques, namely XY, Blumlein, ORTF, NOS, AB, and Faulkner, I asked 12 study participants to rate recordings of a Yamaha Disklavier piano. I examined the inter-rater correlation between subjects and found a preferential trend toward near-coincident techniques. Further evaluation showed a preference for clarity over spatial content in a recording. Subjects did not find that wider microphone placements produced more spacious-sounding recordings. Using this information, the paper also discusses the need to re-evaluate how microphone techniques are typically categorized by the distance between microphones.
Convention Paper 8997

P15-4 Tampering Detection of Digital Recordings Using Electric Network Frequency and Phase Angle
Jidong Chai, University of Tennessee - Knoxville, TN, USA; Yuming Liu, Electrical Power Research Institute, Chongqing Electric Power Corp. - Chongqing, China; Zhiyong Yuan, China Southern Power Grid - Guangzhou, China; Richard W. Conners, Virginia Polytechnic Institute and State University - Blacksburg, VA, USA; Yilu Liu, University of Tennessee - Knoxville, TN, USA; Oak Ridge National Laboratory
In the field of forensic authentication of digital audio recordings, the ENF (electric network frequency) criterion is one of the available tools and has shown promising results. An important task in forensic authentication is to determine whether a recording has been tampered with. Previous work detected tampering by looking for discontinuities in either the ENF or the phase angle extracted from the recording; however, using frequency or phase angle alone may not be sufficient. In this paper both frequency and phase angle, together with a corresponding reference database, are used to detect tampering in digital recordings, resulting in more reliable detection. The paper briefly introduces the Frequency Monitoring Network (FNET) at UTK and its frequency and phase-angle reference database. A short-time Fourier transform (STFT) is employed to estimate the ENF and phase angle embedded in audio files. A procedure for using the ENF criterion to detect tampering is proposed, ranging from signal preprocessing, ENF and phase-angle estimation, and frequency-database matching to tampering detection. Results show that utilizing frequency and phase angle jointly can improve the reliability of tampering detection in the authentication of digital recordings.
Convention Paper 8998
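To illustrate the frequency-estimation step this abstract describes, here is a simplified sketch that scans single-bin windowed-DFT magnitudes near the nominal mains frequency and returns the peak. It is a toy stand-in for the paper's STFT pipeline (no preprocessing, phase tracking, or database matching), and the function name and default parameters are assumptions for illustration only.

```typescript
// Estimate the dominant frequency near nominalHz in one analysis frame by
// scanning Hann-windowed single-bin DFT magnitudes on a fine frequency grid.
function estimateEnf(
  frame: number[],
  sampleRate: number,
  nominalHz = 60,
  searchHz = 0.5,
  stepHz = 0.005
): number {
  let bestF = nominalHz;
  let bestMag = -1;
  for (let f = nominalHz - searchHz; f <= nominalHz + searchHz; f += stepHz) {
    let re = 0, im = 0;
    for (let n = 0; n < frame.length; n++) {
      // Hann window reduces leakage from neighboring spectral content.
      const w = 0.5 - 0.5 * Math.cos((2 * Math.PI * n) / (frame.length - 1));
      const phase = (2 * Math.PI * f * n) / sampleRate;
      re += w * frame[n] * Math.cos(phase);
      im -= w * frame[n] * Math.sin(phase);
    }
    const mag = re * re + im * im;
    if (mag > bestMag) { bestMag = mag; bestF = f; }
  }
  return bestF;
}
```

Run over successive frames, such an estimator produces the ENF track whose discontinuities (together with the phase angle, per the paper) flag possible edits.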

P15-5 Portable Speech Encryption Based Anti-Tapping Device
C. R. Suthikshn Kumar, Defence Institute of Advanced Technology (DIAT) - Girinagar, Pune, India
Telephone tapping is a major concern nowadays, and there is a need for a portable device, attachable to a mobile phone, that can prevent it. Users want to encrypt their voice during conversation, mainly for privacy; an encrypted conversation prevents tapping of mobile calls, since the network operator may tap calls for various reasons. In this paper we propose a portable device that can be attached to a mobile or landline phone and serves as an anti-tapping device. The device encrypts speech and decrypts encrypted speech in real time. The main idea is that speech is unintelligible when encrypted.
Convention Paper 8999

P15-6 Personalized Audio Systems—A Bayesian Approach
Jens Brehm Nielsen, Technical University of Denmark - Kongens Lyngby, Denmark; Widex A/S - Lynge, Denmark; Bjørn Sand Jensen, Technical University of Denmark - Kongens Lyngby, Denmark; Toke Jansen Hansen, Technical University of Denmark - Kongens Lyngby, Denmark; Jan Larsen, Technical University of Denmark - Kgs. Lyngby, Denmark
Modern audio systems are typically equipped with several user-adjustable parameters unfamiliar to most listeners. To obtain the best possible system setting, the listener is forced into non-trivial multi-parameter optimization with respect to the listener's own objective and preference. To address this, the present paper presents a general interactive framework for robust personalization of such audio systems. The framework builds on Bayesian Gaussian process regression in which the belief about the user's objective function is updated sequentially. The parameter setting to be evaluated in a given trial is carefully selected by sequential experimental design based on the belief. A Gaussian process model is proposed that incorporates assumed correlation among particular parameters, which provides better modeling capabilities compared to a standard model. A five-band constant-Q equalizer is considered for demonstration purposes, in which the equalizer parameters are optimized for each individual using the proposed framework. Twelve test subjects obtain a personalized setting with the framework, and these settings are significantly preferred to those obtained with random experimentation.
Convention Paper 9000
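To give a flavor of the Bayesian ingredient in this abstract, here is a toy one-dimensional Gaussian process regression with a squared-exponential kernel, predicting a hypothetical preference score over a single equalizer parameter. This is not the authors' implementation: the real framework adds sequential experimental design and correlated multi-band parameters, and every name and default below is illustrative. Only the posterior mean is shown.

```typescript
// Squared-exponential (RBF) covariance between two parameter settings.
function rbf(a: number, b: number, lengthScale = 1): number {
  const d = a - b;
  return Math.exp(-(d * d) / (2 * lengthScale * lengthScale));
}

// Solve K x = y for small systems via Gauss-Jordan elimination with
// partial pivoting (adequate for a handful of listening-trial points).
function solve(K: number[][], y: number[]): number[] {
  const n = y.length;
  const A = K.map((row, i) => [...row, y[i]]);
  for (let col = 0; col < n; col++) {
    let pivot = col;
    for (let r = col + 1; r < n; r++)
      if (Math.abs(A[r][col]) > Math.abs(A[pivot][col])) pivot = r;
    [A[col], A[pivot]] = [A[pivot], A[col]];
    for (let r = 0; r < n; r++) {
      if (r === col) continue;
      const f = A[r][col] / A[col][col];
      for (let c = col; c <= n; c++) A[r][c] -= f * A[col][c];
    }
  }
  return A.map((row, i) => row[n] / row[i][i]);
}

// GP posterior mean at xStar given noisy preference observations (xs, ys):
// mean = k(xStar, xs) * (K + noise*I)^-1 * ys.
function gpPosteriorMean(
  xs: number[], ys: number[], xStar: number, noise = 1e-6
): number {
  const K = xs.map((xi, i) =>
    xs.map((xj, j) => rbf(xi, xj) + (i === j ? noise : 0))
  );
  const alpha = solve(K, ys);
  let mean = 0;
  for (let i = 0; i < xs.length; i++) mean += rbf(xStar, xs[i]) * alpha[i];
  return mean;
}
```

In the sequential setting the paper describes, the model's uncertainty (not shown here) is what guides which parameter setting to present in the next listening trial.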

 
 

Saturday, October 19, 3:15 pm — 4:45 pm (Room 1E10)

Game Audio: G11 - Learning from the Future

Presenters:
Scott Selfon, Microsoft - Redmond, WA, USA
Garry Taylor, Sony Computer Entertainment Europe - Cambridge, UK

Abstract:
With the “next generation” of game consoles soon to be this generation, what have we learned from games already in development? Is it really just “more of everything” or are other trends emerging as the defining factors for game audio production, implementation, and integration? In this panel we will discuss patterns and practices that are changing, accelerating, or declining for the titles of the next year and the next decade.

 
 

Sunday, October 20, 9:00 am — 11:00 am (Room 1E10)

Game Audio: G12 - Professional Game Audio—Opportunities In The Mobile Space

Chair:
Stephen Harwood, Jr., Education Working Group Chair; IASIG - New York, NY, USA
Presenters:
Andrew Aversa, Drexel University; Impact Soundworks
Steve Horowitz, The Code International Inc. - San Francisco, CA, USA
Jory K. Prum, studio.jory.org - Fairfax, CA, USA
Michael Sweet, Berklee College of Music - Boston, MA, USA
Gina Zdanowicz, Serial Lab Studios - NJ, USA

Abstract:
In addition to sound design, composition, and production supervision, game audio requires skill sets that are rarely encountered elsewhere, including interactive audio programming and implementation. This broad array of work types provides for an equally broad range of career opportunities. Whatever your background and area of specialized expertise might be, there is room for you in this rapidly growing industry. In this session a panel of accomplished industry veterans will discuss how to begin and develop a successful career in game audio with a focus on the new opportunities available in the booming mobile gaming and web apps marketplace. Audience members will take away a comprehensive understanding of the many opportunities available to audio professionals in the video game industry, as well as valuable suggestions and insights into how to land that first gig.

 
 

Sunday, October 20, 12:30 pm — 1:30 pm (Room 1E11)

Workshop: W28 - Practical Techniques for Recording Ambience in Surround

Chair:
Helmut Wittek, SCHOEPS GmbH - Karlsruhe, Germany
Panelist:
Michael Williams, Sounds of Scotland - Le Perreux sur Marne, France

Abstract:
In this workshop, microphone techniques for recording ambience in 5.1 surround are presented and discussed in theory and practice. Various simultaneous recordings were made in preparation for the workshop.
These audio samples, covering 6 different techniques in 5 different venues, are well suited to demonstrating the principal differences between the techniques and their perceptual consequences for immersion, localization, sound color, stability, etc. The differences are not specific to ambience or to 5.1 surround: they illustrate the basic differences between level-difference and time-difference stereophony, and they confirm theories on the correlation between channels and its consequences for the perceived spatial image.

During the workshop, the audio samples are compared in an A/B manner and the differences discussed. The audio samples and the full documentation can be downloaded for free from www.hauptmikrofon.de. They are particularly useful in education, but also for sound engineers who have to choose an ambience setup in practice.

 
 



EXHIBITION HOURS: October 18th 10am – 6pm; October 19th 10am – 6pm; October 20th 10am – 4pm
REGISTRATION DESK: October 16th 3pm – 7pm; October 17th 8am – 6pm; October 18th 8am – 6pm; October 19th 8am – 6pm; October 20th 8am – 4pm
TECHNICAL PROGRAM: October 17th 9am – 7pm; October 18th 9am – 7pm; October 19th 9am – 7pm; October 20th 9am – 6pm
AES - Audio Engineering Society