AES Los Angeles 2014
Game Audio Track Event Details

Thursday, October 9, 8:15 am — 11:00 am (Off-Site 2)

Technical Tour: TT2 - Sony Computer Entertainment America

Abstract:
A walkthrough of Sony Computer Entertainment America’s new, state-of-the-art audio facilities located within Santa Monica Studios, the game studio behind the hit God of War franchise. Senior Sound Manager Gene Semel and Sound Design Manager David Collins will lead the tour, which includes an explanation of tools, processes, and pipeline, along with samples of the studio's audio work from recent releases. Attendees will need to sign a non-disclosure agreement to enter the studio.

This tour is limited to 25 people.


Thursday, October 9, 9:00 am — 10:30 am (Room 306 AB)


Tutorial: T2 - MPEG-H 3D Audio

Presenter:
Schuyler Quackenbush, Audio Research Labs - Scotch Plains, USA

Abstract:
MPEG-H 3D Audio is the newest MPEG Audio standard. With the move to Ultra-High Definition video and large screens that provide an immersive visual experience, it is compelling to have an equally immersive audio experience. MPEG-H 3D Audio provides both compression and flexible rendering for such immersive audio programs. These can be, for example, 9.1 or 22.2 channel audio programs for presentation on loudspeakers or spatialized for headphones. In addition, programs can include dynamic audio objects or can be Higher Order Ambisonic recordings. The standard makes extensive use of metadata to control the audio presentation and to support user interaction. A very important aspect of 3D Audio is its rendering engine. It is not expected that all consumers will have a 22.2 loudspeaker setup, so the rendering engine is able to adapt the audio program to the loudspeaker configuration of the consumer’s setup. This can include fewer loudspeakers, incorrectly placed loudspeakers, or non-standard loudspeaker configurations. The tutorial will review example scenarios in which immersive audio can be enjoyed (home theater, tablet TV, smartphone TV); give an overview of the technology; and look at compression and rendering performance.
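
At its simplest, the format-conversion step of such a renderer can be pictured as a gain matrix that maps the transmitted channel bed onto whatever loudspeakers are present. A minimal sketch of that idea in Python (the 5.1-to-stereo coefficients below are common ITU-style values chosen for illustration, not MPEG-H's actual format converter):

```python
import numpy as np

# Hypothetical downmix of a 5.1 bed (order: L, R, C, LFE, Ls, Rs) to
# stereo; the 0.7071 (-3 dB) coefficients are illustrative assumptions,
# not taken from the MPEG-H specification.
DOWNMIX = np.array([
    [1.0, 0.0, 0.7071, 0.0, 0.7071, 0.0],     # stereo left
    [0.0, 1.0, 0.7071, 0.0, 0.0, 0.7071],     # stereo right
])

def render(channels: np.ndarray) -> np.ndarray:
    """Map a (6, n_samples) 5.1 program to a (2, n_samples) stereo feed."""
    return DOWNMIX @ channels

program = np.random.randn(6, 48000) * 0.1      # one second of test signal
stereo = render(program)
```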

This session is presented in association with the AES Technical Committee on Coding of Audio Signals.


Thursday, October 9, 11:15 am — 12:45 pm (Room 409 AB)


Tutorial: T5 - Prototyping Audio Algorithms as VST Plugins

Presenter:
Edward Stein, DTS, Inc. - Los Gatos, CA, USA

Abstract:
Professional algorithm designers, hobbyist programmers with a passion for audio, and experimentalist musicians often share a common challenge: “How do I hear my idea come to life?” Forums are full of posts with subjects like “I want to…, where do I start?” Depending on your budget, very comprehensive tools are available, with various trade-offs between ease and control. This tutorial looks at a powerful open-source C++ framework, JUCE, for rapidly prototyping your ideas as VSTs (and other plugin formats) with a real-time graphical user interface. The focus will be on kick-starting beginners with a limited but working knowledge of C++. Topics will include: good practice for building a highly reusable C++ audio class library, basics of real-time audio plugins, quickly setting up and working with JUCE projects, real-time parameters (GUI, MIDI control, presets, etc.), and troubleshooting tips for when things don’t go as planned. By the end, you should be comfortable building your own VST plugins and be able to move forward focusing on what you care about: how it sounds.


Thursday, October 9, 11:30 am — 12:30 pm (Room 406 AB)

Game Audio: G1 - Sound Business: Strategies and Fundamentals in Game Audio Contracts

Presenter:
Keith Arem, PCB Productions and PCB Entertainment

Abstract:
Understand how new technologies and delivery methods can affect ownership, residuals, and copyright. Actors, musicians, composers, sound engineers, and directors can discover new opportunities in the expanding frontier of games. PCB President Keith Arem shares his experiences and insight into how games are transforming the way the entertainment industry works with sound.


Thursday, October 9, 11:45 am — 12:15 pm (Room 309)


Tutorial: T6 - Produce 3D Audio for Music, Film, and Game Applications

Presenter:
Tom Ammermann, New Audio Technology GmbH - Hamburg, Germany

Abstract:
Looking beyond individual formats and applications, but with eventual distribution in mind, this session will present production strategies and tools for 3D audio. Complete sessions from different genres will be opened and their setups explained. Current and future end-customer applications and distribution possibilities will also be shown.


Thursday, October 9, 2:15 pm — 3:15 pm (Room 408 B)


Game Audio: G2 - Effective Interactive Music Systems: The Nuts and Bolts of Dynamic Musical Content

Presenter:
Winifred Phillips, Generations Productions LLC - New York City Metropolitan Area

Abstract:
Interactive methodologies have profoundly impacted the way that music is recorded, mixed and integrated in video games. From horizontal resequencing and vertical layering techniques for the interactive implementation of music recordings, to MIDI and generative systems for the manipulation of music data, the structure of game music poses serious challenges both for the composer and for the game audio engineer. This talk will examine the procedures for designing interactive music models and implementing them effectively into video games. The talk will include comparisons between additive and interchange systems in vertical layering, the lessons that can be learned from conventional stem mixing, the use of markers for switching between segments, and how to disassemble a traditionally composed piece of music for use within an interactive system.
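
To make the vertical-layering idea concrete, here is a minimal sketch of the general technique (an illustration only, not any particular engine's implementation): stems ordered from calm to intense are faded in as a single game-driven intensity parameter rises.

```python
import numpy as np

def mix_vertical_layers(layers, intensity):
    """Vertical-layering sketch: 'layers' is a list of equal-length stems
    ordered from calm to intense; 'intensity' is a game parameter in [0, 1].
    Each layer fades in over its own slice of the intensity range."""
    n = len(layers)
    out = np.zeros_like(layers[0])
    for i, stem in enumerate(layers):
        lo = i / n                               # intensity where this layer starts
        gain = np.clip((intensity - lo) * n, 0.0, 1.0)
        out += gain * stem
    return out

# Example: three synthetic stems at game intensity 0.6 -> base layer at
# full level, second layer mostly in, third layer still silent.
stems = [np.random.randn(48000) * 0.05 for _ in range(3)]
mixed = mix_vertical_layers(stems, intensity=0.6)
```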


Thursday, October 9, 3:15 pm — 4:45 pm (Room 408 B)

Game Audio: G3 - Sound Design & Mix: Challenges and Solutions—Games, Film, Advertisement

Presenters:
Charles Deenen, Source Sound Inc. - Los Angeles, CA, USA
John Fasal
Tim Gedemer, Source Sound, Inc. - Woodland Hills, CA, USA; Calliope Music Design, Inc.
Csaba Wagner, Freelance Sound Designer - Hollywood, CA, USA
Bryan Watkins, Warner Brothers Post Production Services Game Audio - Burbank, CA, USA

Abstract:
Different media in today’s marketplace require different (immersive) sound. Each media format has its own level requirements, channel distinctions, hidden technical challenges to overcome, and more. This panel will attempt to demonstrate, through example and discussion, how audio production and post production techniques can (and should) be effectively tailored to their respective visual release media formats, including Games, Mobile, Trailers, Commercials, and Film.


Friday, October 10, 9:00 am — 11:00 am (Room 408 B)

Game Audio: G4 - Next-Gen Game Audio Education

Chair:
Steve Horowitz, Game Audio Institute - San Francisco, CA, USA; Nickelodeon Digital
Panelists:
Dale Everingham, Video Symphony Director of Audio Programs - Burbank, CA, USA
Scott Looney, Academy of Art University - San Francisco, CA, USA
Leonard J. Paul, School of Video Game Audio - Vancouver, Canada
Stephan Schütze, Sound Librarian - Melbourne, Australia
Michael Sweet, Berklee College of Music - Boston, MA, USA

Abstract:
Game audio education programs are starting to take root and sprout up all over the world. Game audio education is becoming a hot topic. What are some of the latest training programs out there? What are the pros and cons of a degree program versus just getting out there on my own? I am already a teacher; how can I start a game audio program at my current school? Good questions! This panel brings together entrepreneurs from some of the top private instructional institutions to discuss the latest and greatest educational models in audio for interactive media. Attendees will get a fantastic overview of what is being offered inside and outside of the traditional education system. This is a must for students and teachers alike who are trying to navigate the waters and steer a path toward programs that are right for them in the shifting tides of audio for games and interactive media.


Friday, October 10, 9:00 am — 11:30 am (Room 309)

Paper Session: P7 - Cinema Sound, Recording and Production

Chair:
Scott Levine, Skywalker Sound - Marin County, CA, USA; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada

P7-1 Particle Systems for Creating Highly Complex Sound Design Content
Nuno Fonseca, ESTG/CIIC, Polytechnic Institute of Leiria - Leiria, Portugal
Even with current audio technology, many sound design tasks present practical constraints in terms of layering sounds, creating sound variations, fragmenting sound, and ensuring spatial distribution, especially when trying to handle highly complex scenarios with a significant number of audio sources. This paper presents the use of particle systems and virtual microphones as a new approach to sound design, allowing the mixing of thousands or even millions of sound sources without laborious work, providing true coherence between sound and space, and supporting several surround formats, Ambisonics, binaural, and even partial Dolby Atmos output. By controlling a particle system instead of individual sound sources, a high number of sounds can be easily spread over a virtual space. By adding movement or random audio effects, even complex scenarios can be created.
Convention Paper 9132
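
The core idea is simple to sketch. In the toy version below (an illustration of the general approach, not Fonseca's implementation), particles are scattered in a plane and each receives a distance-based gain and a constant-power stereo pan relative to a virtual listener; each particle's source audio would be scaled by these gains and summed into the mix.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_pan_gains(n_particles, mic_pos=(0.0, 0.0)):
    """Scatter sound-emitting particles in a 2-D plane and derive a
    distance-based gain and constant-power stereo pan for each, relative
    to a virtual listener at mic_pos. All ranges are illustrative."""
    pos = rng.uniform(-10.0, 10.0, size=(n_particles, 2))        # x, y in meters
    rel = pos - np.asarray(mic_pos)
    dist = np.linalg.norm(rel, axis=1)
    gain = 1.0 / np.maximum(dist, 1.0)                           # 1/r law, clamped
    azimuth = np.arctan2(rel[:, 0], rel[:, 1])                   # angle to listener
    pan = (np.clip(azimuth / (np.pi / 2), -1.0, 1.0) + 1) / 2    # 0 = left, 1 = right
    left = gain * np.cos(pan * np.pi / 2)                        # constant-power law
    right = gain * np.sin(pan * np.pi / 2)
    return left, right

# Per-particle L/R gains for a swarm of 10,000 sources:
left, right = particle_pan_gains(10_000)
```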

P7-2 Stage Metaphor Mixing on a Multi-Touch Tablet Device
Steven Gelineck, Aalborg University Copenhagen - Copenhagen, Denmark; Dannie Korsgaard, Aalborg University - Copenhagen, Denmark
This paper presents a tablet-based interface (the Music Mixing Surface) for supporting a more natural user experience while mixing music. It focuses on the so-called stage metaphor control scheme, where audio channels are represented by virtual widgets on a virtual stage. In previous research the interface was developed iteratively through several evaluation sessions with professional users on different platforms. The iteration presented here has been developed especially for the mobile tablet platform and explores this format for music mixing in both professional and casual settings. The paper first discusses various contexts in which the tablet platform might be optimal for music mixing. It then describes the overall design of the mixing interface (focusing on the stage metaphor), after which the iOS implementation is briefly described. Finally, the interface is evaluated in a qualitative user study comparing it to two existing tablet solutions. Results are presented and discussed, focusing on how the evaluated interfaces invite four different forms of exploration of the mix and on the consequences this has in a mobile mixing context.
Convention Paper 9133
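
The essence of the stage metaphor is a mapping from widget position to mixing parameters. A minimal sketch of one plausible mapping (an assumption for illustration, not the paper's exact design): left/right position sets pan, and distance from the front of the stage sets level.

```python
def widget_to_mix(x, y, stage_width=1.0, stage_depth=1.0):
    """Stage-metaphor sketch (illustrative mapping, not the paper's):
    a channel widget's left/right position sets pan, and its distance
    from the front of the stage sets level."""
    pan = 2.0 * (x / stage_width) - 1.0      # -1 = hard left, +1 = hard right
    gain_db = -24.0 * (y / stage_depth)      # front of stage = 0 dB, rear = -24 dB
    gain = 10.0 ** (gain_db / 20.0)
    return pan, gain

# A widget halfway across and a quarter of the way upstage:
pan, gain = widget_to_mix(0.5, 0.25)         # pan = 0.0 (center), gain ~= 0.5
```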

P7-3 The Duplex Panner: Comparative Testing and Applications of an Enhanced Stereo Panning Technique for Headphone-Reproduced Commercial Music
Samuel Nacach, New York University - New York, NY, USA
As a result of new technology advances, consumers primarily interact with recorded music on the go through headphones. Yet music is primarily mixed on stereo loudspeaker systems, which produce crosstalk signals that are absent in headphone reproduction. Consequently, the audio engineer's intended sound image collapses over headphones. To address this, the work presented in this paper examines existing 3D audio techniques (primarily binaural audio and Ambiophonics) and enhances them to develop a novel and improved mixing technique, the Duplex Panner, for headphone-reproduced commercial music. Through subjective experiments designed for two groups, the Duplex Panner is compared to conventional stereo panning to determine what the advantages are, if any.
Convention Paper 9134
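
The crosstalk problem the paper starts from is easy to demonstrate. The sketch below illustrates only that underlying idea (it is not the Duplex Panner algorithm, which the paper develops): each channel is fed to the opposite ear delayed, attenuated, and low-pass filtered, roughly as a loudspeaker pair would do. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def add_crosstalk(left, right, fs=48000, delay_ms=0.25, atten_db=-3.0):
    """Reintroduce, for headphone listening, the interaural crosstalk a
    loudspeaker pair creates: each channel reaches the opposite ear
    delayed, attenuated, and crudely low-pass filtered (head shadowing)."""
    n = int(fs * delay_ms / 1000.0)
    g = 10.0 ** (atten_db / 20.0)

    def shadow(x):
        delayed = np.concatenate([np.zeros(n), x])[: len(x)]
        return lfilter([0.5, 0.5], [1.0], delayed) * g   # one-zero low-pass

    return left + shadow(right), right + shadow(left)

# Headphone preview of what a stereo mix would do over loudspeakers:
l = np.random.randn(48000) * 0.1
r = np.random.randn(48000) * 0.1
hl, hr = add_crosstalk(l, r)
```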

P7-4 The Role of Acoustic Condition on High Frequency Preference
Richard King, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada; Brett Leonard, University of Nebraska at Omaha - Omaha, NE, USA; McGill University - Montreal, Quebec, Canada; Stuart Bremner, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, QC, Canada; Grzegorz Sikora, Bang & Olufsen Deutschland GmbH - Pullach, Germany
Subjective preference for high frequency content in music program has shown a wide variance in baseline testing involving expert listeners. The same well-trained subjects are retested for consistency in setting a high frequency shelf equalizer to a preferred level under varying acoustic conditions. Double-blind testing indicates that lateral energy significantly influences high frequency preference. Furthermore, subject polling indicates that blind preference of acoustic condition is inversely related to optimal consistency when performing high frequency equalization tasks.
Convention Paper 9135

P7-5 Listener Preferences for Analog and Digital Summing Based on Music Genre
Eric Tarr, Belmont University - Nashville, TN, USA; Jane Howard, Belmont University - Nashville, TN, USA; Benjamin Stager, Belmont University - Nashville, TN, USA
The summation of multiple audio signals can be accomplished using digital or analog technologies. Digital summing and analog summing are not identical processes and, therefore, produce different results. In this study digital summing and analog summing were performed separately on the audio signals of three different recordings of music. These recordings represented three genres of music: classical, pop/country, and heavy rock. Twenty-one listeners participated in a preference test comparing digital summing to analog summing. Results indicated that listeners preferred one type of summing to the other; this preference was dependent on the genre of music.
Convention Paper 9136


Friday, October 10, 11:00 am — 1:00 pm (Room 408 B)

Game Audio: G5 - Audio Middleware for the Next Generation

Presenters:
Scott Looney, Academy of Art University - San Francisco, CA, USA
Steve Horowitz, Game Audio Institute - San Francisco, CA, USA; Nickelodeon Digital

Abstract:
Until quite recently, very little audio middleware capability was available in popular game engines without the need for scripting. Now there are at least four robust middleware solutions to choose from. In this presentation and demonstration we will bring you up to speed on the latest and greatest audio middleware solutions, with an overview of FMOD, Wwise, Fabric, and Master Audio. We will compare and contrast their features, strengths, and weaknesses, as well as discuss elements of audio workflow and how sound designers can go about deploying these full-featured solutions in a number of different game engines.


Friday, October 10, 12:30 pm — 2:00 pm (Room 409 AB)

Student / Career: SPARS Speed Counseling with Experts—Mentoring Answers for Your Career

Abstract:
This event is especially suited for students, recent graduates, young professionals, and those interested in career advice. Hosted by SPARS in cooperation with the AES Education Committee and G.A.N.G., career-related Q&A sessions will be offered to participants in a speed group mentoring format. A dozen students will interact with 4–5 working professionals in specific audio engineering fields or categories every 20 minutes. Audio engineering fields/categories include gaming, live sound/live recording, audio manufacturing, mastering, sound for picture, and studio production. Listed mentors are subject to change.

Mentors include: Tom Salta, Dren McDonald, Chanel Summers, Danny Leake, Juan R Garza, Craig Doubet, TW Blackmon, Lorita de la Cerna, Eric Johnson, Jeri Palumbo, Rick Senechal, Geoff Gray, David Rideau, Erik Zobler, Chuck Zwicky, Sylvia Massy, Mark Rubel, Anthony Schultz, Pat McMakin, Lisa Chamblee, David Glasser, Bruce Maddocks, Andrew Mendelson


Friday, October 10, 2:00 pm — 3:30 pm (Room 306 AB)

Game Audio: G6 - Diablo III: Reaper of Souls, The Devil Is In The Details

Presenters:
Derek Duke, Blizzard Entertainment
Kris Giampa, Blizzard Entertainment
Seph Lawrence, Blizzard Entertainment - Lake Forest, CA, USA
David Rovin, Blizzard Entertainment
Andrea Toyias, Blizzard Entertainment

Abstract:
Look, listen, and learn from the audio team behind Diablo III: Reaper of Souls as they show us the world of game audio development from multiple perspectives—Sound Design, Music, and VO. Hear from some of the audio team members how they approached the task of bringing sound to the next installment in the Diablo franchise and how it has evolved from the original Diablo III release. Attendees will also get a peek at the Reaper of Souls cinematic.


Friday, October 10, 3:45 pm — 5:15 pm (Room 408 B)

Game Audio: G7 - Dynamic Mixing for Games

Presenter:
Simon Ashby, Audiokinetic

Abstract:
Given the linear nature of film and music, audio mixing for those media is easily controlled and predictable. Mixing game audio brings with it many challenges, including performance constraints and the non-linear, event-based triggering of in-game sounds. Using practical audio examples from real games, this session will demonstrate the many benefits that dynamic audio mixing can have on modern sound design.
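
A staple example of such a dynamic mixing rule is side-chain ducking. The sketch below is a generic illustration, not tied to any middleware, with all parameter values chosen as assumptions: the music bus is attenuated whenever the dialogue bus is active, with a fast attack and a slow release.

```python
import numpy as np

def duck(music, dialogue, fs=48000, threshold=0.01, depth_db=-12.0,
         attack_ms=10.0, release_ms=250.0):
    """Side-chain ducking sketch: attenuate the music bus while the
    dialogue bus is active, with smoothed attack and release."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    duck_gain = 10.0 ** (depth_db / 20.0)
    env, g = 0.0, 1.0
    gain = np.empty(len(music))
    for i, d in enumerate(np.abs(dialogue)):
        env = max(d, env * a_rel)                 # peak envelope of dialogue
        target = duck_gain if env > threshold else 1.0
        a = a_att if target < g else a_rel        # duck fast, recover slowly
        g = a * g + (1.0 - a) * target
        gain[i] = g
    return music * gain
```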


Friday, October 10, 5:30 pm — 6:30 pm (Room 408 B)

Game Audio: G8 - New Techniques for Zero-Latency Convolution

Presenter:
Frederick Umminger, Sony Computer Entertainment America

Abstract:
Low latency is critical for audio in virtual reality applications, creating a need for low-latency yet highly efficient convolution for HRIR (head-related impulse response) filtering and reverberation. Ordinary FFT-accelerated block-based convolution must trade latency for efficiency: lowering the CPU load requires larger FFT block sizes, which increase latency. In 1993 William Gardner proposed a method of using non-uniform block sizes to attain low latency and high efficiency simultaneously. In 2008 Jeffrey Hurchalla proposed another method to accomplish this goal. This talk introduces two recent techniques for performing efficient zero-latency convolution.
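
The principle behind all of these methods fits in a few lines. The offline sketch below illustrates only the head/tail split in the spirit of Gardner's method, not either of the new techniques the talk introduces: the first taps of the impulse response are convolved directly in the time domain, contributing output with zero latency, while the long tail is handled by efficient FFT convolution whose result is simply delayed.

```python
import numpy as np
from scipy.signal import fftconvolve

def zero_latency_convolve(x, h, head_len=64):
    """Head/tail split shown offline for clarity. A real-time version
    computes the direct part sample by sample and schedules each FFT
    block so its result is ready exactly when its delay elapses."""
    head, tail = h[:head_len], h[head_len:]
    y = np.pad(np.convolve(x, head), (0, len(tail)))   # zero-latency portion
    if len(tail):
        y[head_len:] += fftconvolve(x, tail)           # delayed FFT portion
    return y

# Sanity check against plain convolution with the full response:
x, h = np.random.randn(4800), np.random.randn(1024)
assert np.allclose(zero_latency_convolve(x, h), np.convolve(x, h))
```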


Friday, October 10, 7:00 pm — 8:00 pm (Room 403 AB)


Special Event: Heyser Lecture

Presenter:
Marty O'Donnell, Marty O'Donnell Music - Seattle, WA, USA

Abstract:
Legendary game audio director and composer Marty O'Donnell will present the Richard C. Heyser Memorial Lecture. Marty is the famed audio director behind the award-winning Halo game series and is responsible for the best-selling game soundtrack of all time. In his talk, entitled "The Ear Doesn’t Blink: Creating Culture With Adaptive Audio," O'Donnell will draw on his unique perspective from games, film, and jingle-writing to share the creative challenges of working in non-linear media such as games.


Saturday, October 11, 9:00 am — 11:00 am (Room 408 B)

Game Audio: G9 - Game Audio Careers 101—How to Jump Start Your Career

Chair:
Steve Horowitz, Game Audio Institute - San Francisco, CA, USA; Nickelodeon Digital
Panelists:
Brennan J. Anderson, Disney Interactive
Stephan Schütze, Sound Librarian - Melbourne, Australia
Richard Warp, Manhattan Producers Alliance - San Francisco, CA; Leapfrog Enterprises Inc - Emeryville, CA, USA
Guy Whitmore, PopCap Games

Abstract:
Everyone wants to work in games; just check out the news. The game industry is on the rise, and the growth curve keeps going up and up. So, what is the best way to get that first gig in audio for games? How can I transfer my existing skills to interactive media? We will take a panel of today’s top creative professionals, from large game studios to indie producers, and ask them what they think you need to know when looking for work in the game industry. So, whether you are already working in the game industry, thinking of the best way to transfer your skills from film, TV, or general music production, or a complete newbie to the industry, this panel is a must!


Saturday, October 11, 11:00 am — 12:30 pm (Room 408 B)


Game Audio: G10 - Back to the Future: Interactive Audio Implementation Trends

Presenter:
Scott Selfon, Microsoft - Redmond, WA, USA

Abstract:
"Next gen” has arrived, with ever increasing technical capabilities in both hardware and software processing. But are these new consoles really just offering the same thing, only more of it? What are the audio frontiers and barriers when so many of the restrictions of the past have been eliminated? And with maturing and increasingly sophisticated audio engine solutions, is game sound programming really now a “solved” problem? Scott reflects on the current state-of-the-art for areas ranging from spatial simulation and acoustic modeling to evolving and dynamic mixing, audio as a feedback mechanism, and highly personalized audio experiences. He’ll use examples from both past and present to highlight the technical achievements that implementers are striving for in making audio not only compelling and realistic but an equal-footing contributor to immersive, engaging, and rewarding gameplay experiences.


Saturday, October 11, 1:00 pm — 2:30 pm (Room 304 AB)

Special Event: The Future Is Now—Mind Controlled Interactive Music

Presenters:
Scott Looney, Academy of Art University - San Francisco, CA, USA
Tim Mullen, Syntrogi Inc. - San Diego, CA, USA; UC San Diego
Richard Warp, Manhattan Producers Alliance - San Francisco, CA; Leapfrog Enterprises Inc - Emeryville, CA, USA

Abstract:
If one thing is clear from the music industry over the last 20 years, it is that consumers are seeking ever-more immersive experiences, and in many ways biofeedback is the "final frontier," where music can be made in reaction to emotions, mood, and more. Whether the feedback comes from autonomic processes (stress or arousal, as in Galvanic Skin Response) or cognitive function (EEG signals from the brain), there is no doubt that these "active input" technologies, which differ from traditional HCI inputs (such as hardware controllers) in their singular correspondence to the individual player, are here to stay. These technologies are already robust enough to be integrated into everything from single interfaces to complete systems.


Saturday, October 11, 1:00 pm — 3:00 pm (Room 306 AB)

Game Audio: G11 - Adventures in Music and Sound Design—The World of Hohokum

Presenters:
Daniel Birczynski, Sony Computer Entertainment America
David Collins, Sony Computer Entertainment America
Mike Niederquell, Sony Computer Entertainment America

Abstract:
Accompanying the vibrant visuals in Hohokum is a lush soundtrack and highly interactive audio environment that brings audio to the forefront of this title. In this workshop, the senior staff of the audio team talks about storytelling through their creative use of sound design and music.


Saturday, October 11, 3:15 pm — 4:45 pm (Room 404 AB)

Game Audio: G12 - Yes, Your Mobile Game Can Have Awesome Audio!

Presenter:
Stephan Schütze, Sound Librarian - Melbourne, Australia

Abstract:
Many developers labor under the fallacy that, because they are working on a mobile game, high-quality audio is not an option: a lack of resources, the need to stay within tight size constraints due to download requirements, and a general idea that mobile means lower quality. Simply put, these ideas are completely wrong. There is a range of tools available that, when utilized properly, support the creation of dynamic, high-quality audio for nearly every platform available.

This presentation will cover two main aspects of this topic:
1. An overview of how audio is a sophisticated tool for communicating with your customers in regards to narrative, tactical information, and feedback.
2. Practical examples of how current tool sets allow for the creation of effective and dynamic audio elements that are incredibly resource-efficient and high quality.


Saturday, October 11, 5:00 pm — 6:00 pm (Room 404 AB)


Tutorial: T19 - Psychoacoustics for Sound Designers

Presenter:
Shaun Farley, Dynamic Interference - Berkeley, CA, USA

Abstract:
This session will explore some of the mechanical and psychological oddities that affect our perception of sound, focusing on the sound designer's perspective. As such, it will be more about identifying the end behaviors of the human hearing system than the underlying reasons for those behaviors. Some we develop an awareness of through experience in our work, while others remain subconscious until they're presented to us. This is meant to be a starting place for us to begin talking about how we can use these behaviors as tools for sonic storytelling.

This session is presented in association with the AES Technical Committee on Audio for Cinema.


Sunday, October 12, 9:00 am — 10:00 am (Room 408 B)

Game Audio: G13 - MIDI: Still Strong After 30 Years – New Advances with Web Browsers, Bluetooth, and More

Presenters:
Athan Billias, Executive Board Member, MIDI Manufacturers Association; Director of Strategic Product Planning, Yamaha
Pete Brown, DX Engineering Engagement and Evangelism, Creative Media Apps, Microsoft
Pat Scandalis, CTO & Acting CEO, moForte.com
Torrey Walker, Core Audio Software Engineering Team, Apple - Cupertino, CA USA

Abstract:
Mobile platforms from Google, Apple, and Microsoft are starting to catch up to desktop/console platforms when it comes to audio capabilities. This session will provide an overview of new Audio/MIDI capabilities in the Chrome web browser, Chrome OS, Android OS, and Windows RT OS, plus a progress report on the "MIDI over Bluetooth" standard involving the major OS developers and MIDI hardware/software makers.


Sunday, October 12, 10:15 am — 11:45 am (Room 408 B)

Game Audio: G14 - New DAW Rising: Scoring and Mixing Your Game, In The Game

Presenter:
Guy Whitmore, PopCap Games

Abstract:
A common practice for game composers and sound designers today is to compose and arrange fully mixed music and sound cues in Pro Tools or Logic, then have an implementation specialist drop those files into the game. Sound integration, in this case, is seen as a basic technical task. But in order to score and mix a game with greater nuance, the composer would want to see the game in action while composing; the sound designer would want to watch the interactivity of the visuals, working in a DAW that includes robust adaptive features. This new DAW exists: it is your game audio engine and its authoring tools. In this scenario, sound integration is a highly creative endeavor, where music arranging, mixing, mastering, and even composing take place.


Sunday, October 12, 1:30 pm — 5:00 pm (Room 308 AB)

Paper Session: P19 - Signal Processing: Part 3

Chair:
Duane Wise, Wholegrain Digital Systems LLC - Boulder, CO, USA

P19-1 Eliminating Transducer Distortion in Acoustic Measurements
Finn Agerkvist, Technical University of Denmark - Kgs. Lyngby, Denmark; Antoni Torras-Rosell, Danish National Metrology Institute - Lyngby, Denmark; Richard McWalter, Technical University of Denmark - Lyngby, Denmark
This paper investigates the influence of nonlinear components that contaminate the linear response of acoustic transducers and presents a method for eliminating the influence of nonlinearities in acoustic measurements. The method is evaluated on simulated as well as experimental data and is shown to perform well even in noisy conditions. The limitations of the Total Harmonic Distortion (THD) measure are discussed, and a new distortion measure, the Total Distortion Ratio (TDR), which more accurately describes the amount of nonlinear power in the measured signal, is proposed.
Convention Paper 9204
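
For context, conventional THD compares the amplitudes of measured harmonics to the fundamental, whereas the abstract describes TDR as a measure of the nonlinear power in the whole measured signal. One plausible reading of that contrast (the paper's exact definition may differ):

```latex
\mathrm{THD} = \frac{\sqrt{\sum_{n \ge 2} A_n^2}}{A_1},
\qquad
\mathrm{TDR} = \frac{P_{\mathrm{nonlinear}}}{P_{\mathrm{total}}}
```

where A_n is the amplitude of the n-th harmonic and P_nonlinear is all measured power not attributable to the linear response; a TDR-style measure thus also counts distortion components that do not fall on the measured harmonics.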

P19-2 Uniformly-Partitioned Convolution with Independent Partitions in Signal and Filter
Frank Wefers, RWTH Aachen University - Aachen, Germany; Michael Vorländer, RWTH Aachen University - Aachen, Germany
Low-latency real-time FIR filtering is often realized using partitioned convolution algorithms, which split the filter impulse responses into a sequence of sub-filters and process these sub-filters efficiently using frequency-domain methods (e.g., FFT-based convolution). Methods that split both the signal and the filter into uniformly sized sub-filters define a fundamental class of algorithms known as uniformly-partitioned convolution techniques. In these methods both operands, signal and filter, are usually partitioned with the same granularity. This contribution introduces uniformly-partitioned algorithms with independent partitions (block lengths) in the two operands and examines the viable transform sizes that result. The relations of the algorithmic parameters are derived and the performance of the approach is evaluated.
Convention Paper 9205
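
For reference, the uniformly partitioned baseline that the paper generalizes can be sketched as standard overlap-save convolution with a frequency-domain delay line; in this conventional form, signal and filter share one block length B (removing exactly that restriction is the paper's contribution).

```python
import numpy as np

def upols_convolve(x, h, B=128):
    """Uniformly partitioned overlap-save convolution: split the filter
    into length-B sub-filters whose spectra multiply a frequency-domain
    delay line of input-block spectra."""
    L = len(x) + len(h) - 1                       # full convolution length
    P = -(-len(h) // B)                           # number of filter partitions
    H = np.fft.rfft(np.pad(h, (0, P * B - len(h))).reshape(P, B), n=2 * B, axis=1)
    nblocks = -(-L // B)
    x = np.pad(x, (B, nblocks * B - len(x)))      # B samples of pre-history
    fdl = np.zeros((P, B + 1), dtype=complex)     # frequency-domain delay line
    y = np.empty(nblocks * B)
    for k in range(nblocks):
        fdl = np.roll(fdl, 1, axis=0)
        fdl[0] = np.fft.rfft(x[k * B : (k + 2) * B])        # sliding 2B window
        y[k * B : (k + 1) * B] = np.fft.irfft((fdl * H).sum(axis=0))[B:]
    return y[:L]

# Matches direct convolution:
x, h = np.random.randn(10_000), np.random.randn(1000)
assert np.allclose(upols_convolve(x, h), np.convolve(x, h))
```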

P19-3 Modeling the Nonlinear Behavior of Operational Amplifiers
Robert-Eric Gaskell, McGill University - Montreal, QC, Canada; GKL Audio Inc. - Montreal, QC, Canada
Due to the gain-bandwidth characteristics of operational amplifiers, their nonlinearities are frequency dependent, showing a rise in distortion at higher frequencies. Depending on the circuit and system implementations, this distortion can be significant to listener perception of sonic character and quality and is therefore relevant to models of op amp-based analog equipment. Power-series models of the harmonic signature of various op amp nonlinearities are developed with and without this frequency dependence. Listening tests are performed to determine the extent to which the distortion characteristic of the model must match that of the real component to create a perceptually similar result.
Convention Paper 9206
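
A toy version of such a model might look like the following sketch (the coefficients and the simple one-pole high-pass weighting are illustrative assumptions, not the fitted models from the paper): feeding the nonlinear terms with a high-pass-weighted copy of the signal makes the distortion rise with frequency, as the gain-bandwidth argument predicts.

```python
import numpy as np
from scipy.signal import lfilter

def opamp_power_series(x, coeffs=(1.0, 1e-4, 5e-3), fs=96000, f_hp=2000.0):
    """Frequency-dependent power-series distortion sketch: the nonlinear
    terms are driven by a high-pass-weighted copy of the signal so that
    harmonic distortion increases with frequency."""
    # One-pole high-pass weighting of the signal feeding the nonlinearity.
    k = np.exp(-2.0 * np.pi * f_hp / fs)
    x_hf = lfilter([1.0, -1.0], [1.0, -k], x) * (1.0 + k) / 2.0
    a1, a2, a3 = coeffs
    return a1 * x + a2 * x_hf**2 + a3 * x_hf**3
```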

P19-4 More Cowbell: A Physically-Informed, Circuit-Bendable, Digital Model of the TR-808 Cowbell
Kurt James Werner, Center for Computer Research in Music and Acoustics (CCRMA) - Stanford, CA, USA; Stanford University; Jonathan S. Abel, Stanford University - Stanford, CA, USA; Julius O. Smith, III, Stanford University - Stanford, CA, USA
We present an analysis of the cowbell voice circuit from the Roland TR-808 Rhythm Composer. A digital model based on this analysis accurately emulates the original. Through the use of physical and behavioral models of each sub-circuit, the model supports accurate emulation of circuit-bent extensions to the voice's original behavior (including architecture-level alterations and component substitution). Some of this behavior is very complicated and is inconvenient or impossible to capture accurately through black-box modeling or structured sampling. The band-pass filter sub-circuit is treated as a case study of how to apply Mason's gain formula to find the continuous-time transfer function of an analog circuit.
Convention Paper 9207

P19-5 A Modal Architecture for Artificial Reverberation with Application to Room Acoustics Modeling
Jonathan S. Abel, Stanford University - Stanford, CA, USA; Sean Coffin, Stanford University - Stanford, CA, USA; Kyle Spratt, University of Texas, Austin - Austin, TX, USA
The modal analysis of a room response is considered, and a computational structure employing a modal decomposition is introduced for synthesizing artificial reverberation. The structure employs a collection of resonant filters, each driven by the source signal, with their outputs summed. With filter resonance frequencies and dampings tuned to the modal frequencies and decay times of the space, and filter gains set according to the source and listener positions, any number of acoustic spaces and resonant objects may be simulated. Issues of sufficient modal density, computational efficiency, and memory use are discussed. Finally, models of measured and analytically derived reverberant systems are presented, including a medium-sized acoustic room and an electro-mechanical spring reverberator.
Convention Paper 9208
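
The structure described is straightforward to sketch: a bank of two-pole resonators driven in parallel by the source, each pole radius set from its mode's decay time, with all outputs summed. In the sketch below the mode data are random placeholders; in the paper they come from analysis of a measured response or an analytic model.

```python
import numpy as np
from scipy.signal import lfilter

def modal_reverb(x, freqs, t60s, gains, fs=48000):
    """Bank of two-pole resonators, each tuned to a modal frequency with
    its pole radius set from that mode's decay time, driven by the source
    and summed."""
    y = np.zeros(len(x))
    for f, t60, g in zip(freqs, t60s, gains):
        r = 10.0 ** (-3.0 / (t60 * fs))           # -60 dB after t60 seconds
        w = 2.0 * np.pi * f / fs
        a = [1.0, -2.0 * r * np.cos(w), r * r]    # resonator denominator
        y += lfilter([g * (1.0 - r)], a, x)       # crude level normalization
    return y

# A toy "room" of 50 modes between 50 Hz and 4 kHz, excited by an impulse:
rng = np.random.default_rng(1)
modes = rng.uniform(50.0, 4000.0, 50)
out = modal_reverb(np.pad([1.0], (0, 47999)), modes,
                   t60s=rng.uniform(0.5, 1.5, 50), gains=rng.uniform(-1, 1, 50))
```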

P19-6 The Procedural Sound Design of Sim Cell
Leonard J. Paul, School of Video Game Audio - Vancouver, Canada
Synthesis was used to generate all of the audio for the sound design of the educational game Sim Cell, using the open-source language Pure Data [1]. A primary advantage of using Pure Data is that it can be easily embedded into games for iOS, Android, and other platforms. This paper illustrates different examples of how synthesis can be used effectively in video games, in contrast to more conventional contemporary audio production methods such as sampling. Synthesis easily allows the accurate rendering of high-resolution audio, in addition to very high rates of data compression when compared to sampling.
Convention Paper 9209

P19-7 OBRAMUS: A System for Object-Based Retouch of Amateur Music
Jordi Janer, Universitat Pompeu Fabra - Barcelona, Catalunya, Spain; Stanislaw Gorlow, Sony Computer Science Laboratory - Paris, France; Keita Arimoto, Yamaha Corporation - Iwata, Shizuoka, Japan
In recent years the area of semantic audio has attracted special attention, owing to the growing appeal of signal representations that allow audio to be manipulated on a symbolic level. The semantics usually refer to audio objects, such as instruments, or musical entities, such as chords or notes. In this paper we present a system for making minor corrections to amateur piano recordings based on a nonnegative matrix factorization. Acting as a middleman between the signal and the user, the system enables a simple form of musical recomposition by altering the pitch, timbre, onset, and offset of distinct notes. The workflow is iterative; that is, the result improves stepwise through user intervention.
Convention Paper 9210
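
The factorization underlying such a system is standard enough to sketch. The code below is generic NMF with Euclidean multiplicative updates, not OBRAMUS's score-informed variant: a magnitude spectrogram V is approximated as W H, and the rows of H become the note-level handles a user can edit before resynthesis.

```python
import numpy as np

def nmf(V, rank=8, iters=200, eps=1e-9):
    """Approximate a nonnegative matrix V (freq x time) as W @ H using
    Lee-Seung multiplicative updates. Columns of W act as note/instrument
    templates and rows of H as their activations over time."""
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.uniform(size=(F, rank))
    H = rng.uniform(size=(rank, T))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Editing idea: scale a row of H to turn one note's activations up or
# down, then resynthesize from W @ H with the mixture phase.
```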


Sunday, October 12, 2:00 pm — 3:00 pm (Room 405)

TC Meeting: Audio for Games

Abstract:
Technical Committee Meeting on Audio for Games




EXHIBITION HOURS: October 10th, 10am–6pm; October 11th, 10am–6pm; October 12th, 10am–4pm
REGISTRATION DESK: October 8th, 3pm–7pm; October 9th, 8am–6pm; October 10th, 8am–6pm; October 11th, 8am–6pm; October 12th, 8am–4pm
TECHNICAL PROGRAM: October 9th, 9am–7pm; October 10th, 9am–7pm; October 11th, 9am–7pm; October 12th, 9am–6pm