AES San Francisco 2012
Broadcast & Streaming Track Event Details

Friday, October 26, 9:00 am — 11:00 am (Room 121)

Paper Session: P1 - Amplifiers and Equipment

Chair:
Jayant Datta, THX - San Francisco, CA, USA; Syracuse University - Syracuse, NY, USA

P1-1 A Low-Voltage Low-Power Output Stage for Class-G Headphone Amplifiers
Alexandre Huffenus, EASii IC - Grenoble, France
This paper proposes a new headphone amplifier circuit architecture whose output stage can be powered from very low supply rails, from ±1.8 V down to ±0.2 V. When used inside a Class-G amplifier, with the switched-mode power supply powering the output stage, the power consumption can be significantly reduced. For a typical listening level of 2 × 100 µW, the increase in power consumption over idle is only 0.7 mW, instead of 2.5 mW to 3 mW for existing amplifiers. In battery-powered devices such as smartphones or portable music players, this can increase battery life by more than 15% during audio playback. Theory of operation, electrical performance, and a comparison with the current state of the art are detailed.
Convention Paper 8684 (Purchase now)
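The rail-tracking idea behind Class-G operation can be sketched in a few lines; the rail values and headroom margin below are illustrative assumptions, not figures from the paper.

```python
# Hypothetical sketch of Class-G supply-rail selection: the output stage is
# powered from the lowest rail that still clears the signal envelope, so the
# voltage dropped across the output transistors (and hence dissipation) stays
# small. Rail values and headroom are illustrative, not from the paper.

RAILS = [0.2, 0.6, 1.2, 1.8]  # candidate symmetric supply rails, volts
HEADROOM = 0.1                # margin kept above the envelope, volts (assumed)

def select_rail(envelope_v):
    """Return the lowest rail that leaves HEADROOM above the signal envelope."""
    for rail in RAILS:
        if rail >= envelope_v + HEADROOM:
            return rail
    return RAILS[-1]  # envelope too large: clip to the highest rail
```

At quiet listening levels the stage runs from ±0.2 V, which is where the claimed idle-to-playback savings come from.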

P1-2 Switching/Linear Hybrid Audio Power Amplifiers for Domestic Applications, Part 2: The Class-B+D Amplifier
Harry Dymond, University of Bristol - Bristol, UK; Phil Mellor, University of Bristol - Bristol, UK
The analysis and design of a series switching/linear hybrid audio power amplifier rated at 100 W into 8 Ω are presented. A high-fidelity linear stage controls the output, while the floating mid-point of the power supply for this linear stage is driven by a switching stage. This keeps the voltage across the linear stage output transistors low, enhancing efficiency. Analysis shows that the frequency responses of the linear and switching stages must be tightly matched to avoid saturation of the linear stage output transistors. The switching stage employs separate DC and AC feedback loops in order to minimize the adverse effects of the floating-supply reservoir capacitors, through which the switching stage output current must flow.
Convention Paper 8685 (Purchase now)

P1-3 Investigating the Benefit of Silicon Carbide for a Class D Power Stage
Verena Grifone Fuchs, University of Siegen - Siegen, Germany; CAMCO GmbH - Wenden, Germany; Carsten Wegner, University of Siegen - Siegen, Germany; CAMCO GmbH - Wenden, Germany; Sebastian Neuser, University of Siegen - Siegen, Germany; Dietmar Ehrhardt, University of Siegen - Siegen, Germany
This paper analyzes how silicon carbide transistors reduce the switching errors and losses associated with the power stage. A silicon carbide power stage and a conventional power stage with super-junction devices are compared in terms of switching behavior. Experimental results for switching transitions, delay times, and harmonic distortion, as well as a theoretical evaluation, are presented. By mitigating the imperfections of the power stage, silicon carbide transistors show considerable potential for Class-D audio amplification.
Convention Paper 8686 (Purchase now)

P1-4 Efficiency Optimization of Class G Amplifiers: Impact of the Input Signals
Patrice Russo, Lyon Institute of Nanotechnology - Lyon, France; Gael Pillonnet, University of Lyon - Lyon, France; CPE dept; Nacer Abouchi, Lyon Institute of Nanotechnology - Lyon, France; Sophie Taupin, STMicroelectronics, Inc. - Grenoble, France; Frederic Goutti, STMicroelectronics, Inc. - Grenoble, France
Class-G amplifiers are an effective solution for increasing audio efficiency in headphone applications, but realistic operating conditions have to be taken into account to predict and optimize power efficiency. In fact, power supply tracking, which is a key factor for high efficiency, is poorly optimized with the classical design method because the stimulus used is very different from a real audio signal. Here, a methodology is proposed to find Class-G nominal operating conditions. By using relevant stimuli and nominal output power, the simulation and testing of the Class-G amplifier are closer to real conditions. Moreover, a novel simulator is used to quickly evaluate efficiency with long-duration stimuli, i.e., ten seconds instead of a few milliseconds. This allows longer transient simulation for accurate efficiency and audio-quality evaluation by averaging the Class-G behavior. Based on this simulator, this paper indicates the limitations of the well-established test setup: measured efficiencies deviate by up to ±50% from those of the classical methods. Finally, the study underlines the need to use real audio signals to optimize the supply-voltage tracking of Class-G amplifiers in order to achieve maximal efficiency in nominal operation.
Convention Paper 8687 (Purchase now)
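The paper's central point, that the choice of stimulus changes the measured Class-G efficiency, can be illustrated with a toy model; the rails, load, and stimuli below are assumptions for illustration only, not the authors' setup.

```python
import numpy as np

# Toy Class-G efficiency model: the supply tracks the instantaneous output,
# and a linear output stage draws rail * |i| from it. Comparing a full-scale
# sine (the classical stimulus) with a low-level, envelope-modulated signal
# shows why the stimulus changes the measured efficiency.
# Rail values, load, and stimuli are illustrative assumptions.

RAILS = np.array([0.3, 0.9, 1.8])   # candidate supply rails, volts
R_LOAD = 32.0                        # headphone load, ohms

def efficiency(v):
    idx = np.searchsorted(RAILS, np.abs(v), side="left").clip(max=len(RAILS) - 1)
    rail = RAILS[idx]                            # tracked rail per sample
    p_out = np.mean(v ** 2) / R_LOAD             # power delivered to the load
    p_sup = np.mean(rail * np.abs(v) / R_LOAD)   # power drawn from the rails
    return p_out / p_sup

t = np.linspace(0, 1, 48000, endpoint=False)
sine = 1.7 * np.sin(2 * np.pi * 997 * t)         # near-full-scale classic test
music_like = 0.2 * np.sin(2 * np.pi * 997 * t) * (1 + 0.8 * np.sin(2 * np.pi * 3 * t))
```

Under these assumptions the full-scale sine reports a markedly higher efficiency than the low-level modulated signal, mirroring in spirit the large deviations the paper reports.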


Friday, October 26, 9:00 am — 10:30 am (Room 131)

Broadcast and Streaming Media: B1 - Working with HTML5

Chair:
Valerie Tyler, College of San Mateo - San Mateo, CA, USA
Panelists:
Jan Linden, Google - Mountain View, CA, USA
Greg Ogonowski, Orban - San Leandro, CA, USA
Charles Van Winkle, Adobe Systems Incorporated - Minneapolis, MN, USA


Abstract:
HTML5 is a language for structuring and presenting content on the World Wide Web and a core technology of the Internet. It is the fifth revision of the HTML standard and builds many features directly into the language, among them native media playback for downloaded or streamed content. This session will look into the technical considerations for media playback as well as the user interfaces.


Friday, October 26, 9:00 am — 10:30 am (Room 122)

Paper Session: P2 - Networked Audio

Chair:
Ellen Juhlin, Meyer Sound - Berkeley, CA, USA; AVnu Alliance

P2-1 Audio Latency Masking in Music Telepresence Using Artificial Reverberation
Ren Gang, University of Rochester - Rochester, NY, USA; Samarth Shivaswamy, University of Rochester - Rochester, NY, USA; Stephen Roessner, University of Rochester - Rochester, NY, USA; Akshay Rao, University of Rochester - Rochester, NY, USA; Dave Headlam, University of Rochester - Rochester, NY, USA; Mark F. Bocko, University of Rochester - Rochester, NY, USA
Network latency poses significant challenges in music telepresence systems designed to enable multiple musicians at different locations to perform together in real time. Since each musician hears a delayed version of the performance from the other musicians, it is difficult to maintain synchronization, and there is a natural tendency for the musicians to slow their tempo while awaiting responses from their fellow performers. We asked whether the introduction of artificial reverberation can enable musicians to better tolerate latency by conducting experiments with performers in which the degree of latency was controllable and artificial reverberation could be added or not. Both objective and subjective evaluations of the ensemble performances were conducted to assess perceptual responses under the different experimental settings.
Convention Paper 8688 (Purchase now)
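A minimal artificial-reverberation element of the kind such an experiment could add is a single feedback comb filter; the delay length and feedback gain below are illustrative, not the paper's.

```python
import numpy as np

# One feedback comb filter, the basic building block of Schroeder-style
# artificial reverberation: y[n] = x[n] + g * y[n - D]. A real music
# telepresence system would combine several such elements; D and g here
# are illustrative (D = 1323 samples is 30 ms at 44.1 kHz).

def comb_reverb(x, delay_samples=1323, feedback=0.7):
    y = np.asarray(x, dtype=float).copy()
    for n in range(delay_samples, len(y)):
        y[n] += feedback * y[n - delay_samples]
    return y
```

Feeding an impulse through it yields the familiar train of exponentially decaying echoes, the simplest kind of "room" to wrap around a delayed remote performer.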

P2-2 Service Discovery Using Open Sound Control
Andrew Eales, Wellington Institute of Technology - Wellington, New Zealand; Rhodes University - Grahamstown, South Africa; Richard Foss, Rhodes University - Grahamstown, Eastern Cape, South Africa
The Open Sound Control (OSC) protocol does not have service discovery capabilities. The approach to adding service discovery to OSC proposed in this paper uses the OSC address space to represent services within the context of a logical device model. This model allows services to be represented in a context-sensitive manner by relating the parameters representing services to the logical organization of a device. Service discovery is implemented using standard OSC messages and requires that the OSC address space be designed to support these messages. This paper illustrates how these enhancements to OSC allow a device to advertise its services. Controller applications can then explore the device's address space to discover services and retrieve those required by the application.
Convention Paper 8689 (Purchase now)
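The idea of exposing services through the OSC address space can be sketched with a toy address tree; the addresses and query semantics below are hypothetical illustrations, not the paper's actual namespace.

```python
# Hypothetical sketch: a device's services are represented as parameters in
# its OSC address space, so a controller can discover them by exploring the
# tree with standard messages. All addresses are invented for illustration.

ADDRESS_SPACE = {
    "/device/name": "MixerX",
    "/device/input/1/gain": 0.5,
    "/device/input/1/mute": False,
    "/device/services/streaming/rtp/port": 5004,
    "/device/services/metering/rate_hz": 10,
}

def explore(pattern):
    """Emulate a controller's discovery query: return every (address, value)
    pair under the given branch of the address space."""
    return {a: v for a, v in ADDRESS_SPACE.items() if a.startswith(pattern)}

services = explore("/device/services")
```

Because services are just another branch of the address tree, the same message machinery that sets a gain can enumerate what the device offers.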

P2-3 Flexilink: A Unified Low Latency Network Architecture for Multichannel Live Audio
Yonghao Wang, Birmingham City University - Birmingham, UK; John Grant, Nine Tiles Networks Ltd. - Cambridge, UK; Jeremy Foss, Birmingham City University - Birmingham, UK
The networking of live audio for professional applications typically uses layer 2-based solutions such as AES50 and MADI utilizing fixed time slots similar to Time Division Multiplexing (TDM). However, these solutions are not effective for best effort traffic where data traffic utilizes available bandwidth and is consequently subject to variations in QoS. There are audio networking methods such as AES47, which is based on asynchronous transfer mode (ATM), but ATM equipment is rarely available. Audio can also be sent over Internet Protocol (IP), but the size of the packet headers and the difficulty of keeping latency within acceptable limits make it unsuitable for many applications. In this paper we propose a new unified low latency network architecture that supports both time deterministic and best effort traffic toward full bandwidth utilization with high performance routing/switching. For live audio, this network architecture allows low latency as well as the flexibility to support multiplexing multiple channels with different sampling rates and word lengths.
Convention Paper 8690 (Purchase now)
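The header-overhead objection to plain IP transport is easy to quantify; the header sizes below are the standard IPv4/UDP/RTP ones, while the framing choices (samples per packet, word length) are illustrative assumptions.

```python
# Why raw IP is costly at very low latency: with only a few samples per
# packet, the fixed IPv4/UDP/RTP headers dominate the bandwidth.
# Header sizes are the standard ones; the framing is an illustrative choice.

IP_HDR, UDP_HDR, RTP_HDR = 20, 8, 12   # bytes
SAMPLES_PER_PACKET = 6                 # 125 us of audio at 48 kHz
BYTES_PER_SAMPLE = 3                   # 24-bit mono samples

payload = SAMPLES_PER_PACKET * BYTES_PER_SAMPLE
headers = IP_HDR + UDP_HDR + RTP_HDR
overhead_fraction = headers / (headers + payload)   # headers dominate
```

With this framing roughly 69% of every packet is header, which is the kind of inefficiency a purpose-built low-latency architecture aims to avoid.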


Friday, October 26, 10:45 am — 12:30 pm (Room 132)

Product Design: PD2 - AVB Networking for Product Designers

Chair:
Rob Silfvast, Avid - Mountain View, CA, USA
Panelists:
John Bergen, Marvell
Jeff Koftinoff, Meyer Sound Canada - Vernon, BC, Canada
Morten Lave, TC Applied Technologies - Toronto, ON, Canada
Lee Minich, Lab X Technologies - Rochester, NY, USA
Matthew Mora, Chair IEEE 1722.1 - Pleasanton, CA, USA
Dave Olsen, Harman International
Michael Johas Teener, Broadcom - Santa Cruz, CA, USA


Abstract:
This session will cover the essential technical aspects of Audio Video Bridging technology and how it can be deployed in products to support standards-based networked connectivity. AVB is an open IEEE standard and therefore promises low cost and wide interoperability among products that leverage the technology. Speakers from several different companies will share insights on their experiences deploying AVB in real products. The panelists will also compare and contrast the open-standards approach of AVB with proprietary audio-over-Ethernet technologies.


Friday, October 26, 10:45 am — 12:15 pm (Room 131)

Broadcast and Streaming Media: B2 - Facility Design

Chair:
John Storyk, Walters-Storyk Design Group - Highland, NY, USA
Panelists:
Kevin Carroll, Sonic Construction LLC
Cindy McSherry-Martinez, Studio Trilogy
Paul Stewart, Genelec


Abstract:
A panel of leading studio contractors and installation experts will provide a real-world survey of specific products and acoustic materials commonly (and rarely) incorporated in professional critical-listening environments. Optimal options for doors, glass, HVAC, variable acoustic panels, furniture, equipment racks, and other integral components of today's high-end (and budget-conscious) TV and radio broadcast facilities will be discussed in detail. This is not an advertorial event: contractor recommendations are based on personal field experience with these products, and their success is contingent on their ability to provide clients with cost-effective solutions to a myriad of technical and aesthetic issues.


Friday, October 26, 11:00 am — 12:30 pm (Room 123)

Game Audio: G1 - A Whole World in Your Hands: New Techniques in Generative Audio Bring Entire Game Worlds into the Realms of Mobile Platforms

Presenter:
Stephan Schütze


Abstract:
"We can't have good audio; there is not enough memory on our target platform." This is a comment heard far too often, especially considering it's incorrect. Current technology already allows complex and effective audio environments to be built with limited platform resources when developed correctly, but we are also just around the corner from an audio revolution.

The next generation of tools being developed for audio creation and implementation will allow large and complex audio environments to be created using minimal resources. While the new software apps being developed are obviously an important part of this coming revolution, it is the techniques, designs, and overall attitudes to audio production that will be the critical factors in successfully creating the next era of sound environments.

This presentation will break down and discuss this new methodology independent of the technology and demonstrate some simple concepts that can be used to develop a new approach to sound design. All the material presented in this talk will benefit development on current and next-gen consoles as much as development for mobile devices.


Friday, October 26, 2:00 pm — 3:30 pm (Room 120)


Live Sound Seminar: LS2 - Live Sound Engineering: The Juxtaposition of Art and Science

Chair:
Chuck Knowledge, Chucknology - San Francisco, CA, USA


Abstract:
In this presentation we examine the different disciplines required for modern live event production. The technical foundations of audio engineering are no longer enough to deliver the experiences demanded by today's concertgoers. This session will discuss practical engineering challenges with consideration for the subjective nature of art and the desire of performing artists to push the technological envelope. Focal points will include:

• Transplanting studio production to the live arena.
• Computer-based solutions and infrastructure requirements.
• The symbiosis with lighting and video.
• New technologies for interactivity and audience engagement.
• New career paths made possible by innovation in these fields.

Attendees can expect insight into the delivery of recent high-profile live events, the relevant enabling technologies, and how to develop their own skill set to remain at the cutting edge.


Friday, October 26, 2:00 pm — 4:00 pm (Room 133)

Workshop: W3 - What Every Sound Engineer Should Know about the Voice

Chair:
Eddy B. Brixen, EBB-consult - Smorum, Denmark
Panelists:
Henrik Kjelin, Complete Vocal Institute - Denmark
Cathrine Sadolin, Complete Vocal Institute - Denmark


Abstract:
The purpose of this workshop is to teach sound engineers how to listen to the voice before they even think of microphone selection and knob-turning. The presentation and demonstrations are based on the "Complete Vocal Technique" (CVT), whose foundation is the classification of all human voice sounds into one of four vocal modes, named Neutral, Curbing, Overdrive, and Edge. The classification is used by professional singers across all musical styles and has, over a period of 20 years, proved easy to grasp both in real-life situations and in auditory and visual tests (sound examples and laryngeal images/Laryngograph waveforms). These vocal modes are found in the speaking voice as well. Cathrine Sadolin, the developer of CVT, will involve the audience in this workshop while explaining and demonstrating how to work with the modes in practice to achieve any sound and to solve many different voice problems such as unintentional vocal breaks, too much or too little volume, hoarseness, and much more. The physical aspects of the voice will be explained, and laryngograph waveforms and analyses will be demonstrated by Henrik Kjelin. Eddy Brixen will demonstrate measurements for the detection of the vocal modes and explain essential parameters in the recording chain, especially the microphone, to ensure reliable and natural recordings.


Friday, October 26, 2:15 pm — 3:45 pm (Room 131)

Broadcast and Streaming Media: B3 - Broadcast Audio Network Techniques

Chair:
Dan Braverman, Radio Systems - Logan Township, NJ, USA
Panelists:
Tag Borland, Logitek Electronic Systems Inc. - Houston, TX, USA
Andreas Hildebrand, ALC NetworX - Munich, Germany
Kelly Parker, Wheatstone
Greg Shay, Telos Alliance/Axia - Cleveland, OH, USA


Abstract:
Broadcasting, especially in the studio arena, has suffered mightily from a lack of standards (or, as the old joke goes, "from liking standards so much we created too many!"). Without any analogous MIDI-like control or serial protocol, integrating today's studio remains a science project. But audio over IP presents, and more aptly demands, a standard protocol if our new industry hardware and peripherals are going to communicate.

This session will survey the currently implemented broadcast audio-over-IP standards, with an emphasis on interoperability, challenging the participating manufacturers to reveal their plans, issues, and hurdles in adopting and implementing a standard.


Friday, October 26, 2:45 pm — 6:00 pm (Tech Tours)

Technical Tour: TT2 - Tamalpais Research Institute (TRI)


Abstract:
Created by Grateful Dead founding member Bob Weir, TRI (www.tristudios.com) is a $5+ million, state-of-the-art performance studio for broadcasting live video and audio streams to the internet. The 11,500-square-foot complex has two studios, including a 2,000-square-foot main studio with the Meyer Sound Constellation system. Control room A seamlessly integrates a 48-channel 7.1-surround API analog console with racks of outboard gear spanning decades, from Grateful Dead tours to the latest technology. The entire facility is interconnected for audio and HD video recording. The visionary broadcast and recording capabilities offer a revolutionary new concept, allowing fans to enjoy intimate live performances from wherever they have internet access.

This event is limited to 42 tickets.

Technical Tours are made available on a first come, first served basis. Tickets can be purchased during normal registration hours at the convention center.

Price: Members $40/Nonmembers $50


Friday, October 26, 3:00 pm — 4:30 pm (Foyer)

Poster: P7 - Amplifiers, Transducers, and Equipment

P7-1 Evaluation of trr Distorting Effects Reduction in DCI-NPC Multilevel Power Amplifiers by Using SiC Diodes and MOSFET Technologies
Vicent Sala, UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain; Tomas Resano, Jr., UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain; MCIA Research Center; Jose Luis Romeral, UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain; Jose Manuel Moreno, UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain
In the last decade, power amplifier applications have used multilevel diode-clamped-inverter, or neutral-point-clamped (DCI-NPC), topologies to achieve very low distortion at high power. In these applications much research has been done to reduce the sources of distortion in DCI-NPC topologies. One of the most important, yet least studied, sources of distortion is the reverse recovery time (trr) of the clamp diodes and MOSFET parasitic diodes. Today, with the emergence of silicon carbide (SiC) technologies, these sources of distortion can be minimized. This paper presents a comparative study and evaluation of the distortion generated by different combinations of diodes and MOSFETs in Si and SiC technologies in a DCI-NPC multilevel power amplifier, with the aim of reducing the distortion generated by the non-idealities of the semiconductor devices.
Convention Paper 8720 (Purchase now)

P7-2 New Strategy to Minimize Dead-Time Distortion in DCI-NPC Power Amplifiers Using COE-Error Injection
Tomas Resano, Jr., UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain; MCIA Research Center; Vicent Sala, UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain; Jose Luis Romeral, UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain; Jose Manuel Moreno, UPC-Universitat Politecnica de Catalunya - Terrassa, Catalunya, Spain
The DCI-NPC topology has become one of the best options for optimizing energy efficiency in the world of high-power, high-quality amplifiers. Such amplifiers can use an analog PWM modulator that is prone to generating distortion or error, mainly for two reasons: carrier amplitude error (CAE) and carrier offset error (COE). Another main source of error and distortion in the system is the dead time (td). The dead time is necessary to guarantee proper operation of the power amplifier stage, so the errors and distortions it originates are unavoidable. This work proposes generating a negative COE to minimize the distortion effects of td. Simulation and experimental results validate this strategy.
Convention Paper 8721 (Purchase now)

P7-3 Further Testing and Newer Methods in Evaluating Amplifiers for Induced Phase and Frequency Modulation via Tones, Amplitude Modulated Signals, and Pulsed Waveforms
Ronald Quan, Ron Quan Designs - Cupertino, CA, USA
This paper presents further investigations from AES Convention Paper 8194, which studied induced FM distortion in audio amplifiers. Amplitude-modulated (AM) signals are used to investigate frequency shifts of the AM carrier signal with different modulation frequencies. A square-wave and sine-wave TIM test signal is used to evaluate FM distortion at the fundamental frequency and harmonics of the square wave. Newer amplifiers are tested for FM distortion with a large-level low-frequency signal inducing FM distortion on a small-level high-frequency signal. In particular, amplifiers with low and high open-loop bandwidths are tested for differential phase and FM distortion as the frequency of the large-level signal is increased from 1 kHz to 2 kHz.
Convention Paper 8722 (Purchase now)
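The two-tone FM-distortion stimulus described above is straightforward to construct; the levels and frequencies below are illustrative choices (the paper sweeps the large tone from 1 kHz to 2 kHz).

```python
import numpy as np

# A large-level low-frequency tone plus a small-level high-frequency probe:
# FM distortion in the amplifier under test shows up as phase/frequency
# modulation of the probe. Amplitudes and frequencies are illustrative.

FS = 96000                      # sample rate, Hz
t = np.arange(FS) / FS          # one second of time axis
big_lf = 1.0 * np.sin(2 * np.pi * 1000 * t)      # large 1 kHz tone
probe_hf = 0.01 * np.sin(2 * np.pi * 15000 * t)  # small 15 kHz probe
stimulus = big_lf + probe_hf
```

Measuring the probe's phase at the amplifier output as the large tone swings then reveals any induced differential phase or FM.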

P7-4 Coupling Lumped and Boundary Element Methods Using Superposition
Joerg Panzer, R&D Team - Salgen, Germany
Both the lumped element method and the boundary element method are powerful tools for simulating electroacoustic systems, and each has its preferred domain of application within a system to be modeled. For example, the lumped element method is practical for electronics, simple mechanics, and internal acoustics, while the boundary element method shows its strength in acoustic-field calculations such as diffraction, reflection, and radiation-impedance problems. Coupling the two methods allows the total system to be investigated. This paper describes a method for fully coupling the rigid-body mode of the lumped element domain to the boundary element domain with the help of self- and mutual radiation impedance components, using the superposition principle. As a result, the coupling approach has the convenient property of a high degree of independence between the two domains: for example, one can modify parameters and even, to some extent, change the structure of the lumped-element network without having to re-solve the boundary element system. The paper gives the mathematical derivation and a demonstration example comparing calculated results with measurements. In this example the electronics and mechanics of the three loudspeakers involved are modeled with the lumped element method; the waveguide, enclosure, and radiation are modeled with the boundary element method.
Convention Paper 8723 (Purchase now)

P7-5 Study of the Interaction between Radiating Systems in a Coaxial Loudspeaker
Alejandro Espi, Acústica Beyma - Valencia, Spain; William A. Cárdenas, Sr., University of Alicante - Alicante, Spain; Jose Martinez, Acustica Beyma S.L. - Moncada (Valencia), Spain; Jaime Ramis, University of Alicante - Alicante, Spain; Jesus Carbajo, University of Alicante - Alicante, Spain
This work explains the procedure followed to study the interaction between the mid- and high-frequency radiating systems of a coaxial loudspeaker. For this purpose a numerical finite element model was implemented. To fit the model, an experimental prototype was built and a set of measurements was carried out, among them electrical impedance and pressure frequency response in an anechoic plane-wave tube. To take the displacement-dependent nonlinearities into account, a parametric analysis at different input voltages was performed, and the internal acoustic impedance was computed numerically in the frequency domain for specific phase-plug geometries. By inverse-transforming to a time-domain differential equation scheme, a lumped-element equivalent circuit was obtained to evaluate the mutual acoustic-load effect present in this type of acoustically coupled system. Additionally, the crossover frequency range was analyzed using the near-field acoustic holography technique.
Convention Paper 8724 (Purchase now)

P7-6 Flexible Acoustic Transducer from Dielectric-Compound Elastomer Film
Takehiro Sugimoto, NHK Science & Technology Research Laboratories - Setagaya-ku, Tokyo, Japan; Tokyo Institute of Technology - Midori-ku, Yokohama, Japan; Kazuho Ono, NHK Science & Technology Research Laboratories - Setagaya-ku, Tokyo, Japan; Akio Ando, NHK Science & Technology Research Laboratories - Setagaya-ku, Tokyo, Japan; Hiroyuki Okubo, NHK Science & Technology Research Laboratories - Setagaya-ku, Tokyo, Japan; Kentaro Nakamura, Tokyo Institute of Technology - Midori-ku, Yokohama, Japan
To increase the sound pressure level of a flexible acoustic transducer made from a dielectric elastomer film, this paper proposes compounding various kinds of dielectrics into a polyurethane elastomer, the base material of the transducer. The dielectric elastomer film studied utilizes a change in side length, derived from electrostriction, for sound generation. The proposed method was conceived from the fact that the amount of dimensional change depends on the relative dielectric constant of the elastomer. Acoustical measurements demonstrated that the proposed method is effective: the sound pressure level increased by up to 6 dB.
Convention Paper 8725 (Purchase now)

P7-7 A Digitally Driven Speaker System Using Direct Digital Spread Spectrum Technology to Reduce EMI Noise
Masayuki Yashiro, Hosei University - Koganei, Tokyo, Japan; Mitsuhiro Iwaide, Hosei University - Koganei, Tokyo, Japan; Akira Yasuda, Hosei University - Koganei, Tokyo, Japan; Michitaka Yoshino, Hosei University - Koganei, Tokyo, Japan; Kazuyki Yokota, Hosei University - Koganei, Tokyo, Japan; Yugo Moriyasu, Hosei University - Koganei, Tokyo, Japan; Kenji Sakuda, Hosei University - Koganei, Tokyo, Japan; Fumiaki Nakashima, Hosei University - Koganei, Tokyo, Japan
This paper proposes a novel digitally driven speaker that reduces electromagnetic interference by incorporating a spread-spectrum clock generator. The driving signal of a loudspeaker, which has large spectral components at specific frequencies, interferes with nearby equipment because it emits electromagnetic waves. The proposed method switches between two clock frequencies according to a clock-selection signal generated by a pseudo-noise circuit. The noise-performance deterioration caused by the clock-frequency switching can be reduced by the proposed modified delta-sigma modulator (DSM), which changes the coefficients of the DSM according to the width of the clock period. The proposed method reduces out-of-band noise by 10 dB compared to the conventional method.
Convention Paper 8726 (Purchase now)
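The pseudo-noise clock-selection idea can be sketched with a small linear-feedback shift register; the 4-bit polynomial and the two clock periods below are illustrative choices, not the authors' design.

```python
# Sketch of pseudo-noise clock hopping: a maximal-length 4-bit LFSR
# (x^4 + x^3 + 1, period 15) produces the clock-selection bit, and each
# cycle runs at one of two clock periods accordingly, spreading the EMI
# spectrum instead of concentrating it at one frequency.
# Polynomial and periods are illustrative choices.

def lfsr_bits(seed=0b1001, n=15):
    state, out = seed, []
    for _ in range(n):
        out.append(state & 1)                     # output bit = LSB
        fb = ((state >> 3) ^ (state >> 2)) & 1    # taps: bits 3 and 2
        state = ((state << 1) | fb) & 0b1111
    return out

def spread_clock(periods=(20, 21), seed=0b1001, n=15):
    """Pick one of two clock periods per cycle from the PN sequence."""
    return [periods[b] for b in lfsr_bits(seed, n)]
```

A hardware implementation would use a much longer LFSR; the principle of pseudo-randomly dithering the clock period is the same.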

P7-8 Automatic Speaker Delay Adjustment System Using Wireless Audio Capability of ZigBee Networks
Jaeho Choi, Seoul National University - Seoul, Korea; Myoung woo Nam, Seoul National University - Seoul, Korea; Kyogu Lee, Seoul National University - Seoul, Korea
The IEEE 802.15.4 (ZigBee) standard defines a low-data-rate, low-power, low-cost, flexible wireless networking protocol for automation and remote-control applications. This paper applies these characteristics to a wireless speaker delay compensation system in a large venue (a hall of over 500 seats). Traditionally, delay adjustment has been done manually by sound engineers, but the proposed system analyzes the delay from the front speakers to the rear speakers automatically and applies the appropriate delay time to the rear speakers. This paper investigates the feasibility of adjusting the wireless speaker delay over the above-mentioned ZigBee network. We present an implementation of a ZigBee audio-transmission and location-based service (LBS) application that allows speaker delay times to be calculated.
Convention Paper 8727 (Purchase now)
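The delay to be applied follows directly from the path-length difference between front and rear speakers; the geometry below is an illustrative example, not from the paper.

```python
# Delay compensation in a distributed PA: the rear speaker feed is delayed so
# its sound arrives together with the front speaker's sound at the listener.
# Distances here are illustrative; the paper's system estimates them via the
# ZigBee network itself.

SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20 degrees C

def rear_delay_ms(front_to_listener_m, rear_to_listener_m):
    """Delay (ms) for the rear speaker feed; zero if it is already farther."""
    diff = front_to_listener_m - rear_to_listener_m
    return max(0.0, diff / SPEED_OF_SOUND * 1000.0)
```

A listener 30 m from the front stacks but 5 m from a rear fill, for instance, needs the fill delayed by roughly 73 ms.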

P7-9 A Second-Order Soundfield Microphone with Improved Polar Pattern Shape
Eric M. Benjamin, Surround Research - Pacifica, CA, USA
The soundfield microphone is a compact tetrahedral array of four figure-of-eight microphones yielding four coincident virtual microphones: one omnidirectional and three orthogonal pressure-gradient microphones. As described by Gerzon, above a limiting frequency approximated by fc = pc/r, the virtual microphones become progressively contaminated by higher-order spherical harmonics. To improve the high-frequency performance, either the array size must be substantially reduced or a new array geometry must be found. In the present work an array having nominally octahedral geometry is described. It samples the spherical harmonics in a natural way and yields horizontal virtual microphones up to second order with excellent horizontal polar patterns up to 20 kHz.
Convention Paper 8728 (Purchase now)
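For reference, the classic first-order case: the four tetrahedral capsule signals (A-format) combine by simple sums and differences into the coincident virtual microphones (B-format). The paper's octahedral array extends this idea to second order; the sketch below shows only the standard first-order conversion, with the usual capsule naming (LFU = left-front-up, RFD = right-front-down, LBD = left-back-down, RBU = right-back-up).

```python
# First-order A-format to B-format conversion for a tetrahedral array:
# W is the omnidirectional component; X, Y, Z are the three orthogonal
# figure-of-eight components. The 0.5 scaling is one common convention.

def a_to_b(lfu, rfd, lbd, rbu):
    w = 0.5 * (lfu + rfd + lbd + rbu)   # omni (pressure)
    x = 0.5 * (lfu + rfd - lbd - rbu)   # front-back gradient
    y = 0.5 * (lfu - rfd + lbd - rbu)   # left-right gradient
    z = 0.5 * (lfu - rfd - lbd + rbu)   # up-down gradient
    return w, x, y, z
```

The high-frequency contamination the abstract mentions arises because these sums only approximate coincident pickup once the wavelength approaches the capsule spacing.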

P7-10 Period Deviation Tolerance Templates: A Novel Approach to Evaluation and Specification of Self-Synchronizing Audio Converters
Francis Legray, Dolphin Integration - Meylan, France; Thierry Heeb, Digimath - Sainte-Croix, Switzerland; SUPSI, ICIMSI - Manno, Switzerland; Sebastien Genevey, Dolphin Integration - Meylan, France; Hugo Kuo, Dolphin Integration - Meylan, France
Self-synchronizing converters represent an elegant and cost-effective solution for integrating audio functionality into an SoC (system-on-chip), as they integrate both conversion and clock-synchronization functionality. The audio performance of such converters is, however, very dependent on the jitter-rejection capabilities of the synchronization system. A methodology based on two period-deviation tolerance templates is described for evaluating such synchronization solutions prior to any silicon measurements; it also provides a unique way of specifying the expected performance of a synchronization system in the presence of jitter on the audio interface. The proposed methodology is applied to a self-synchronizing audio converter, and its advantages are illustrated by both simulation and measurement results.
Convention Paper 8729 (Purchase now)

P7-11 Loudspeaker Localization Based on Audio Watermarking
Florian Kolbeck, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Giovanni Del Galdo, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Iwona Sobieraj, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Tobias Bliem, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Localizing the positions of loudspeakers can be useful in a variety of applications, above all the calibration of a home theater setup. For this aim, several existing approaches employ a microphone array and specifically designed signals to be played back by the loudspeakers, such as sine sweeps or maximum length sequences. While these systems achieve good localization accuracy, they are unsuitable for those applications in which the end-user should not be made aware that the localization is taking place. This contribution proposes a system that fulfills these requirements by employing an inaudible watermark to carry out the localization. The watermark is specifically designed to work in reverberant environments. Results from realistic simulations confirm the practicability of the proposed system.
Convention Paper 8730 (Purchase now)
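The core localization step, estimating each loudspeaker's time of arrival from a known embedded sequence, can be sketched as a cross-correlation; here white noise stands in for the decoded watermark, and the noiseless single-path channel is an idealization of the reverberant environments the paper targets.

```python
import numpy as np

# Time-of-arrival estimation by cross-correlating the microphone capture
# with the known reference sequence (standing in for the watermark).
# A single ideal propagation delay replaces the real reverberant channel.

rng = np.random.default_rng(0)
ref = rng.standard_normal(4096)                    # known reference sequence
TRUE_DELAY = 250                                   # samples of propagation
mic = np.concatenate([np.zeros(TRUE_DELAY), ref])  # idealized capture

corr = np.correlate(mic, ref, mode="valid")        # match at each lag
estimated_delay = int(np.argmax(corr))             # peak = arrival time
```

With several microphones, the pairwise arrival-time differences feed a standard multilateration step to recover the loudspeaker position.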


Friday, October 26, 4:00 pm — 6:00 pm (Room 123)

Network Audio: N1 - Error-Tolerant Audio Coding

Chair:
David Trainor, CSR - Belfast, Northern Ireland, UK
Panelists:
Bernhard Grill, Fraunhofer IIS - Erlangen, Germany
Deepen Sinha, ATC Labs - Newark, NJ, USA
Gary Spittle, Dolby Laboratories - San Francisco, CA, USA


Abstract:
Two important and observable trends are the increasing delivery of real-time audio services over the Internet or cellular networks, and the implementation of audio networking throughout a residence, office, or studio using wireless technologies. This approach to distributing audio content is convenient, ubiquitous, and can be relatively inexpensive. However, the nature of these networks is such that their capacity and reliability for real-time audio streaming can vary considerably with time and environment. Error-tolerant audio coding techniques therefore have an important role to play in maintaining audio quality in such applications. This workshop will discuss the capabilities of error-tolerant audio coding algorithms and recent advances in the state of the art.
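As a point of reference for the discussion, the simplest receiver-side error-tolerance strategy is packet-loss concealment by frame repetition with fade-out. This is only an illustrative baseline, not any panelist's codec; the frame values are invented:

```python
import numpy as np

def conceal(frames, received):
    """Minimal packet-loss concealment: repeat the last good frame,
    attenuating it by 6 dB per consecutive loss so sustained dropouts
    fade out rather than loop audibly. Modern error-tolerant codecs
    use far more sophisticated prediction-based schemes."""
    out, last, losses = [], None, 0
    for frame, ok in zip(frames, received):
        if ok:
            last, losses = frame, 0
            out.append(frame)
        else:
            losses += 1
            gain = 0.5 ** losses  # -6 dB per lost frame
            out.append(last * gain if last is not None else np.zeros_like(frame))
    return out

frames = [np.ones(4) * v for v in (1.0, 2.0, 3.0)]
out = conceal(frames, [True, False, False])
print([float(f[0]) for f in out])  # [1.0, 0.5, 0.25]
```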

 
 

Friday, October 26, 4:00 pm — 6:00 pm (Room 132)


Tutorial: T3 - Social Media for Engineers and Producers

Presenter:
Bobby Owsinski, Bobby Owsinski Media Group - Los Angeles, CA, USA; 2b Media - Los Angeles, CA, USA


Abstract:
Facebook, Google+, Twitter, and YouTube are important elements for developing a fan base or client list, but without the proper strategy they can prove ineffective and take so much time that there's none left for creating. This presentation shows engineers, producers, audio professionals, and musicians the best techniques and strategy to utilize social media as a promotional tool without it taking 20 hours a day.

Topics covered include:
• Your mailing list—old tech, new importance
• Social media management strategies
• Optimizing your YouTube presence
• The secrets of viral videos
• The online video world is larger than you think
• Search Engine Optimization basics
• Using Facebook and Twitter for marketing
• The secret behind successful tweets
• Social media measurement techniques
• What’s next?

 
 

Friday, October 26, 4:00 pm — 5:30 pm (Room 131)

Broadcast and Streaming Media: B4 - Audio Encoding for Streaming

Chair:
Fred Willard, Univision - Washington, DC, USA
Panelists:
Casey Camba, Dolby Labs
Robert Reams, Streaming Appliances/DSP Concepts - Mill Creek, WA, USA
Samuel Sousa, Triton Digital - Montreal, QC, Canada
Jason Thibeault, Limelight


Abstract:
This session will discuss various methods of encoding audio for streaming. Which are most efficient for particular uses? A variety of bit rates and compression algorithms are in use; what are their advantages and disadvantages?

 
 

Saturday, October 27, 9:00 am — 10:00 am (Room 124)

TC Meeting: Transmission and Broadcasting


Abstract:
Technical Committee Meeting on Transmission and Broadcasting

 
 

Saturday, October 27, 9:00 am — 11:00 am (Room 130)

Sound for Pictures: SP1 - Post-Production Audio Techniques for Digital Cinema and Ancillary Markets

Chair:
Brian McCarty, Coral Sea Studios Pty. Ltd - Clifton Beach, QLD, Australia
Panelists:
Lon Bender, Soundelux
Jason LaRocca, La-Rocc-A-Fella, Inc. - Los Angeles, CA, USA
Brian A. Vessa, Sony Pictures Entertainment - Culver City, CA, USA


Abstract:
Cinema sound has traditionally been limited in fidelity because optical soundtracks, used until recently, were incapable of delivering full-bandwidth audio to the theaters. As Digital Cinema replaces film in distribution, sound mixers are now delivering uncompressed lossless tracks to the audience. These three top sound mixers will discuss the changes and challenges this has presented to them.

 
 

Saturday, October 27, 9:00 am — 10:30 am (Room 131)

Broadcast and Streaming Media: B5 - Stream Distribution: IP in the Mobile Environment

Chair:
David Layer, National Association of Broadcasters - Washington, DC, USA
Panelists:
Mike Daskalopoulos, Dolby Labs
Samuel Sousa, Triton Digital - Montreal, QC, Canada
Jason Thibeault, Limelight


Abstract:
The public has demanded portability in stream listening, whether on handheld devices or other mobile platforms, including the car. There are a variety of streaming technologies in the marketplace that can support portable streaming, and in this session representatives from three of the leading companies in this space will offer their insights and vision. Specific topics to be covered include: audio on mobile and its list of challenges; how mobile streaming can interact with traditional in-car listening; HTML5—savior or just more trouble?; and challenges in the interaction between IP streaming and advertising.

 
 

Saturday, October 27, 9:30 am — 12:00 pm (Tech Tours)

Technical Tour: TT3 - Fenix


Abstract:
The latest addition to San Rafael’s nightlife scene, this new 150-seat, 8,600-square-foot performance venue combines innovative food and drinks with outstanding acoustics and a Meyer Sound speaker system for a remarkable live music experience. Behind the scenes is a state-of-the-art production studio for recording and streaming live shows to the Internet. Architect/acoustician John Storyk, whose Walters-Storyk Design Group designed the club, will lead part of the tour. (http://fenixlive.com)

This event is limited to 44 tickets.

Technical Tours are made available on a first come, first served basis. Tickets can be purchased during normal registration hours at the convention center.

Price: Members $40/Nonmembers $50

 
 

Saturday, October 27, 10:45 am — 12:15 pm (Room 131)

Broadcast and Streaming Media: B6 - Audio for Mobile Television

Chair:
Brad Dick, Broadcast Engineering Magazine - Kansas City, MO, USA
Panelists:
Tim Carroll, Linear Acoustic Inc. - Lancaster, PA, USA
David Layer, National Association of Broadcasters - Washington, DC, USA
Robert Murch, Fox Television
Geir Skaaden, DTS, Inc.
Jim Starzynski, NBC Universal - New York, NY, USA
Dave Wilson, CEA - Arlington, VA, USA


Abstract:
Many TV stations recognize mobile DTV as a great new financial opportunity: by simply simulcasting their main channel, an entirely new revenue stream can be developed. But according to audio professionals, TV audio engineers should consider carefully the additional audio processing required to ensure proper loudness and intelligibility in a mobile device’s typically noisy listening environment. The proper solution may be more complex than just reducing dynamic range or adding high-pass filtering.

This panel of audio experts will provide in-depth guidance on steps that may be taken to maximize the performance of your mobile DTV channel.
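For concreteness, the "naive" chain the abstract warns about can be sketched as a one-pole high-pass followed by static above-threshold gain reduction. All parameter values (150 Hz cutoff, 0.25 threshold, 4:1 ratio) are invented for illustration; this is the baseline the panel argues is insufficient, not a recommended design:

```python
import numpy as np

def naive_mobile_chain(x, fs, cutoff=150.0, threshold=0.25, ratio=4.0):
    """The 'obvious' mobile-audio chain: a first-order RC high-pass
    followed by instantaneous static compression above a threshold.
    No attack/release smoothing, no loudness model -- deliberately crude."""
    rc = 1.0 / (2 * np.pi * cutoff)
    alpha = rc / (rc + 1.0 / fs)
    y = np.empty_like(x)
    prev_x = prev_y = 0.0
    for i, xi in enumerate(x):
        prev_y = alpha * (prev_y + xi - prev_x)  # one-pole high-pass
        prev_x = xi
        y[i] = prev_y
    mag = np.abs(y)
    over = mag > threshold
    y[over] = np.sign(y[over]) * (threshold + (mag[over] - threshold) / ratio)
    return y

# DC (which a tiny speaker cannot reproduce) is removed, peaks are capped.
y = naive_mobile_chain(np.ones(4096), 48000)
```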

 
 

Saturday, October 27, 11:00 am — 1:00 pm (Room 130)

Sound for Pictures: SP2 - Reconsidering Standards for Cinema Sound—Alternatives to ISO 2969

Chair:
Brian McCarty, Coral Sea Studios Pty. Ltd - Clifton Beach, QLD, Australia
Panelists:
Keith Holland, University of Southampton - Southampton, UK
Floyd Toole, Acoustical consultant to Harman, ex. Harman VP Acoustical Engineering - Oak Park, CA, USA


Abstract:
For over eighty years ISO 2969 (aka SMPTE S202) has been a cornerstone of the film sound reproduction "B-Chain." Like the RIAA curve, it was originally implemented to compensate for defects in the delivery medium. Groundbreaking acoustical research, led by Philip Newell and Dr. Keith Holland, has exposed shortcomings both in the standard and in its testing methodology. This panel will examine the implications of these standards as film rapidly shifts to Digital Cinema delivery, as the mixing rooms in use become smaller, and as full-bandwidth soundtracks and newer formats such as 3-D audio are delivered to theaters for reproduction.

 
 

Saturday, October 27, 11:00 am — 1:00 pm (Room 133)

Workshop: W5 - Loudness Wars: The Wrong Drug?

Chair:
Thomas Lund, TC Electronic A/S - Risskov, Denmark
Panelists:
John C. Atkinson, Stereophile Magazine - New York, NY, USA
Florian Camerer, ORF - Austrian TV - Vienna, Austria; EBU - European Broadcasting Union
Robert Katz, Digital Domain Mastering - Orlando, FL, USA
George Massenburg, Schulich School of Music, McGill University - Montreal, Quebec, Canada


Abstract:
Newly produced pop/rock music rarely sounds good on fine loudspeakers. Could it be that the wrong mastering drug has been used for decades, affecting Peak to Average Ratio instead of Loudness Range? With grim side effects all around—and years of our music heritage irreversibly harmed—the panel provides a new status on the loudness wars and sets out to investigate the difference between the two from a technical, a perceptual, and a practical point of view. In a normalized world, bad drugs will no longer be compensated by a benefit of being loud. Learn to distinguish between a cure and quack practice, and save your next album.

 
 

Saturday, October 27, 12:00 pm — 1:00 pm (Room 120)

Product Design: PD5 - Graphical Audio/DSP Applications Development Environment for Fixed and Floating Point Processors

Presenter:
Miguel Chavez, Analog Devices


Abstract:
Graphical development environments have been used in the audio industry for a number of years. An ideal graphical environment not only allows for algorithm development and prototyping but also facilitates the development of run-time DSP applications by producing production-ready code. This presentation will discuss how a graphical environment’s real-time control and parameter tuning makes audio DSPs easy to evaluate, design, and use, resulting in shortened development time and reduced time-to-market. It will then describe the software architecture decisions and design challenges behind a new and expanded development environment for commercially available audio-centric fixed- and floating-point processors.

 
 

Saturday, October 27, 12:45 pm — 1:45 pm (Room 131)

Broadcast and Streaming Media: B7 - Maintenance, Repair, and Troubleshooting

Chair:
Kirk Harnack, Telos Alliance - Nashville, TN, USA; South Seas Broadcasting Corp. - Pago Pago, American Samoa
Panelists:
Dan Mansergh, KQED
Bill Sacks, Orban / Optimod Refurbishing - Hollywood, MD, USA
Kimberly Sacks, Optimod Refurbishing - Hollywood, MD, USA
Mike Pappas, Lakewood, CO, USA
Milos Nemcik


Abstract:
Much of today's audio equipment may be categorized as “consumer, throw-away” gear, or is so complex that factory assistance is required for a board or module swap. Yet the art of maintenance, repair, and troubleshooting is as important as ever, even as the areas of focus may be changing. This session brings together some of the sharpest troubleshooters in the audio business. They'll share their secrets for finding problems, fixing them, and working to ensure they don't happen again. We'll delve into troubleshooting at the system, module, and component levels, and explain some guiding principles that top engineers share.

 
 

Saturday, October 27, 2:00 pm — 3:30 pm (Room 123)


Network Audio: N2 - Open IP Protocols for Audio Networking

Presenter:
Kevin Gross, AVA Networks - Boulder, CO, USA


Abstract:
The networking and telecommunication industry has its own set of network protocols for carriage of audio and video over IP networks. These protocols have been widely deployed for telephony and teleconferencing applications, internet streaming, and cable television. This tutorial will acquaint attendees with these protocols and their capabilities and limitations. The relationship to AVB protocols will be discussed.

Specifically, attendees will learn about Internet protocol (IP), voice over IP (VoIP), IP television (IPTV), HTTP streaming, real-time transport protocol (RTP), real-time transport control protocol (RTCP), real-time streaming protocol (RTSP), session initiation protocol (SIP), session description protocol (SDP), Bonjour, session announcement protocol (SAP), differentiated services (DiffServ), and IEEE 1588 precision time protocol (PTP).

An overview of AES standards work on X-192, a project adapting these protocols to high-performance audio applications, will also be given.
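Of the protocols listed above, RTP is the one that actually carries the audio samples, and its fixed 12-byte header (RFC 3550) is simple enough to parse directly. The sketch below builds and decodes a header; the field values (payload type 11, i.e., L16 mono per RFC 3551, sequence 1000) are arbitrary examples:

```python
import struct

def parse_rtp_header(packet):
    """Parse the fixed 12-byte RTP header (RFC 3550).
    CSRC entries (4 bytes each, `csrc_count` of them) and the
    payload follow the fixed header."""
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

# Version 2, payload type 11 (L16 mono per RFC 3551), big-endian fields.
pkt = struct.pack("!BBHII", 0x80, 11, 1000, 160, 0xDEADBEEF)
hdr = parse_rtp_header(pkt)
print(hdr["version"], hdr["payload_type"], hdr["sequence"])  # 2 11 1000
```

The sequence number and timestamp are what receivers use for loss detection and playout scheduling; RTCP then reports the observed loss and jitter back to the sender.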

 
 

Saturday, October 27, 2:00 pm — 4:00 pm (Room 133)

Workshop: W6 - Are You Ready for the New Media Express? or Dealing with Today's Audio Delivery Formats

Chair:
Jim Kaiser, CEMB / Belmont University - Nashville, TN, USA
Panelists:
Robert Bleidt, Fraunhofer USA Digital Media Technologies - San Jose, CA, USA
Stefan Bock, msm-studios GmbH - Munich, Germany
David Chesky, HD Tracks/Chesky Records
Robert Katz, Digital Domain Mastering - Orlando, FL, USA


Abstract:
With the rapid dominance of digital delivery media for audio, traditional physical media and their associated standards are losing relevance. Today's audio mastering engineer is expected to provide an appropriate format for any of a client's expanding uses. The ability to do so properly is affected by the preceding production and mixing processes, as well as by an understanding of what follows. This workshop will focus on detailing newer audio delivery formats and the typical processes a digital file goes through on its way to the end consumer. The intention is to provide the know-how to ensure that one is not technically compromising the client's music when providing for various lower- and higher-resolution releases (e.g., Mastered for iTunes, HD Tracks, etc.).

 
 

Saturday, October 27, 2:00 pm — 4:00 pm (Room 131)

Broadcast and Streaming Media: B8 - Loudness and Metadata—Living with the CALM Act

Chair:
Joel Spector, Freelance Television and Theater Sound Designer - Riverdale, NY, USA
Panelists:
Florian Camerer, ORF - Austrian TV - Vienna, Austria; EBU - European Broadcasting Union
Tim Carroll, Linear Acoustic Inc. - Lancaster, PA, USA
Stephen Lyman, Dolby Laboratories - San Francisco, CA, USA
Robert Murch, Fox Television
Lon Neumann, Neumann Technologies - Sherman Oaks, CA, USA
Robert Seidel, CBS - New York, NY, USA
Jim Starzynski, NBC Universal - New York, NY, USA


Abstract:
The Commercial Advertisement Loudness Mitigation (CALM) Act was signed by President Obama in December 2010. Enforcement by the FCC will begin in December of this year.

Television broadcasters and Multi-Channel Video Program Distributors (MVPDs) are required to put in place procedures, software and hardware to “effectively control program-to-interstitial loudness … and loudness management at the boundaries of programs and interstitial content.” Objective data must be supplied to the FCC to support compliance with the legislation as well as timely resolution of listener complaints. Similar rules have been developed in the UK and other parts of the world.

Members of our panel of experts have worked tirelessly to either create loudness control recommendations that have become the law or to bring those recommendations to implementation at the companies they represent. This session will cover the FCC’s Report and Order on the CALM Act, the development of the ATSC’s A/85 Recommended Practice that is now part of the U.S. legislation, and both domestic and European technical developments by major media distributors and P/LOUD.
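The A/85 practice referenced above rests on the ITU-R BS.1770 loudness measurement. A deliberately simplified, single-channel sketch of that measurement is shown below: mean square over 400 ms blocks with 75% overlap and a -70 LKFS absolute gate. Real meters additionally apply the K pre-filter (omitted here, which shifts a 1 kHz reading by roughly 0.7 LU), per-channel weighting, and a relative gate, so this is illustrative only:

```python
import numpy as np

def integrated_loudness(x, fs):
    """Simplified single-channel BS.1770-style integrated loudness:
    400 ms blocks, 75% overlap, absolute gate at -70 LKFS.
    Omits K-weighting, channel gains, and the relative gate."""
    block = int(0.400 * fs)
    hop = block // 4  # 75% overlap
    powers = []
    for start in range(0, len(x) - block + 1, hop):
        ms = np.mean(x[start:start + block] ** 2)
        if -0.691 + 10 * np.log10(ms + 1e-12) > -70.0:  # drop silence
            powers.append(ms)
    return -0.691 + 10 * np.log10(np.mean(powers))

# A full-scale sine has mean square 0.5, reading about -3.7 in this sketch.
fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 997 * t)
print(round(float(integrated_loudness(tone, fs)), 1))  # -3.7
```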

 
 

Saturday, October 27, 2:00 pm — 6:00 pm (Room 121)

Paper Session: P10 - Transducers

Chair:
Alex Voishvillo, JBL Professional - Northridge, CA, USA

P10-1 The Relationship between Perception and Measurement of Headphone Sound Quality
Sean Olive, Harman International - Northridge, CA, USA; Todd Welti, Harman International - Northridge, CA, USA
Double-blind listening tests were performed on six popular circumaural headphones to study the relationship between their perceived sound quality and their acoustical performance. In terms of overall sound quality, the most preferred headphones were perceived to have the most neutral spectral balance with the lowest coloration. When measured on an acoustic coupler, the most preferred headphones produced the smoothest and flattest amplitude response, a response that deviates from the current IEC recommended diffuse-field calibration. The results provide further evidence that the IEC 60268-7 headphone calibration is not optimal for achieving the best sound quality.
Convention Paper 8744

P10-2 On the Study of Ionic Microphones
Hiroshi Akino, Audio-Technica Co. - Machida-shi, Tokyo, Japan; Kanagawa Institute of Technology - Kanagawa, Japan; Hirofumi Shimokawa, Kanagawa Institute of Technology - Kanagawa, Japan; Tadashi Kikutani, Audio-Technica U.S., Inc. - Stow, OH, USA; Jackie Green, Audio-Technica U.S., Inc. - Stow, OH, USA
Diaphragm-less ionic loudspeakers using both low-temperature and high-temperature plasma methods have already been studied and developed for practical use. This study examined using similar methods to create a diaphragm-less ionic microphone. Although the low-temperature method was not practical due to high noise levels in the discharges, the high-temperature method exhibited a useful shifting of the oscillation frequency. By performing FM detection on this oscillation frequency shift, audio signals were obtained. Accordingly, an ionic microphone was tested in which the frequency response level using high-temperature plasma increased as the sound wave frequency decreased. Maintaining performance proved difficult as discharges in the air led to wear of the needle electrode tip and adhesion of products of the discharge. Study results showed that the stability of the discharge corresponded to the non-uniform electric field that was dependent on the formation shape of the high-temperature plasma, the shape of the discharge electrode, and the use of inert gas that protected the needle electrode. This paper reviews the experimental outcome of the two ionic methods, and considerations given to resolve the tip and discharge product and stability problems.
Convention Paper 8745

P10-3 Midrange Resonant Scattering in Loudspeakers
Juha Backman, Nokia Corporation - Espoo, Finland
One of the significant sources of midrange coloration in loudspeakers is the resonant scattering of the exterior sound field from ports, recesses, or horns. This paper discusses the qualitative behavior of the scattered sound and introduces a computationally efficient model for such scattering, based on waveguide models for the acoustical elements (ports, etc.), and mutual radiation impedance model for their coupling to the sound field generated by the drivers. In the simplest case of driver-port interaction in a direct radiating loudspeaker an approximate analytical expression can be written for the scattered sound. These methods can be applied to numerical optimization of loudspeaker layouts.
Convention Paper 8746

P10-4 Long Distance Induction Drive Loud Hailer Characterization
Marshall Buck, Psychotechnology, Inc. - Los Angeles, CA, USA; Wisdom Audio; David Graebener, Wisdom Audio Corporation - Carson City, NV, USA; Ron Sauro, NWAA Labs, Inc. - Elma, WA, USA
Further development of the high power, high efficiency induction drive compression driver when mounted on a tight pattern horn results in a high performance loud hailer. The detailed performance is tested in an independent laboratory with unique capabilities, including indoor frequency response at a distance of 4 meters. Additional characteristics tested include maximum burst output level, polar response, and directivity balloons. Outdoor tests were also performed at distances up to 220 meters and included speech transmission index and frequency response. Plane wave tube driver-phase plug tests were performed to assess incoherence, power compression, efficiency, and frequency response.
Convention Paper 8747

P10-5 Optimal Configurations for Subwoofers in Rooms Considering Seat to Seat Variation and Low Frequency Efficiency
Todd Welti, Harman International - Northridge, CA, USA
The placement of subwoofers and listeners in small rooms and the size and shape of the room all have profound influences on the resulting low frequency response. In this study, a computer model was used to investigate a large number of room, seating, and subwoofer configurations. For each configuration simulated, metrics for seat to seat consistency and bass efficiency were calculated and combined in a newly proposed metric, which is intended as an overall figure of merit. The data presented has much practical value in small room design for new rooms, or even for modifying existing configurations.
Convention Paper 8748
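The seat-to-seat variation the paper targets is driven by the room's standing-wave modes, which for an idealized rigid-walled rectangular room follow a closed-form expression. As background only (the paper's own model and metric are more elaborate), the modes below a chosen frequency can be enumerated directly; the 6 x 4 x 2.5 m room is an invented example:

```python
import math
from itertools import product

def room_modes(lx, ly, lz, fmax=120.0, c=343.0):
    """Mode frequencies of a rigid-walled rectangular room:
    f = (c/2) * sqrt((nx/lx)^2 + (ny/ly)^2 + (nz/lz)^2),
    covering axial, tangential, and oblique modes up to fmax."""
    modes = []
    for nx, ny, nz in product(range(9), repeat=3):
        if nx == ny == nz == 0:
            continue
        f = (c / 2) * math.sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
        if f <= fmax:
            modes.append((round(f, 1), (nx, ny, nz)))
    return sorted(modes)

# A 6 x 4 x 2.5 m room: lowest mode is the 1-0-0 axial at c/(2*6) ~ 28.6 Hz.
for f, n in room_modes(6.0, 4.0, 2.5)[:5]:
    print(f, n)
```

Clusters and gaps in this mode list are what make the low-frequency response so position-dependent, which is why subwoofer placement and multi-sub strategies matter.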

P10-6 Modeling the Large Signal Behavior of Micro-Speakers
Wolfgang Klippel, Klippel GmbH - Dresden, Germany
The mechanical and acoustical losses considered in the lumped parameter modeling of electro-dynamical transducers may become a dominant source of nonlinear distortion in micro-speakers, tweeters, headphones, and some horn compression drivers where the total quality factor Qts is not dominated by the electrical damping realized by a high force factor Bl and a low voice resistance Re. This paper presents a nonlinear model describing the generation of the distortion and a new dynamic measurement technique for identifying the nonlinear resistance Rms(v) as a function of voice coil velocity v. The theory and the identification technique are verified by comparing distortion and other nonlinear symptoms measured on micro-speakers as used in cellular phones with the corresponding behavior predicted by the nonlinear model.
Convention Paper 8749
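The effect of a velocity-dependent mechanical resistance can be illustrated with a plain lumped mass-spring-damper simulation. The form Rms(v) = R0 + R1*|v| and all parameter values below are invented for illustration and are not Klippel's fitted model; the point is only that making the losses grow with velocity compresses the excursion:

```python
import numpy as np

def simulate(r1, fs=192000, f0=300.0, dur=0.05):
    """Lumped driver model m*x'' + Rms(v)*x' + k*x = F(t) with an
    illustrative velocity-dependent resistance Rms(v) = R0 + R1*|v|,
    integrated with semi-implicit Euler."""
    m, k, r0, force = 0.5e-3, 1000.0, 0.05, 0.02  # kg, N/m, N*s/m, N
    dt = 1.0 / fs
    x = v = 0.0
    xs = np.empty(int(dur * fs))
    for i in range(len(xs)):
        f_drive = force * np.sin(2 * np.pi * f0 * i * dt)
        a = (f_drive - (r0 + r1 * abs(v)) * v - k * x) / m
        v += a * dt
        x += v * dt
        xs[i] = x
    return xs

linear = simulate(r1=0.0)
nonlinear = simulate(r1=2.0)
# Extra velocity-dependent losses reduce the excursion peak.
print(np.max(np.abs(nonlinear)) < np.max(np.abs(linear)))
```

In a real micro-speaker this velocity-dependent loss shows up as the distortion the paper models and measures.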

P10-7 An Indirect Study of Compliance and Damping in Linear Array Transducers
Richard Little, Far North Electroacoustics - Surrey, BC, Canada
A linear array transducer is a dual-motor, dual-coil, multi-cone, tubularly-shaped transducer whose shape defeats many measurement techniques that can be used to examine directly the force-deflection behavior of its diaphragm suspension system. Instead, the impedance curve of the transducer is compared against theoretical linear models to determine best-fit parameter values. The variation in the value of these parameters with increasing input signal levels is also examined.
Convention Paper 8750

P10-8 Bandwidth Extension for Microphone Arrays
Benjamin Bernschütz, Cologne University of Applied Sciences - Cologne, Germany; Technical University of Berlin - Berlin, Germany
Microphone arrays are a focus of interest for spatial audio recording applications and the analysis of sound fields. One of their major problems, however, is a limited operational frequency range: especially at high frequencies, spatial aliasing artifacts tend to disturb the output signal. This severely restricts the applicability and acceptance of microphone arrays in practice. A new approach to enhancing the bandwidth of microphone arrays is presented, based on some restrictive assumptions concerning natural sound fields, the separate acquisition and treatment of spatiotemporal and spectrotemporal sound field properties, and the subsequent synthesis of array signals for critical frequency bands. Additionally, the method can be used for spatial audio data-reduction algorithms.
Convention Paper 8751

 
 

Saturday, October 27, 3:30 pm — 6:30 pm (Room 130)

Sound for Pictures: SP3 - New Multichannel Formats for 3-D Cinema and Home Theater

Co-chairs:
Christof Faller, Illusonic - Uster, Switzerland
Brian McCarty, Coral Sea Studios Pty. Ltd - Clifton Beach, QLD, Australia
Panelists:
Kimio Hamasaki, NHK - Tokyo, Japan
Jeff Levison, IOSONO GmbH - Germany
Nicolas Tsingos, Dolby Labs - San Francisco, CA, USA
Wilfried Van Baelen, Auro Technologies - Mol, Belgium
Brian A. Vessa, Sony Pictures Entertainment - Culver City, CA, USA


Abstract:
Several new digital cinema formats are under active consideration for cinema soundtrack production. Each was developed to create realistic sound "motion" in parallel with 3-D pictures. This workshop presents a rare first opportunity for the proponents of these leading systems to discuss their specific technologies.

 
 

Saturday, October 27, 4:00 pm — 6:00 pm (Room 123)

Network Audio: N3 - Audio Networks—Paradigm Shift for Broadcasters

Chair:
Stefan Ledergerber, Lawo Group - Zurich, Switzerland; LES Switzerland GmbH
Panelists:
Kevin Gross, AVA Networks - Boulder, CO, USA
Andreas Hildebrand, ALC NetworX - Munich, Germany
Sonja Langhans, Institut für Rundfunktechnik - Munich, Germany
Lee Minich, Lab X Technologies - Rochester, NY, USA
Greg Shay, Telos Alliance/Axia - Cleveland, OH, USA
Kieran Walsh, Audinate Pty. Ltd. - Ultimo, NSW, Australia


Abstract:
Today a variety of audio networking technologies are emerging. However, a number of questions related to workflow in broadcasting organizations seem still unanswered. This panel will try to find possible answers to some of the hot topics, such as:

• Will traditional crosspoint matrix switches (routers) disappear and fully be replaced by networks?
• Which component will deal with signal processing, which is currently done within audio routers?
• Which department is handling audio networks: audio or IT?
• How do we educate personnel handling audio networks?

The panelists will explain their views from a technology provider point of view, but lively participation by the audience is highly appreciated.

 
 

Saturday, October 27, 4:00 pm — 6:00 pm (Room 134)

Broadcast and Streaming Media: B9 - What Happens to Your Production When Played Back on the Various Media

Chair:
David Bialik, CBS - New York, NY, USA
Panelists:
Karlheinz Brandenburg, Fraunhofer Institute for Digital Media Technology IDMT - Ilmenau, Germany; Ilmenau University of Technology - Ilmenau, Germany
Frank Foti, Omnia - New York, NY, USA
George Massenburg, Schulich School of Music, McGill University - Montreal, Quebec, Canada
Greg Ogonowski, Orban - San Leandro, CA, USA
Robert Orban, Orban - San Leandro, CA, USA


Abstract:
Everyone has a different perspective when producing or playing back audio. This session will look at what happens to the audio product during the stages of recording, reproduction, digital playback, radio broadcast, and streaming. Is it the same experience for everyone?

 
 

Sunday, October 28, 9:00 am — 10:30 am (Room 123)

Network Audio: N4 - AVnu – The Unified AV Network: Overview and Panel Discussion

Chair:
Rob Silfvast, Avid - Mountain View, CA, USA
Panelists:
Ellen Juhlin, Meyer Sound - Berkeley, CA, USA; AVnu Alliance
Denis Labrecque, Analog Devices - San Jose, CA, USA
Lee Minich, Lab X Technologies - Rochester, NY, USA
Bill Murphy, Extreme Networks
Michael Johas Teener, Broadcom - Santa Cruz, CA, USA


Abstract:
This session will provide an overview of the AVnu Alliance, a consortium of audio and video product makers and core technology companies committed to delivering an interoperable open standard for audio/video networked connectivity built upon IEEE Audio Video Bridging standards. AVnu offers a logo-testing program that allows products to become certified for interoperability, much like the Wi-Fi Alliance provides for the IEEE 802.11 family of standards. Representatives from several different member companies will speak in this panel discussion and provide insights about AVB technology and participation in the AVnu Alliance.

 
 

Sunday, October 28, 9:00 am — 10:30 am (Room 131)

Broadcast and Streaming Media: B10 - Sound Design: How Does that "Thing" Go Bump in the Night?

Panelists:
David Shinn, SueMedia Productions - Carle Place, NY, USA
Sue Zizza, SueMedia Productions - Carle Place, NY, USA


Abstract:
Whether you are working with props, recording sounds on location, or using pre-recorded sounds from a library, the sound design elements you choose will impact the aesthetics of the stories you tell.

Working with scenes from the AES Performance, Poe A Life and Stories in Sound, this session will examine working with sound effects props in the studio and recording elements on location. Recording and performance techniques will be discussed. A brief overview of sound effects libraries will also be included.

 
 

Sunday, October 28, 10:45 am — 12:15 pm (Room 131)

Broadcast and Streaming Media: B11 - Listener Fatigue and Retention

Chair:
Dave Wilson, CEA - Arlington, VA, USA
Panelists:
Stephen Ambrose, Asius Technologies, LLC - Longmont, CO, USA
J. Todd Baker, DTS, Inc. - Laguna Hills, CA, USA
Sean Olive, Harman International - Northridge, CA, USA
Robert Reams, Streaming Appliances/DSP Concepts - Mill Creek, WA, USA
Bill Sacks, Orban / Optimod Refurbishing - Hollywood, MD, USA


Abstract:
This panel will discuss listener fatigue and its impact on listener retention. While listener fatigue is an issue of interest to broadcasters, it is also an issue of interest to telecommunications service providers, consumer electronics manufacturers, music producers, and others. Fatigued listeners to a broadcast program may tune out, while fatigued listeners to a cell phone conversation may switch to another carrier, and fatigued listeners to a portable media player may purchase another company’s product. The experts on this panel will discuss their research and experiences with listener fatigue and its impact on listener retention.

 
 

Sunday, October 28, 11:00 am — 12:30 pm (Room 123)

Network Audio: N5 - Interoperability Issues in Audio Transport over IP-Based Networks

Chair:
Timothy Shuttleworth, Oceanside, CA, USA
Panelists:
Kevin Gross, AVA Networks - Boulder, CO, USA
Sonja Langhans, Institut für Rundfunktechnik - Munich, Germany
Lee Minich, Lab X Technologies - Rochester, NY, USA
Greg Shay, Telos Alliance/Axia - Cleveland, OH, USA


Abstract:
This workshop will focus on interoperability issues in two areas of audio/media transport over IP-based networks. These are:

• Multichannel audio distribution over Ethernet LANs for low-latency, high-reliability interconnections in home, automobile, and commercial environments. Interoperability standards and methods based on the Ethernet AVB suite of IEEE standards, as well as the AES X-192 interoperability project, will be discussed.

• Audio Contribution over Internet Protocol (ACIP and ACIP2) interoperability issues, discussed from both European and US perspectives, with presenters covering activities within the EBU community and the US broadcasting market. Audio-over-IP methods are widely used in remote broadcast situations; the challenges and solutions in achieving reliable content distribution will be examined.

Cross-vendor interoperability is increasingly demanded in all audio applications markets, so this topic will be of interest to audio systems designers and users across the gamut of market segments. Two presenters will provide an overview of each topic area.

 
 

Sunday, October 28, 11:00 am — 1:00 pm (Room 132)


Tutorial: T8 - An Overview of Audio System Grounding and Interfacing

Presenter:
William E. Whitlock, Jensen Transformers, Inc. - Chatsworth, CA, USA; Whitlock Consulting - Oxnard, CA, USA


Abstract:
Equipment makers like to pretend the problems don’t exist, but this tutorial replaces hype and myth with insight and knowledge, revealing the true causes of system noise and ground loops. Unbalanced interfaces are exquisitely vulnerable to noise due to an intrinsic problem. Although balanced interfaces are theoretically noise-free, they’re widely misunderstood by equipment designers, which often results in inadequate noise rejection in real-world systems. Because of a widespread design error, some equipment has a built-in noise problem. Simple, no-test-equipment, troubleshooting methods can pinpoint the location and cause of system noise. Ground isolators in the signal path solve the fundamental noise coupling problems. Also discussed are unbalanced to balanced connections, RF interference, and power line treatments. Some widely used “cures” are both illegal and deadly.
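To first order, a balanced input's noise rejection scales with the ratio of its common-mode input impedance to the impedance imbalance between the two signal lines, which is why "theoretically noise-free" interfaces underperform in practice. The sketch below applies that first-order ratio only; the exact constant, the example impedances (a 20 kilohm typical active input vs. a 100 megohm bootstrapped or transformer input, 10 ohms of source imbalance), and the resulting figures are illustrative assumptions, not Whitlock's published analysis:

```python
import math

def approx_cmrr_db(z_cm_ohms, z_imbalance_ohms):
    """First-order common-mode rejection estimate for a balanced input:
    rejection ~ 20*log10(Zcm / delta-Z). Real inputs also depend on
    frequency and component tolerances, so treat this as a rule of thumb."""
    return 20 * math.log10(z_cm_ohms / z_imbalance_ohms)

# Typical active input vs. a very high common-mode impedance input,
# each fed from a source with 10 ohms of line imbalance.
print(round(approx_cmrr_db(20e3, 10.0)))   # 66
print(round(approx_cmrr_db(100e6, 10.0)))  # 140
```

The large gap between the two figures is the quantitative version of the tutorial's point: raising common-mode input impedance, e.g., with a transformer or ground isolator, buys rejection that circuit symmetry alone cannot.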

 
 

Sunday, October 28, 12:45 pm — 1:45 pm (Room 131)

Broadcast and Streaming Media: B12 - Troubleshooting Software Issues

Chair:
Jonathan Abrams, Nutmeg Post - New York, NY, USA
Panelists:
Connor Sexton, Avid Technology - Daly City, CA, USA
Charles Van Winkle, Adobe Systems Incorporated - Minneapolis, MN, USA


Abstract:
What should you do before contacting support? How do you make the most of online support resources? What kind of audio driver are you using and how does that interact with the rest of your system? What role does your plug-in platform play when troubleshooting? How can permissions wreak havoc on your system or workflow?

Get the answers to these questions and bring your own for Mac OS X, Windows, Adobe Audition, and Avid Pro Tools.

 
 

Sunday, October 28, 2:00 pm — 4:00 pm (Room 132)


Tutorial: T9 - Large Room Acoustics

Presenter:
Diemer de Vries, RWTH Aachen University - Aachen, Germany; TU Delft - Delft, Netherlands


Abstract:
In this tutorial, the traditional and modern ways to describe the acoustical properties of "large" rooms—having dimensions large in comparison to the average wavelength of the relevant frequencies of the speech or music to be (re-)produced—will be discussed, covering theoretical models, measurement techniques, and the link between objective data and human perception. Is it the reverberation time, or the impulse response, or is there more to take into account to arrive at a good assessment?

 
 

Sunday, October 28, 2:00 pm — 3:30 pm (Room 131)

Broadcast and Streaming Media: B13 - Lip Sync Issue

Chair:
Jonathan Abrams, Nutmeg Post - New York, NY, USA
Panelists:
Paul Briscoe, Harris Corporation - Toronto, ON, Canada
Bob Brown, AVID - San Francisco, CA, USA
Bram Desmet, Flanders Scientific, Inc. - Suwanee, GA, USA
Matthieu Parmentier, France Televisions - Paris, France


Abstract:
Lip sync remains a complex problem, with several causes and few solutions. From production through transmission and reception, there are many points where lip sync can either be properly corrected or made even worse. This session’s panel will discuss several key issues. What is the perspective of the EBU and SMPTE regarding lip sync? Are things being done in production that create this problem? Who is responsible for implementing the mechanisms that ensure lip sync is maintained when the signal reaches your television? Where do the latency issues exist? How can the latency be measured? What are the recommended tolerances? What correction techniques exist? How does video display design affect lip sync? What factors need to be accounted for in Digital Audio Workstations when working with external video monitors in a post environment? Join us as our panel addresses these questions and yours.

 
 

Sunday, October 28, 2:00 pm — 3:30 pm (Foyer)

Poster: P16 - Analysis and Synthesis of Sound

P16-1 Envelope-Based Spatial Parameter Estimation in Directional Audio CodingMichael Kratschmer, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany; Oliver Thiergart, International Audio Laboratories Erlangen - Erlangen, Germany; Ville Pulkki, Aalto University - Espoo, Finland
Directional Audio Coding provides an efficient description of spatial sound in terms of a few audio downmix signals and parametric side information, namely the direction-of-arrival (DOA) and diffuseness of the sound. This representation allows accurate reproduction of the recorded spatial sound with almost arbitrary loudspeaker setups. The DOA information can be efficiently estimated with linear microphone arrays by considering the phase information between the sensors. Due to the microphone spacing, however, the DOA estimates are corrupted by spatial aliasing at higher frequencies, which degrades the sound reproduction quality. In this paper we propose using the signal envelope to estimate the DOA at higher frequencies, thereby avoiding the spatial aliasing problem. Experimental results show that the presented approach has great potential for improving estimation accuracy and rendering quality.
Convention Paper 8791 (Purchase now)
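As a rough illustration of the phase-based DOA estimation the abstract starts from (a minimal two-microphone sketch, not the authors' implementation, and without the proposed envelope-based extension for frequencies above the aliasing limit):

```python
import numpy as np

def estimate_doa(x1, x2, fs, mic_dist, c=343.0):
    """Estimate direction of arrival (degrees from broadside) from the
    inter-microphone phase difference at the dominant FFT bin. Only valid
    below the spatial-aliasing limit f < c / (2 * mic_dist)."""
    n = len(x1)
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    k = np.argmax(np.abs(X1[1:])) + 1          # dominant bin, skipping DC
    f = k * fs / n
    phase = np.angle(X1[k] * np.conj(X2[k]))   # = 2*pi*f*delay when x2 lags x1
    tau = phase / (2 * np.pi * f)              # inter-microphone delay (s)
    return np.degrees(np.arcsin(np.clip(tau * c / mic_dist, -1.0, 1.0)))

# Simulated 500 Hz plane wave arriving 30 degrees off broadside.
fs, d, theta = 16000, 0.05, np.radians(30)
t = np.arange(2048) / fs
delay = d * np.sin(theta) / 343.0              # inter-mic propagation delay
x1 = np.sin(2 * np.pi * 500 * t)
x2 = np.sin(2 * np.pi * 500 * (t - delay))
doa = estimate_doa(x1, x2, fs, d)
```

At 500 Hz with 5 cm spacing the aliasing limit (3430 Hz) is not reached, so the phase estimate recovers the 30 degree arrival; above that limit the phase wraps, which is exactly the regime the paper's envelope-based method targets.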

P16-2 Approximation of Dynamic Convolution Exploiting Principal Component Analysis: Objective and Subjective Quality EvaluationAndrea Primavera, Università Politecnica delle Marche - Ancona, Italy; Stefania Cecchi, Università Politecnica delle Marche - Ancona, Italy; Laura Romoli, Università Politecnica delle Marche - Ancona, Italy; Michele Gasparini, Università Politecnica delle Marche - Ancona, Italy; Francesco Piazza, Università Politecnica delle Marche - Ancona, Italy
In recent years, several techniques have been proposed in the literature to emulate nonlinear electro-acoustic devices such as compressors, distortion units, and preamplifiers. Among them, dynamic convolution is one of the most common approaches to this task. In this paper an exhaustive objective and subjective analysis of a dynamic convolution operation based on principal component analysis is performed. Considering real nonlinear systems, such as a bass preamplifier, a distortion unit, and a compressor, comparisons with existing state-of-the-art techniques are carried out to demonstrate the effectiveness of the proposed approach.
Convention Paper 8792 (Purchase now)
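Dynamic convolution itself, the baseline technique the paper analyzes, can be sketched minimally: for each input sample, an impulse response is selected by signal level and its scaled copy is superposed. The IRs and threshold below are illustrative placeholders, not measured device responses:

```python
import numpy as np

def dynamic_convolution(x, irs, thresholds):
    """Naive dynamic convolution: for each input sample, pick the impulse
    response whose amplitude band contains |x[n]| and add its scaled copy
    to the output. irs: equal-length IRs ordered by level band."""
    L = len(irs[0])
    y = np.zeros(len(x) + L - 1)
    for n, s in enumerate(x):
        idx = np.searchsorted(thresholds, abs(s))  # which level band?
        y[n:n + L] += s * irs[idx]
    return y

# Two hypothetical IRs: a soft-level and a loud-level response of a device.
irs = [np.array([1.0, 0.5, 0.25]), np.array([1.0, 0.2, 0.0])]
y = dynamic_convolution(np.array([0.1, 0.9]), irs, thresholds=[0.5])
```

The per-sample IR switch is what makes the operation expensive; the paper's PCA approximation reduces the set of IRs to a few principal components, which this sketch does not attempt.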

P16-3 Optimized Implementation of an Innovative Digital Audio EqualizerMarco Virgulti, Università Politecnica delle Marche - Ancona, Italy; Stefania Cecchi, Università Politecnica delle Marche - Ancona, Italy; Andrea Primavera, Università Politecnica delle Marche - Ancona, Italy; Laura Romoli, Università Politecnica delle Marche - Ancona, Italy; Emanuele Ciavattini, Leaff Engineering - Ancona, Italy; Ferruccio Bettarelli, Leaff Engineering - Ancona, Italy; Francesco Piazza, Università Politecnica delle Marche - Ancona, Italy
Digital audio equalization is one of the most common operations in the acoustic field, but its performance depends on computational complexity and filter design techniques. Starting from a previous FIR implementation based on multirate systems and filterbank theory, an optimized digital audio equalizer is derived. The proposed approach employs all-pass IIR filters to improve the filterbank structure, which was developed to avoid ripple between adjacent bands. The effectiveness of the optimized implementation is shown by comparison with the FIR approach. The presented solution increases equalization performance while offering low computational complexity, low delay, and a uniform frequency response.
Convention Paper 8793 (Purchase now)

P16-4 Automatic Mode Estimation of Persian Musical SignalsPeyman Heydarian, London Metropolitan University - London, UK; Lewis Jones, London Metropolitan University - London, UK; Allan Seago, London Metropolitan University - London, UK
Musical mode is central to maqamic musical traditions that span from Western China to Southern Europe. A mode usually represents the scale and is, to some extent, an indication of the emotional content of a piece. Knowledge of the mode is useful when searching multicultural archives of maqamic musical signals, so modal information is worth including in a file's metadata. An automatic mode classification algorithm has potential applications in music recommendation and playlist generation, where pieces can be ordered by a perceptually accepted criterion such as mode; it could also serve as a framework for music composition and synthesis. This paper presents an algorithm for classifying Persian audio musical signals based on a generative approach, i.e., Gaussian Mixture Models (GMM), with chroma as the feature. The results are compared with a chroma-based method using a Manhattan distance measure that we previously developed.
Convention Paper 8794 (Purchase now)
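The chroma-plus-Manhattan-distance baseline mentioned at the end of the abstract can be sketched as follows. This is not the paper's GMM classifier; the quarter-tone bin count and the mode template names are illustrative assumptions:

```python
import numpy as np

def chroma_from_spectrum(mags, freqs, ref=440.0, bins=24):
    """Fold spectral magnitudes onto a 24-bin (quarter-tone) chroma vector,
    a resolution fine enough for the intervals of Persian modes."""
    valid = freqs > 0
    pitch = bins * np.log2(freqs[valid] / ref)   # bins per octave, octave-folded
    idx = np.round(pitch).astype(int) % bins
    chroma = np.bincount(idx, weights=mags[valid], minlength=bins)
    total = chroma.sum()
    return chroma / total if total > 0 else chroma

def classify_mode(chroma, templates):
    """Nearest-template classification under the Manhattan (L1) distance."""
    return min(templates, key=lambda name: np.abs(chroma - templates[name]).sum())

# Toy spectrum with peaks at 440 Hz and 660 Hz; hypothetical mode templates.
chroma = chroma_from_spectrum(np.array([1.0, 1.0]), np.array([440.0, 660.0]))
templates = {
    "mode_a": np.bincount([0, 14], minlength=24).astype(float) / 2,
    "mode_b": np.bincount([5, 9], minlength=24).astype(float) / 2,
}
label = classify_mode(chroma, templates)
```

In the generative approach of the paper, each mode would instead be modeled by a GMM over chroma frames and classification would pick the model with the highest likelihood.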

P16-5 Generating Matrix Coefficients for Feedback Delay Networks Using Genetic AlgorithmMichael Chemistruck, University of Miami - Coral Gables, FL, USA; Kyle Marcolini, University of Miami - Coral Gables, FL, USA; Will Pirkle, University of Miami - Coral Gables, FL, USA
This paper analyzes the use of the Genetic Algorithm (GA) in conjunction with a length-4 feedback delay network for audio reverberation applications. While it is possible to assign coefficient values to the feedback network manually, our goal was to automate the generation of these coefficients to produce a reverb whose characteristics are as close as possible to those of real room reverberation. To this end we designed a GA to be used with a delay-based reverb, which is better suited to real-time applications than the more computationally expensive convolution reverb.
Convention Paper 8795 (Purchase now)
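A minimal length-4 feedback delay network of the kind whose matrix the GA would optimize might look like this sketch. The delays and the hand-picked orthogonal Hadamard-type matrix stand in for evolved coefficients; the GA search itself is not shown:

```python
import numpy as np

def fdn_reverb(x, delays, matrix, gain=0.7, n_out=1000):
    """Length-4 feedback delay network: four delay lines whose outputs are
    mixed through a 4x4 feedback matrix and fed back. Stable for
    |gain| < 1 when the matrix is orthogonal."""
    lines = [np.zeros(d) for d in delays]
    ptrs = [0, 0, 0, 0]
    y = np.zeros(n_out)
    for n in range(n_out):
        outs = np.array([lines[i][ptrs[i]] for i in range(4)])  # delay-line taps
        y[n] = outs.sum()
        fb = gain * (matrix @ outs)                             # mixed feedback
        xin = x[n] if n < len(x) else 0.0
        for i in range(4):
            lines[i][ptrs[i]] = xin + fb[i]
            ptrs[i] = (ptrs[i] + 1) % delays[i]
    return y

# Orthogonal Hadamard-type matrix as a hand-picked starting point.
H = 0.5 * np.array([[1.0, 1, 1, 1], [1, -1, 1, -1],
                    [1, 1, -1, -1], [1, -1, -1, 1]])
impulse = np.zeros(1000); impulse[0] = 1.0
y = fdn_reverb(impulse, [149, 211, 263, 293], H)
```

Mutually prime delay lengths spread the echoes; a GA as described in the abstract would score candidate matrices against a target room response instead of fixing H.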

P16-6 Low Complexity Transient Detection in Audio Coding Using an Image Edge Detection ApproachJulien Capobianco, France Telecom Orange Labs/TECH/OPERA - Lannion Cedex, France; Université Pierre et Marie Curie - Paris, France; Grégory Pallone, France Telecom Orange Labs/TECH/OPERA - Lannion Cedex, France; Laurent Daudet, University Paris Diderot - Paris, France
In this paper we propose a new low-complexity method of transient detection using an image edge detection approach. The time-frequency spectrum of an audio signal is treated as an image: using an appropriate mapping function to convert energy bins into pixels, audio transients correspond to rectilinear edges in the image, and the transient detection problem becomes an edge detection problem. Inspired by standard image edge detection methods, we derive a detection function specific to rectilinear edges that can be implemented with very low complexity. Our method is evaluated in two practical audio coding applications: as a replacement for the SBR transient detector in HE-AAC v2, and in the stereo parametric tool of MPEG USAC.
Convention Paper 8796 (Purchase now)
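The idea of treating the spectrogram as an image and looking for vertical (rectilinear) edges can be illustrated with a simple horizontal-difference kernel. This is an assumption-laden toy, not the paper's detection function; the FFT size, hop, and threshold are arbitrary choices:

```python
import numpy as np

def detect_transients(x, n_fft=256, hop=128, thresh=4.0):
    """Treat the log-magnitude spectrogram as an image and detect frames
    where energy jumps across many bands, using a [-1, +1] horizontal
    difference per frequency row (a crude vertical-edge detector)."""
    win = np.hanning(n_fft)
    frames = []
    for start in range(0, len(x) - n_fft, hop):
        frames.append(np.abs(np.fft.rfft(win * x[start:start + n_fft])))
    S = np.log1p(np.array(frames).T)                       # rows=freq, cols=time
    edge = np.maximum(np.diff(S, axis=1), 0).sum(axis=0)   # per-column edge strength
    return np.flatnonzero(edge > thresh * (np.median(edge) + 1e-12))

# A noise burst starting mid-signal should produce a strong edge column.
rng = np.random.default_rng(0)
x = np.zeros(4096)
x[2048:] = rng.standard_normal(2048)
hits = detect_transients(x)
```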

P16-7 Temporal Coherence-Based Howling Detection for Speech ApplicationsChengshi Zheng, Chinese Academy of Sciences - Beijing, China; Hao Liu, Chinese Academy of Sciences - Beijing, China; Renhua Peng, Chinese Academy of Sciences - Beijing, China; Xiaodong Li, Chinese Academy of Sciences - Beijing, China
This paper proposes a novel howling detection criterion for speech applications based on temporal coherence (referred to as TCHD). The proposed TCHD criterion exploits the fact that speech has only a relatively short coherence time, while the coherence times of true howling components are nearly infinite, since howling components are perfectly correlated with themselves even at large delays. The proposed TCHD criterion is computationally efficient for two reasons. First, the fast Fourier transform (FFT) can be applied directly to compute the temporal coherence. Second, the criterion does not need to identify spectral peaks in the raw periodogram of the microphone signal. Simulation and experimental results show the validity of the proposed TCHD criterion.
Convention Paper 8797 (Purchase now)
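The core observation, that a sustained howling tone stays correlated with itself at large delays while speech-like signals do not, can be illustrated with an FFT-based autocorrelation sketch (a simplified stand-in, not the authors' TCHD criterion; the lag and signals are arbitrary):

```python
import numpy as np

def temporal_coherence(x, lag):
    """Normalized autocorrelation at a large lag, computed via FFT
    (zero-padded to avoid circular wrap-around). Near its maximum for a
    sustained howling-like tone, near 0 for signals with short coherence."""
    n = len(x)
    X = np.fft.rfft(x, 2 * n)
    r = np.fft.irfft(np.abs(X) ** 2)[:n]       # linear autocorrelation
    return r[lag] / r[0]

fs = 8000
t = np.arange(fs) / fs
howl = np.sin(2 * np.pi * 1000 * t)            # perfectly self-correlated
rng = np.random.default_rng(1)
noise = rng.standard_normal(fs)                # speech-like: short coherence
lag = 2000                                     # 250 ms, far beyond speech coherence
c_howl = temporal_coherence(howl, lag)
c_noise = temporal_coherence(noise, lag)
```

The tone keeps a large coherence value at a quarter-second lag while the noise collapses toward zero, which is the separation the TCHD criterion thresholds per frequency component.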

P16-8 A Mixing Matrix Estimation Method for Blind Source Separation of Underdetermined Audio MixtureMingu Lee, Samsung Electronics Co. - Suwon-si, Gyeonggi-do, Korea; Keong-Mo Sung, Seoul National University - Seoul, Korea
A new mixing matrix estimation method for underdetermined blind source separation of audio signals is proposed. By statistically modeling the local features, i.e., the magnitude ratio and phase difference of the mixtures, in a time-frequency region, each region carries information about the mixing angle of a source, with reliability given by its likelihood. Regional data are then clustered using statistical tests based on their likelihood to produce estimates of the mixing angles of the sources as well as their number. Experimental results show that the proposed mixing matrix estimation algorithm outperforms existing methods.
Convention Paper 8798 (Purchase now)
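For intuition, mixing-angle estimation from per-bin magnitude ratios can be sketched DUET-style: histogram the angle atan2(|X2|, |X1|) over time-frequency bins, weighted by energy, and read the sources off the peaks. This is a simplified stand-in for the paper's likelihood-based regional clustering, using frequency-disjoint toy sources:

```python
import numpy as np

def estimate_mixing_angles(x1, x2, n_fft=512, n_bins=90):
    """Energy-weighted histogram of per-bin magnitude-ratio angles.
    Under an anechoic, W-disjoint mixing model, histogram peaks sit at
    the mixing angles of the sources."""
    f1 = np.fft.rfft(x1.reshape(-1, n_fft), axis=1).ravel()
    f2 = np.fft.rfft(x2.reshape(-1, n_fft), axis=1).ravel()
    ang = np.arctan2(np.abs(f2), np.abs(f1))               # in [0, pi/2]
    w = np.abs(f1) ** 2 + np.abs(f2) ** 2
    hist, edges = np.histogram(ang, bins=n_bins, range=(0.0, np.pi / 2), weights=w)
    return hist, 0.5 * (edges[:-1] + edges[1:])

# Two frequency-disjoint sources mixed at angles of 20 and 70 degrees.
fs = 8000
t = np.arange(4096) / fs
s1, s2 = np.sin(2 * np.pi * 500 * t), np.sin(2 * np.pi * 2000 * t)
a1, a2 = np.radians(20), np.radians(70)
x1 = np.cos(a1) * s1 + np.cos(a2) * s2
x2 = np.sin(a1) * s1 + np.sin(a2) * s2
hist, centers = estimate_mixing_angles(x1, x2)
deg = np.sort(np.degrees(centers[np.argsort(hist)[-2:]]))  # top two peaks
```

The paper's contribution is to replace this raw histogram with statistically modeled regions whose cluster count also estimates the number of sources.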

P16-9 Speech Separation with Microphone Arrays Using the Mean Shift AlgorithmDavid Ayllón, University of Alcala - Alcalá de Henares, Spain; Roberto Gil-Pita, University of Alcala - Alcalá de Henares, Spain; Manuel Rosa-Zurera, University of Alcala - Alcalá de Henares, Spain
Microphone arrays provide spatial resolution that is useful for speech source separation, because sources located at different positions cause different time and level differences at the elements of the array. This feature can be combined with time-frequency masking to separate speech mixtures by means of clustering techniques, such as the so-called DUET algorithm, which uses only two microphones. However, there are applications where larger arrays are available, and the separation can exploit all of their microphones. A speech separation algorithm based on the mean shift clustering technique has recently been proposed using only two microphones. In this work that algorithm is generalized to arrays with any number of microphones, and its performance is tested on echoic speech mixtures. The results show that the generalized mean shift algorithm notably outperforms the original DUET algorithm.
Convention Paper 8799 (Purchase now)
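A one-dimensional mean shift sketch on simulated mixing-angle features shows the clustering step in miniature. The real algorithm works on multi-microphone time-frequency features; the flat kernel, bandwidth, and synthetic data here are illustrative assumptions:

```python
import numpy as np

def mean_shift_1d(points, bandwidth, n_iter=50, tol=1e-5):
    """Mean shift with a flat kernel: every point climbs to the mean of its
    neighbors within `bandwidth`; converged positions closer than the
    bandwidth are merged into cluster modes."""
    modes = points.astype(float)
    for _ in range(n_iter):
        shifted = np.array([points[np.abs(points - m) <= bandwidth].mean()
                            for m in modes])
        done = np.max(np.abs(shifted - modes)) < tol
        modes = shifted
        if done:
            break
    centers = []
    for m in np.sort(modes):                     # merge near-duplicate modes
        if not centers or m - centers[-1] > bandwidth:
            centers.append(m)
    return np.array(centers)

# Mixing-angle estimates (degrees) scattered around two true sources.
rng = np.random.default_rng(3)
pts = np.r_[20 + rng.normal(0, 1.5, 200), 70 + rng.normal(0, 1.5, 200)]
centers = mean_shift_1d(pts, bandwidth=5.0)
```

Unlike k-means, mean shift needs no preset number of sources, which is part of its appeal for underdetermined separation.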

P16-10 A Study on Correlation Between Tempo and Mood of MusicMagdalena Plewa, Gdansk University of Technology - Gdansk, Poland; Bozena Kostek, Gdansk University of Technology - Gdansk, Poland
In this paper a study is carried out to identify a relationship between mood description and combinations of various tempos and rhythms. First, a short review of music recommendation systems along with music mood recognition studies is presented. In addition, some details on tempo and rhythm perception and detection are included. Then, the experiment layout is explained in which a song is first recorded and then its rhythm and tempo are changed. This constitutes the basis for a mood tagging test. Six labels are chosen for mood description. The results show a significant dependence between the tempo and mood of the music.
Convention Paper 8800 (Purchase now)

 
 

Sunday, October 28, 3:00 pm — 4:30 pm (Room 123)

Network Audio: N7 - Audio Network Device Connection and Control

Chair:
Richard Foss, Rhodes University - Grahamstown, Eastern Cape, South Africa
Panelists:
Jeffrey Alan Berryman, Bosch Communications - Flesherton, ON, Canada
Andreas Hildebrand, ALC NetworX - Munich, Germany
Jeff Koftinoff, Meyer Sound Canada - Vernon, BC, Canada
Kieran Walsh, Audinate Pty. Ltd. - Ultimo, NSW, Australia


Abstract:
In this session a number of industry experts will describe and demonstrate how they have enabled the discovery of audio devices on local area networks, their subsequent connection management, and also control over their various parameters. The workshop will start with a panel discussion that introduces issues related to streaming audio, such as bandwidth management and synchronization, as well as protocols that enable connection management and control. The panelists will have demonstrations of their particular audio network solutions. They will describe these solutions as part of the panel discussion, and will provide closer demonstrations following the panel discussion.

 
 

Sunday, October 28, 3:30 pm — 5:00 pm (Room 131)

Broadcast and Streaming Media: B14 - Understanding and Working with Codecs

Chair:
Kimberly Sacks, Optimod Refurbishing - Hollywood, MD, USA
Panelists:
Kirk Harnack, Telos Alliance - Nashville, TN, USA; South Seas Broadcasting Corp. - Pago Pago, American Samoa
James Johnston, Retired - Redmond, WA, USA
Jeffrey Riedmiller, Dolby Laboratories - San Francisco, CA, USA
Chris Tobin, Musicam USA - Holmdel, NJ, USA


Abstract:
In the age of smartphones and internet-ready devices, audio transport and distribution has evolved from sharing low-quality MP3 files to providing high-quality mobile device audio streams, click-to-play content, over-the-air broadcasting, audio distribution in large facilities, and more. Each medium has several methods of compressing content by means of a codec. This session will explain which codecs are appropriate for which purposes, common misuses of audio codecs, and how to maintain audio quality by implementing codecs professionally.

 
 

Sunday, October 28, 4:30 pm — 6:00 pm (Foyer)

Network Audio: N8 - Audio Network Device Connection and Control—Demos

Chair:
Richard Foss, Rhodes University - Grahamstown, Eastern Cape, South Africa
Panelists:
Jeffrey Alan Berryman, Bosch Communications - Flesherton, ON, Canada
Andreas Hildebrand, ALC NetworX - Munich, Germany
Jeff Koftinoff, Meyer Sound Canada - Vernon, BC, Canada
Kieran Walsh, Audinate Pty. Ltd. - Ultimo, NSW, Australia


Abstract:
In this session a number of industry experts will describe and demonstrate how they have enabled the discovery of audio devices on local area networks, their subsequent connection management, and also control over their various parameters. The workshop will start with a panel discussion that introduces issues related to streaming audio, such as bandwidth management and synchronization, as well as protocols that enable connection management and control. The panelists will have demonstrations of their particular audio network solutions. They will describe these solutions as part of the panel discussion, and will provide closer demonstrations following the panel discussion.

 
 

Sunday, October 28, 5:00 pm — 6:00 pm (Room 123)

Product Design: PD9 - Audio for iPad Publishers

Chair:
Jeff Essex, AudioSyncrasy


Abstract:
Book publishers are running to the iPad, and not just for iBooks, or one-off apps. They're building storefronts and creating subscription models, and the children's book publishers are leading the way. Through two case studies, this talk will explore how to build the audio creation and content management systems needed to produce multiple apps in high-volume environments, including VO production, concatenation schemes, file-naming conventions, audio file types for iOS, and perhaps most important, helping book publishers make the leap from the printed page to interactive publishing.

 
 

Sunday, October 28, 5:00 pm — 6:30 pm (Room 131)

Broadcast and Streaming Media: B15 - Audio Processing Basics

Chair:
Richard Burden, Richard W. Burden Associates - Canoga Park, CA, USA
Panelists:
Tim Carroll, Linear Acoustic Inc. - Lancaster, PA, USA
Frank Foti, Omnia - New York, NY, USA
James Johnston, Retired - Redmond, WA, USA
Robert Orban, Orban - San Leandro, CA, USA


Abstract:
Limiting peak excursions to prevent overmodulation, and increasing the average level through compression to improve signal-to-noise ratio, are worthwhile objectives. Just as we can all agree that a little salt and pepper enhances the flavor of the stew, the argument is over how much becomes too much.

It is a given that the louder signal is interpreted by the listener as sounding better. However, the available tools are often misused, reflecting a lack of leadership at the point of origin. The variation in energy levels between program and commercial content, as well as the excessive use of compression on many news interviews, is annoying to the listener.

The presentation will cover the fundamentals, history, and philosophy of audio processing. An open discussion of the subject and its practices, with audience participation, will follow.

 
 

Monday, October 29, 1:30 pm — 3:00 pm (Room 131)

Broadcast and Streaming Media: B16 - The Streaming Experience

Chair:
Rusty Hodge, SomaFM - San Francisco, CA, USA
Panelists:
Mike Daskalopoulos, Dolby Labs
Jason Thibeault, Limelight
Leigh Newsome, Targetspot - New York, NY, USA
Robert Reams, Streaming Appliances/DSP Concepts - Mill Creek, WA, USA


Abstract:
How are consumers listening to streaming audio today, and how can broadcasters improve that experience? What is the future direction that streaming audio should be taking?

Home streaming on hardware devices often suffers from the limitations of the hardware UI when selecting among tens of thousands of available channels. How can we improve that? Mobile streaming still doesn't match the convenience of turning on your car and having the AM/FM radio start right up. What can be done, and what is being done, to change that?

What does the future of codecs hold, and in which formats should broadcasters stream to achieve the best quality with universal accessibility? How do we improve continuity between the home and mobile environments, especially with regard to customized streams? What are listeners unhappy with now, and what can be done about it?

We will also talk about the future of customized content and the integration of live broadcast content with customized streams.

 
 



EXHIBITION HOURS: October 27th 10am – 6pm; October 28th 10am – 6pm; October 29th 10am – 4pm
REGISTRATION DESK: October 25th 3pm – 7pm; October 26th 8am – 6pm; October 27th 8am – 6pm; October 28th 8am – 6pm; October 29th 8am – 4pm
TECHNICAL PROGRAM: October 26th 9am – 7pm; October 27th 9am – 7pm; October 28th 9am – 7pm; October 29th 9am – 5pm
AES - Audio Engineering Society