AES New York 2013
Sound for Picture Track Event Details

Thursday, October 17, 9:00 am — 11:00 am (Room 1E07)

Paper Session: P1 - Transducers—Part 1: Microphones

Chair:
Helmut Wittek, SCHOEPS GmbH - Karlsruhe, Germany

P1-1 Portable Spherical Microphone for Super Hi-Vision 22.2 Multichannel Audio
Kazuho Ono, NHK Engineering System, Inc. - Setagaya-ku, Tokyo, Japan; Toshiyuki Nishiguchi, NHK Science & Technology Research Laboratories - Setagaya, Tokyo, Japan; Kentaro Matsui, NHK Science & Technology Research Laboratories - Setagaya, Tokyo, Japan; Kimio Hamasaki, NHK Science & Technology Research Laboratories - Setagaya, Tokyo, Japan
NHK has been developing a portable microphone for the simultaneous recording of 22.2ch multichannel audio. The microphone is 45 cm in diameter and has acoustic baffles that partition the sphere into angular segments, in each of which an omnidirectional microphone element is mounted. Owing to the effect of the baffles, each segment exhibits narrow directivity with a constant beam width at frequencies above 6 kHz. The directivity becomes wider as the frequency decreases, becoming almost omnidirectional below 500 Hz. The authors also developed a signal processing method that improves the directivity below 800 Hz.
Convention Paper 8922 (Purchase now)

P1-2 Sound Field Visualization Using Optical Wave Microphone Coupled with Computerized Tomography
Toshiyuki Nakamiya, Tokai University - Kumamoto, Japan; Fumiaki Mitsugi, Kumamoto University - Kumamoto, Japan; Yoichiro Iwasaki, Tokai University - Kumamoto, Japan; Tomoaki Ikegami, Kumamoto University - Kumamoto, Japan; Ryoichi Tsuda, Tokai University - Kumamoto, Japan; Yoshito Sonoda, Tokai University - Kumamoto, Kumamoto, Japan
The novel method, which we call the “Optical Wave Microphone (OWM)” technique, is based on the Fraunhofer diffraction of a laser beam by a sound wave. Light diffraction is an effective way to sense sound and is flexible in practical use, as it requires only a simple optical lens system. The OWM can also detect a sound wave without disturbing the sound field, and it enables high-accuracy measurement of slight density changes in the atmosphere. Moreover, the OWM can be used for sound field visualization by computerized tomography (CT), because the ultra-small modulation produced by the sound field is integrated along the laser beam path.
Convention Paper 8923 (Purchase now)

P1-3 Proposal of Optical Wave Microphone and Physical Mechanism of Sound Detection
Yoshito Sonoda, Tokai University - Kumamoto, Kumamoto, Japan; Toshiyuki Nakamiya, Tokai University - Kumamoto, Japan
An optical wave microphone with no diaphragm, which uses wave optics and a laser beam to detect sounds, can measure sounds without disturbing the sound field. The theoretical equation for this measurement can be derived from the optical diffraction integration equation coupled to the optical phase modulation theory, but the physical interpretation or meaning of this phenomenon is not clear from the mathematical calculation process alone. In this paper the physical meaning in relation to wave-optical processes is considered. Furthermore, the spatial sampling theorem is applied to the interaction between a laser beam with a small radius and a sound wave with a long wavelength, showing that the wavenumber resolution is lost in this case, and the spatial position of the maximum intensity peak of the optical diffraction pattern generated by a sound wave is independent of the sound frequency. This property can be used to detect complex tones composed of different frequencies with a single photo-detector. Finally, the method is compared with the conventional Raman-Nath diffraction phenomena relating to ultrasonic waves.
AES 135th Convention Best Peer-Reviewed Paper Award Cowinner
Convention Paper 8924 (Purchase now)

P1-4 Numerical Simulation of Microphone Wind Noise, Part 2: Internal Flow
Juha Backman, Nokia Corporation - Espoo, Finland
This paper discusses the use of computational fluid dynamics (CFD) for the analysis of microphone wind noise. The previous part of this work showed that an external flow produces a pressure difference on the external boundary, and this pressure drives flow in the microphone's internal structures, mainly between the protective grid and the diaphragm. The examples presented here describe the effect of the microphone grille structure and the diaphragm properties on wind noise sensitivity as it relates to the behavior of such internal flows.
Convention Paper 8925 (Purchase now)


Friday, October 18, 11:30 am — 1:00 pm (Room 1E11)

Sound for Picture: SP1 - Creative Dimension of Immersive Sound—Sound in 3D

Chair:
Brian McCarty, Coral Sea Studios Pty. Ltd - Clifton Beach, QLD, Australia
Panelists:
Marti Humphrey CAS, The Dub Stage - Burbank, CA, USA
Branko Neskov, Loudness Films - Lisbon, Portugal

Abstract:
Audio for cinema has always struggled to replicate the motion shown on the screen, a fact that became more apparent with 3D films. Several methodologies for "immersive sound" are currently under evaluation by the industry, with theater owners and film companies alike advising that they will not tolerate a format war and that a common format is a commercial requirement.

The two major methods of creating immersive sound and audio motion are referred to as "object-based" and "channel-based." Each has its strengths and limitations when retrofitted into the current cinema market. With few sound mixers experienced in either technique, we're pleased to welcome two of the pioneers, one with experience in Auro3D and the other in Atmos, for a discussion of their experiences and observations on working with the two systems.

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Cinema


Friday, October 18, 2:15 pm — 4:45 pm (Room 1E09)

Paper Session: P11 - Perception—Part 1

Chair:
Jason Corey, University of Michigan - Ann Arbor, MI, USA

P11-1 On the Perceptual Advantage of Stereo Subwoofer Systems in Live Sound Reinforcement
Adam J. Hill, University of Derby - Derby, Derbyshire, UK; Malcolm O. J. Hawksford, University of Essex - Colchester, Essex, UK
Recent research into low-frequency sound-source localization confirms that the lowest localizable frequency is a function of room dimensions, source/listener location, and the reverberant characteristics of the space. Larger spaces therefore facilitate accurate low-frequency localization and should benefit from broadband multichannel live-sound reproduction, compared to the current trend of deriving an auxiliary mono signal for the subwoofers. This study explores whether the monophonic approach significantly limits perceptual quality and whether stereo subwoofer systems can create a superior soundscape. The investigation combines binaural measurements with a series of listening tests comparing mono and stereo subwoofer systems used within a typical left/right configuration.
Convention Paper 8970 (Purchase now)

P11-2 Auditory Adaptation to Loudspeakers and Listening Room Acoustics
Cleopatra Pike, University of Surrey - Guildford, Surrey, UK; Tim Brookes, University of Surrey - Guildford, Surrey, UK; Russell Mason, University of Surrey - Guildford, Surrey, UK
Timbral qualities of loudspeakers and rooms are often compared in listening tests involving short listening periods. Outside the laboratory, listening occurs over a longer time course. In a study by Olive et al. (1995) smaller timbral differences between loudspeakers and between rooms were reported when comparisons were made over shorter versus longer time periods. This is a form of timbral adaptation, a decrease in sensitivity to timbre over time. The current study confirms this adaptation and establishes that it is not due to response bias but may be due to timbral memory, specific mechanisms compensating for transmission channel acoustics, or attentional factors. Modifications to listening tests may be required where tests need to be representative of listening outside of the laboratory.
Convention Paper 8971 (Purchase now)

P11-3 Perception Testing: Spatial Acuity
P. Nigel Brown, Ex'pression College for Digital Arts - Emeryville, CA, USA
There is a lack of readily accessible data in the public domain detailing individual spatial aural acuity. Introducing new tests of aural perception, this document specifies testing methodologies and apparatus, with example test results and analyses. Tests are presented to measure the resolution of a subject's perception and their ability to localize a sound source. The basic tests are designed to measure minimum discernible change across a 180° horizontal soundfield. More complex tests are conducted over two or three axes for pantophonic or periphonic analysis. Example results are shown from tests including unilateral and bilateral hearing aid users and profoundly monaural subjects. Examples are provided of the applicability of the findings to sound art, healthcare, and other disciplines.
Convention Paper 8972 (Purchase now)

P11-4 Evaluation of Loudness Meters Using Parameterization of Fader Movements
Jon Allan, Luleå University of Technology - Piteå, Sweden; Jan Berg, Luleå University of Technology - Piteå, Sweden
The EBU recommendation R 128 on loudness normalization is now generally accepted, and countries across Europe are adopting it. There is a need to know more about how and when to use the different meter modes proposed in R 128, Momentary and Short term, and to understand how different implementations of R 128 in audio level meters affect engineers' actions. A method is tentatively proposed for evaluating the performance of audio level meters in live broadcasts. The method was used to evaluate different meter implementations, three of them conforming to EBU R 128. In an experiment, engineers adjusted audio levels in a simulated live broadcast show, and the resulting fader movements were recorded. The movements were parameterized into “Fader movement,” “Adjustment time,” “Overshoot,” etc. Results show that the proposed parameters revealed significant differences caused by the meters and that the experience of the engineer operating the fader is a significant factor.
Convention Paper 8973 (Purchase now)
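
As a rough illustration of the parameterization the abstract mentions, the sketch below computes "Fader movement," "Adjustment time," and "Overshoot" from a recorded fader trajectory. The function name, the settling tolerance, and the exact definitions are assumptions for illustration, not the authors' published definitions.

```python
import numpy as np

def parameterize_fader(t, level_db, settle_tol_db=0.5):
    """Derive illustrative parameters from a recorded fader trajectory.

    t             -- sample times in seconds
    level_db      -- fader level in dB at each sample
    settle_tol_db -- assumed tolerance for deciding the level has settled
    """
    t = np.asarray(t, dtype=float)
    level_db = np.asarray(level_db, dtype=float)
    target = level_db[-1]  # treat the final level as the intended target

    # "Fader movement": total path length traveled by the fader
    movement = np.sum(np.abs(np.diff(level_db)))

    # "Adjustment time": time at which the level last leaves the tolerance band
    outside = np.nonzero(np.abs(level_db - target) > settle_tol_db)[0]
    adjustment_time = t[outside[-1]] - t[0] if outside.size else 0.0

    # "Overshoot": largest excursion past the target in the direction of travel
    direction = 1.0 if target >= level_db[0] else -1.0
    overshoot = max(0.0, float(np.max(direction * (level_db - target))))
    return movement, adjustment_time, overshoot
```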

P11-5 Validation of the Binaural Room Scanning Method for Cinema Audio Research
Linda A. Gedemer, University of Salford - Salford, UK; Harman International - Northridge, CA, USA; Todd Welti, Harman International - Northridge, CA, USA
Binaural Room Scanning (BRS) is a method of capturing a binaural representation of a room using a dummy head with binaural microphones in the ears, and later reproducing it over a pair of calibrated headphones. Multiple measurements are made at differing head angles and stored separately as data files. A playback system employing headphones and a head tracker recreates the original environment for listeners, so that as they turn their head, the rendered audio matches their current head angle. This paper reports the results of a validation test of a custom BRS system developed for the research and evaluation of different loudspeakers and listening spaces. To validate the performance of the BRS system, listening evaluations of different in-room equalizations of a 5.1 loudspeaker system were made both in situ and via the BRS system. This was repeated using three different loudspeaker systems in three different-sized listening rooms.
Convention Paper 8974 (Purchase now)
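
The head-tracked playback step described above can be sketched as a nearest-angle lookup into the stored binaural measurements followed by convolution. This is a minimal sketch under assumed data layouts (the `brirs` dict and function name are hypothetical); a practical system would also crossfade between angles to avoid switching artifacts.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_brs_frame(mono_frame, head_angle_deg, brirs):
    """Render one audio frame through the stored binaural room impulse
    response (BRIR) measured nearest to the tracked head angle.

    brirs -- hypothetical layout: dict mapping measured head angle in
             degrees to an (n_taps, 2) left/right impulse response.
    """
    angles = np.array(sorted(brirs))
    # nearest measured angle, with wrap-around on the circle
    diffs = (angles - head_angle_deg + 180.0) % 360.0 - 180.0
    ir = brirs[angles[np.argmin(np.abs(diffs))]]
    left = fftconvolve(mono_frame, ir[:, 0])
    right = fftconvolve(mono_frame, ir[:, 1])
    return np.stack([left, right], axis=1)
```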


Friday, October 18, 2:30 pm — 4:00 pm (Room 1E11)

Sound for Picture: SP2 - Cinema Sound Standards Collapse Leaving Turmoil—An Overview of the State of the Art

Chair:
Brian McCarty, Coral Sea Studios Pty. Ltd - Clifton Beach, QLD, Australia
Panelists:
Glenn Leembruggen, Acoustics Directions Pty Ltd. - Summer Hill, NSW, Australia; Sydney University
David Murphy, Krix Loudspeakers - Hackham, South Australia

Abstract:
In his 2008 book Sound Reproduction: Loudspeakers and Rooms, Dr. Floyd Toole first documented the failure of the Standards process to produce quality sound in movie theaters. The work was expanded on in experiments by a group led by Philip Newell in Europe, and Brian McCarty cited this work to get both SMPTE and the AES to begin work on scientific, comprehensive new Standards for cinema, and eventually home, audio reproduction.

This workshop reviews the flawed Standards and presents new experiments that further define the areas of work that will need to be undertaken for new Standards to be written.

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Cinema


Saturday, October 19, 9:00 am — 10:30 am (Room 1E11)

Sound for Picture: SP3 - Dialog Editing and Mixing for Film (Sound for Pictures Master Class)

Presenters:
Brian McCarty, Coral Sea Studios Pty. Ltd - Clifton Beach, QLD, Australia
Fred Rosenberg

Abstract:
Film soundtracks contain three elements: dialog, music, and sound effects. Dialog is the heart of the process, and "telling the story" is its primary goal. With multiple sources of dialog available, assessing, planning, and subsequently mixing the dialog is a critical part of the process. This Master Class with one of Hollywood's leading professionals puts the process under the microscope.

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Cinema


Saturday, October 19, 9:00 am — 11:30 am (Room 1E07)

Paper Session: P13 - Applications in Audio—Part 2

Chair:
Hans Riekehof-Boehmer, SCHOEPS Mikrofone - Karlsruhe, Germany

P13-1 Level-Normalization of Feature Films Using Loudness vs Speech
Esben Skovenborg, TC Electronic - Risskov, Denmark; Thomas Lund, TC Electronic A/S - Risskov, Denmark
We present an empirical study of the differences between level-normalization of feature films using the two dominant methods: loudness normalization and speech (“dialog”) normalization. The soundtracks of 35 recent “blockbuster” DVDs were analyzed using both methods. The difference in normalization level was up to 14 dB, 5.5 dB on average. For all films the loudness method provided the lower normalization level and hence the greater headroom. Comparison of automatic speech measurement with manual measurement of dialog anchors shows a typical difference of 4.5 dB, with the automatic measurement producing the higher level. When the speech classifier was used to process rather than measure the films, a listening test suggested that the automatic measure is positively biased because it sometimes fails to distinguish between “normal speech” and speech combined with “action” sounds. Finally, the DialNorm values encoded in the AC-3 streams on the DVDs were compared to both the automatically and the manually measured speech levels and found to match neither well.
AES 135th Convention Best Peer-Reviewed Paper Award Cowinner
Convention Paper 8983 (Purchase now)
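
The normalization-level difference the study reports is the gap between a whole-program loudness measurement and a speech-only measurement. The sketch below illustrates that comparison given per-block loudness values and a speech/non-speech flag; the energy-domain averaging loosely follows ITU-R BS.1770-style integration, and the inputs (block loudness values, speech classifier output) are assumptions rather than anything specified by the paper.

```python
import numpy as np

def mean_loudness_db(block_loudness_db):
    """Average per-block loudness in the energy domain (simplified
    BS.1770-style integration, without K-weighting or gating)."""
    power = 10.0 ** (np.asarray(block_loudness_db, dtype=float) / 10.0)
    return 10.0 * np.log10(power.mean())

def normalization_gap_db(block_loudness_db, is_speech):
    """Gap between program loudness and speech-only ("dialog") loudness.

    is_speech -- boolean flag per block, e.g., from a speech classifier.
    """
    blocks = np.asarray(block_loudness_db, dtype=float)
    program_level = mean_loudness_db(blocks)                        # loudness method
    speech_level = mean_loudness_db(blocks[np.asarray(is_speech)])  # speech method
    return speech_level - program_level
```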

P13-2 Sound Identification from MPEG-Encoded Audio Files
Joseph G. Studniarz, Montana State University - Bozeman, MT, USA; Robert C. Maher, Montana State University - Bozeman, MT, USA
Numerous methods have been proposed for searching and analyzing long-term audio recordings for specific sound sources. It is increasingly common for audio recordings to be archived using perceptual compression, such as MPEG-1 Layer 3 (MP3). Rather than performing sound identification on the reconstructed time waveform after decoding, we operate on the undecoded MP3 audio data to improve processing speed and efficiency. The compressed audio is only partially processed, using the initial bitstream unpacking of a standard decoder; sound identification is then performed directly on the frequency spectrum represented by each MP3 data frame. Practical uses are demonstrated for identifying anthropogenic sounds within a natural soundscape recording.
Convention Paper 8984 (Purchase now)
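
Once per-frame frequency spectra are available from the partially unpacked bitstream, identification can reduce to spectral template matching. The sketch below shows one generic way to do that (cosine similarity against a reference template); the iterable of frame spectra and the threshold are assumptions, and the actual MP3 bitstream unpacking is outside this sketch.

```python
import numpy as np

def match_template(frame_spectra, template, threshold=0.8):
    """Flag frames whose spectrum resembles a target sound's template.

    frame_spectra -- iterable of per-frame magnitude spectra, e.g., the
                     spectral values unpacked from successive MP3 frames
                     (the partial-decoding step is not shown here)
    template      -- reference magnitude spectrum of the sound of interest
    threshold     -- assumed cosine-similarity decision threshold
    """
    tmpl = np.asarray(template, dtype=float)
    tmpl = tmpl / (np.linalg.norm(tmpl) + 1e-12)
    hits = []
    for i, spec in enumerate(frame_spectra):
        v = np.asarray(spec, dtype=float)
        v = v / (np.linalg.norm(v) + 1e-12)
        if float(v @ tmpl) >= threshold:  # cosine similarity
            hits.append(i)
    return hits
```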

P13-3 Pilot Workload and Speech Analysis: A Preliminary Investigation
Rachel M. Bittner, New York University - New York, NY, USA; Durand R. Begault, Human Systems Integration Division, NASA Ames Research Center - Moffett Field, CA, USA; Bonny R. Christopher, San Jose State University Research Foundation, NASA Ames Research Center - Moffett Field, CA, USA
Prior research has questioned the effectiveness of speech analysis for measuring a talker's stress, workload, truthfulness, or emotional state. However, the question remains whether speech analysis is useful for restricted vocabularies such as those used in aviation communications. A part-task experiment was conducted in which participants performed Air Traffic Control read-backs in different workload environments. Participants' subjective workload and the speech qualities of fundamental frequency (F0) and articulation rate were evaluated. A significant increase in subjective workload rating was found for high-workload segments. F0 was significantly higher during high workload, while articulation rates were significantly slower. No correlation was found between subjective workload and F0 or articulation rate.
Convention Paper 8985 (Purchase now)
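
Fundamental frequency is a standard speech measure; below is a minimal autocorrelation-based F0 estimator for a single voiced frame. This is a generic textbook method shown for orientation, not necessarily the analysis pipeline the authors used.

```python
import numpy as np

def estimate_f0(frame, fs, f0_min=75.0, f0_max=400.0):
    """Estimate the fundamental frequency of one voiced speech frame by
    autocorrelation peak picking (generic method, for illustration)."""
    x = np.asarray(frame, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    lo = int(fs / f0_max)                    # shortest plausible pitch period
    hi = min(int(fs / f0_min), len(ac) - 1)  # longest plausible pitch period
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag
```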

P13-4 Gain Stage Management in Classic Guitar Amplifier Circuits
Bryan Martin, McGill University - Montreal, QC, Canada
The guitar amplifier became a common tool in musical creation during the second half of the 20th century. This paper details some of the internal mechanisms by which its tones are created and how those mechanisms interact. Two early amplifier designs are examined to determine the circuit relationships and design decisions that came to define the sound of the electric guitar.
Convention Paper 8986 (Purchase now)
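
A toy illustration of the gain-stage interactions the paper examines: when two nonlinear stages are cascaded, the drive into the first stage determines the spectrum the second stage receives, so the same playback level can carry very different harmonic content. The tanh waveshaper below is a deliberately idealized stand-in for a tube stage, not a model of the circuits analyzed in the paper.

```python
import numpy as np

def stage(x, drive):
    """Idealized gain stage: linear gain into a tanh soft clipper."""
    return np.tanh(drive * x)

def two_stage_amp(x, drive1, drive2, interstage=0.5):
    """Two cascaded stages with an interstage attenuator: the drive into
    stage 1 sets how much harmonic content stage 2 is asked to distort."""
    return stage(interstage * stage(x, drive1), drive2)

# Same input signal, different gain staging, very different tone:
t = np.linspace(0.0, 0.01, 441)
note = np.sin(2.0 * np.pi * 440.0 * t)
clean_ish = two_stage_amp(note, drive1=0.5, drive2=2.0)
crunchy = two_stage_amp(note, drive1=5.0, drive2=2.0)
```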

P13-5 Audio Pre-Equalization Models for Building Structural Sound Transmission Suppression
Cheng Shu, University of Rochester - Rochester, NY, USA; Fangyu Ke, University of Rochester - Rochester, NY, USA; Xiang Zhou, Bose Corporation - Framingham, MA, USA; Gang Ren, University of Rochester - Rochester, NY, USA; Mark F. Bocko, University of Rochester - Rochester, NY, USA
We propose a novel audio pre-equalization model that exploits the transmission characteristics of building structures to reduce the interference reaching adjacent neighbors while maintaining audio quality for the target listener. The audio transmission profiles are obtained by field acoustical measurements in several typical types of building structures. We also measure the spectrum of the audio to adapt the pre-equalization model to a specific audio segment. We apply a computational auditory model to (1) monitor the perceptual audio quality for the target listener and (2) assess the interference caused to adjacent neighbors. The system performance is then evaluated using subjective rating experiments.
Convention Paper 8987 (Purchase now)
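
The core idea, cut hardest in the bands the structure transmits best, can be sketched from a measured per-band transmission-loss profile. Everything below (function name, band layout, maximum cut) is an illustrative assumption, and the paper's perceptual-model monitoring of listener quality is omitted.

```python
import numpy as np

def preeq_gains_db(transmission_loss_db, max_cut_db=12.0):
    """Per-band pre-equalization cuts derived from a measured wall/floor
    transmission-loss profile: cut hardest where the loss is smallest,
    i.e., where the structure leaks the most sound to the neighbor."""
    tl = np.asarray(transmission_loss_db, dtype=float)
    cuts = tl.max() - tl  # leakier bands get deeper cuts
    return -np.clip(cuts, 0.0, max_cut_db)

# Hypothetical low-frequency-leaky structure: low bands are cut the most.
profile = np.array([18.0, 25.0, 35.0, 45.0, 50.0])  # TL per band, in dB
gains = preeq_gains_db(profile)  # -> [-12., -12., -12., -5., -0.]
```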


Saturday, October 19, 10:30 am — 12:00 pm (Room 1E11)

Sound for Picture: SP4 - Music Production for Film (Sound for Pictures Master Class)

Presenters:
Brian McCarty, Coral Sea Studios Pty. Ltd - Clifton Beach, QLD, Australia
Simon Franglen, Class1 Media - Los Angeles, CA, USA; London
Chris Hajian

Abstract:
Film soundtracks contain three elements: dialog, music, and sound effects. The creation of a music soundtrack is far more complex than it once was, now encompassing “temp music” for preview screenings, synthesizer-enhanced orchestra tracks, and other special techniques. This Master Class with one of Hollywood's leading professionals puts the process under the microscope.

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Cinema


Saturday, October 19, 2:00 pm — 3:30 pm (Room 1E11)

Sound for Picture: SP5 - Sound Design for Film (Sound for Pictures Master Class)

Presenters:
Michael Barry
Brian McCarty, Coral Sea Studios Pty. Ltd - Clifton Beach, QLD, Australia
Eugene Gearty
Skip Lievsay

Abstract:
Film soundtracks contain three elements: dialog, music, and sound effects. Sound effects, which used to be an afterthought, are now constructed by sound designers, often working from the start of production. This Master Class with Hollywood's leading professionals puts the process under the microscope.

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Cinema


Saturday, October 19, 3:00 pm — 4:30 pm (1EFoyer)

Poster: P15 - Applications in Audio—Part I

P15-1 An Audio Game App Using Interactive Movement Sonification for Targeted Posture Control
Daniel Avissar, University of Miami - Coral Gables, FL, USA; Colby N. Leider, University of Miami - Coral Gables, FL, USA; Christopher Bennett, University of Miami - Coral Gables, FL, USA; Oygo Sound LLC - Miami, FL, USA; Robert Gailey, University of Miami - Coral Gables, FL, USA
Interactive movement sonification has been gaining validity as a technique for biofeedback and auditory data mining in research and development for gaming, sports, and physiotherapy. The recent growth in kinematic data harvesting has naturally followed the increased availability of portable, high-precision sensing technologies, such as smartphones, and dynamic real-time programming environments, such as Max/MSP. Whereas the overlap of motor-skill coordination and acoustic events has long been a staple of musical pedagogy, musicians and music engineers have been surprisingly less involved than biomechanical, electrical, and computer engineers in research in these fields. This paper therefore proposes a prototype for an accessible virtual gaming interface that uses music and pitch training as positive reinforcement in the accomplishment of target postures.
Convention Paper 8995 (Purchase now)

P15-2 Evaluation of the SMPTE X-Curve Based on a Survey of Re-Recording Mixers
Linda A. Gedemer, University of Salford - Salford, UK; Harman International - Northridge, CA, USA
Cinema calibration methods, which include targeted equalization curves for both dub stages and cinemas, are currently used to ensure an accurate translation of a film's soundtrack from dub stage to cinema. In recent years there has been an effort to reexamine how cinemas and dub stages are calibrated with respect to preferred or standardized room response curves. Most notable is the work currently underway reviewing SMPTE standard ST202:2010, "For Motion-Pictures - Dubbing Stages (Mixing Rooms), Screening Rooms and Indoor Theaters - B-Chain Electroacoustic Response." There are both scientific and anecdotal reasons to question the effectiveness of the SMPTE standard in its current form. A survey of re-recording mixers was undertaken to better understand the efficacy of the standard from the users' point of view.
Convention Paper 8996 (Purchase now)

P15-3 An Objective Comparison of Stereo Recording Techniques through the Use of Subjective Listener Preference Ratings
Wei Lim, University of Michigan - Ann Arbor, MI, USA
Stereo microphone techniques offer audio engineers the ability to capture a soundscape that approximates natural hearing. To illustrate the differences between six common stereo microphone techniques, namely XY, Blumlein, ORTF, NOS, AB, and Faulkner, I asked 12 study participants to rate recordings of a Yamaha Disklavier piano. I examined the inter-rater correlation between subjects and found a preferential trend toward near-coincident techniques. Further evaluation showed a preference for clarity over spatial content in a recording. Subjects did not find that wider microphone placements produced more spacious-sounding recordings. Using this information, the paper also discusses the need to re-evaluate how microphone techniques are typically categorized by the distance between microphones.
Convention Paper 8997 (Purchase now)
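
The inter-rater analysis mentioned above amounts to correlating each pair of subjects' ratings; a minimal sketch, assuming a subjects-by-recordings rating matrix:

```python
import numpy as np

def mean_inter_rater_correlation(ratings):
    """Mean pairwise Pearson correlation between raters.

    ratings -- (n_subjects, n_recordings) array of preference ratings
    """
    r = np.corrcoef(np.asarray(ratings, dtype=float))  # subject x subject
    return r[np.triu_indices_from(r, k=1)].mean()      # upper triangle only
```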

P15-4 Tampering Detection of Digital Recordings Using Electric Network Frequency and Phase Angle
Jidong Chai, University of Tennessee - Knoxville, TN, USA; Yuming Liu, Electrical Power Research Institute, Chongqing Electric Power Corp. - Chongqing, China; Zhiyong Yuan, China Southern Power Grid - Guangzhou, China; Richard W. Conners, Virginia Polytechnic Institute and State University - Blacksburg, VA, USA; Yilu Liu, University of Tennessee - Knoxville, TN, USA; Oak Ridge National Laboratory
In the field of forensic authentication of digital audio recordings, the ENF (electric network frequency) criterion is one of the available tools and has shown promising results. An important task in forensic authentication is to determine whether a recording has been tampered with. Previous work detects tampering by looking for discontinuities in either the ENF or the phase angle extracted from the recording. However, using frequency or phase angle alone may not be sufficient. In this paper both frequency and phase angle, together with a corresponding reference database, are used to detect tampering, which results in more reliable detection. The paper briefly introduces the Frequency Monitoring Network (FNET) at UTK and its frequency and phase-angle reference database. A short-time Fourier transform (STFT) is employed to estimate the ENF and phase angle embedded in audio files. A complete procedure for applying the ENF criterion, from signal preprocessing through ENF and phase-angle estimation and frequency-database matching to tampering detection, is proposed. Results show that using frequency and phase angle jointly improves the reliability of tampering detection in the authentication of digital recordings.
Convention Paper 8998 (Purchase now)
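
The STFT-based ENF estimation step described in the abstract can be sketched as follows: track the dominant frequency bin near the nominal mains frequency over time, then flag implausible jumps. The window length, search band, and jump threshold below are illustrative choices, and the reference-database matching stage is omitted.

```python
import numpy as np
from scipy.signal import stft

def extract_enf(x, fs, nominal_hz=60.0, band_hz=1.0, win_s=2.0):
    """Track the electric network frequency with an STFT by following the
    dominant spectral bin near the nominal mains frequency."""
    f, t, Z = stft(x, fs=fs, nperseg=int(win_s * fs))
    mask = (f >= nominal_hz - band_hz) & (f <= nominal_hz + band_hz)
    enf = f[mask][np.argmax(np.abs(Z[mask, :]), axis=0)]
    return t, enf

def suspicious_jumps(enf, jump_hz=0.05):
    """Frame indices where the ENF track jumps more than plausible
    mains-frequency drift would allow between adjacent frames."""
    return np.nonzero(np.abs(np.diff(enf)) > jump_hz)[0]
```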

P15-5 Portable Speech Encryption Based Anti-Tapping Device
C. R. Suthikshn Kumar, Defence Institute of Advanced Technology (DIAT) - Girinagar, Pune, India
Telephone tapping is a major concern nowadays, and there is a need for a portable device, attachable to a mobile phone, that can prevent it. Users want to encrypt their voice during a conversation, mainly for privacy; encrypted conversation can prevent tapping of mobile calls, since the network operator may tap calls for various reasons. In this paper we propose a portable device that attaches to a mobile or landline phone and serves as an anti-tapping device. The device encrypts speech and decrypts encrypted speech in real time. The main idea is that speech is unintelligible when encrypted.
Convention Paper 8999 (Purchase now)
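
The device's core operation, encrypting speech frames in real time so that intercepted audio is unintelligible, can be illustrated with a length-preserving stream cipher over PCM frames. AES in CTR mode is one reasonable choice for such a sketch, not necessarily the paper's scheme, and key distribution and the analog audio path are outside its scope.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)    # shared secret; a real device needs key agreement
nonce = os.urandom(16)  # must never repeat for a given key

def make_ctx(key, nonce, encrypt=True):
    cipher = Cipher(algorithms.AES(key), modes.CTR(nonce))
    return cipher.encryptor() if encrypt else cipher.decryptor()

enc = make_ctx(key, nonce, encrypt=True)
dec = make_ctx(key, nonce, encrypt=False)

def process_frame(pcm_frame: bytes, ctx) -> bytes:
    """Encrypt (or decrypt) one PCM speech frame; CTR mode preserves the
    frame length, which suits a real-time audio path."""
    return ctx.update(pcm_frame)

frame = b"\x00\x01" * 160                 # 20 ms of 8 kHz, 16-bit speech
cipher_frame = process_frame(frame, enc)  # unintelligible if intercepted
plain_frame = process_frame(cipher_frame, dec)
assert plain_frame == frame
```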

P15-6 Personalized Audio Systems—A Bayesian Approach
Jens Brehm Nielsen, Technical University of Denmark - Kongens Lyngby, Denmark; Widex A/S - Lynge, Denmark; Bjørn Sand Jensen, Technical University of Denmark - Kongens Lyngby, Denmark; Toke Jansen Hansen, Technical University of Denmark - Kongens Lyngby, Denmark; Jan Larsen, Technical University of Denmark - Kgs. Lyngby, Denmark
Modern audio systems are typically equipped with several user-adjustable parameters unfamiliar to most listeners. To obtain the best possible system setting, the listener is forced into a non-trivial multi-parameter optimization with respect to his or her own objective and preference. To address this, the present paper presents a general interactive framework for robust personalization of such audio systems. The framework builds on Bayesian Gaussian process regression, in which the belief about the user's objective function is updated sequentially. The parameter setting to be evaluated in a given trial is carefully selected by sequential experimental design based on that belief. A Gaussian process model is proposed that incorporates assumed correlations among particular parameters, providing better modeling capability than a standard model. A five-band constant-Q equalizer is considered for demonstration purposes, with the equalizer parameters optimized for each individual using the proposed framework. Twelve test subjects obtained personalized settings with the framework, and these settings were significantly preferred to those obtained by random experimentation.
Convention Paper 9000 (Purchase now)
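
The sequential loop the abstract describes, fit a Gaussian process to the ratings collected so far and then choose the next setting to audition, can be sketched for a single parameter. The RBF kernel, the upper-confidence selection rule, and the simulated listener below are illustrative assumptions; the paper itself uses a correlated multi-parameter model and a more careful experimental design.

```python
import numpy as np

def rbf(a, b, length_scale=0.2, variance=1.0):
    """Squared-exponential kernel on scalar inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_seen, y_seen, x_grid, noise=0.05):
    """Standard GP regression posterior mean and variance on a grid."""
    K = rbf(x_seen, x_seen) + noise * np.eye(len(x_seen))
    Ks = rbf(x_grid, x_seen)
    mu = Ks @ np.linalg.solve(K, y_seen)
    var = rbf(x_grid, x_grid).diagonal() - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mu, np.maximum(var, 0.0)

def next_setting(x_seen, y_seen, x_grid, beta=2.0):
    """Sequential design step: audition the setting with the highest upper
    confidence bound, balancing exploration and exploitation."""
    mu, var = gp_posterior(np.asarray(x_seen), np.asarray(y_seen), x_grid)
    return x_grid[np.argmax(mu + beta * np.sqrt(var))]

# Simulated listener whose (unknown) preference peaks near 0.6:
grid = np.linspace(0.0, 1.0, 101)
xs, ys = [0.1, 0.9], [0.2, 0.1]
for _ in range(10):
    x = next_setting(xs, ys, grid)
    xs.append(float(x))
    ys.append(float(np.exp(-((x - 0.6) / 0.15) ** 2) + 0.05 * np.random.randn()))
```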


Saturday, October 19, 3:30 pm — 5:00 pm (Room 1E11)

Sound for Picture: SP6 - World-Class Cinema Sound Mixers Discuss Their Craft

Chair:
Brian McCarty, Coral Sea Studios Pty. Ltd - Clifton Beach, QLD, Australia
Panelists:
Marti Humphrey CAS, The Dub Stage - Burbank, CA, USA
Chris M. Jacobson, The Dub Stage - Los Angeles, CA, USA
Branko Neskov, Loudness Films - Lisbon, Portugal

Abstract:
In what is fast becoming one of the most popular events in the "sound for picture" track, we have again put together a panel of top sound mixers for film and television.

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Cinema


Saturday, October 19, 5:00 pm — 7:00 pm (Room 1E09)


Historical: The 35mm Album Master Fad

Presenter:
Thomas Fine (sole proprietor of a private studio) - Brewster, NY, USA

Abstract:
In the late 1950s and early 1960s, a new market emerged for ultra-high fidelity recordings. Once cutting and playback of the stereo LP were brought up to high quality levels, buyers of this new super-realistic format wanted ever more "absolute" sound quality. The notion of using 35mm magnetic film as the recording and mastering medium emerged first at Everest Records, a small independent record label in Queens. 35mm had distinct advantages over the tape formulations and machines of that time—lower noise floor, lower wow and flutter, higher absolute levels before saturation, almost no crosstalk or print-through, etc. Everest Records made a splash with the first 35mm LP masters not connected to motion-picture soundtracks but quickly faltered as a business. The unique set of recording equipment and the Everest studio remained intact and was used to make commercially successful 35mm records for Mercury, Command, Cameo-Parkway, and Project 3. The fad faded by the mid-60s as tape machines and tape formulations improved and the high cost of working with 35mm magnetic film became unsustainable. The original Everest equipment survived to be used in the Mercury Living Presence remasters for CD. Just recently, the original Everest 35mm recordings have been reissued in new high-resolution digital remasters. This presentation will trace the history of 35mm magnetic recording, the brief but high-profile fad of 35mm-based LPs, and the afterlife of those original recordings. We will also look at the unique set of hardware used to make the vast majority of the 35mm LPs. The presentation will be augmented with plenty of audio examples from the original recordings.


Saturday, October 19, 5:00 pm — 7:30 pm (Room 1E13)

Workshop: W20 - What's Right and What's Wrong with Today's Motion Picture Sound?

Chair:
John F. Allen, High Performance Stereo - Newton, Massachusetts USA
Panelists:
Mark Collins, Marcus Theatres - Milwaukee, WI, USA
Douglas Greenfield, Dolby Labs - Burbank, CA, USA
Brian A. Vessa, Sony Pictures Entertainment - Culver City, CA, USA; Chair SMPTE 25CSS standards committee

Abstract:
Do you think movies are too loud? Do you admire their sound quality? Why do so many complain about motion picture sound? The answers may come as a surprise. To fully understand the complexities involved, one must separately explore both the way movies are made and the way they are played.

This workshop brings together a panel of experts who work in both the creation and presentation of motion pictures. Their candid presentations will begin by exploring the often inaccurate way sound system measurements are interpreted. Complicating matters, the resulting equalization errors differ across the audio spectrum. Theater sound system miscalibration can not only diminish sound quality but also cause significant unintended playback level increases. This presentation will not only describe these problems but will offer solutions as well.

These and other issues are the focus of the recent standards work. The obstacles presented when sometimes working at the limits of technology will be described by a senior studio sound engineer who is also the chairman of the largest SMPTE committee assigned to motion picture sound.

Movies mixed all over the world must be created with such consistency that they can all be played in a theater without the need to adjust a fader or an equalizer. Perhaps no part of the audio production industry is closer to achieving this goal than motion pictures. This demands hours of work and many long days, often diplomatically supporting movie makers and assisting them in building the final product they are striving to create. One of our panelists is a leader in this rather exclusive field.

After years in the making of a film, it all comes down to theatrical presentation. Building and maintaining hundreds and even thousands of screens is an art in itself, often executed with mixed results. One of exhibition's most accomplished technical directors will detail the day-to-day challenges one faces in such a role.


Sunday, October 20, 10:30 am — 12:30 pm (Room 1E11)

Sound for Picture: SP7 - Sound for "Deadliest Catch"—Reality Is Hard Work

Chair:
Brian McCarty, Coral Sea Studios Pty. Ltd - Clifton Beach, QLD, Australia
Panelists:
Bob Bronow, Max Post - Burbank, CA; Audio Cocktail
Josh Earl, Original Productions - Burbank, CA, USA
Sound Crew from "Deadliest Catch"

Abstract:
Television has seen the development of a new category of program: the "reality" show. While many of these shows are TV fluff, one of the first was set around the dangerous profession of crab fishing in the Bering Sea. This hit show is one of the most difficult and challenging productions, not only for the fishermen but also for the capture and mixing of the soundtrack. We present two of the show's key Emmy-winning sound professionals in this workshop.

AES Technical Council This session is presented in association with the AES Technical Committee on Audio for Cinema




EXHIBITION HOURS October 18th 10am – 6pm October 19th 10am – 6pm October 20th 10am – 4pm
REGISTRATION DESK October 16th 3pm – 7pm October 17th 8am – 6pm October 18th 8am – 6pm October 19th 8am – 6pm October 20th 8am – 4pm
TECHNICAL PROGRAM October 17th 9am – 7pm October 18th 9am – 7pm October 19th 9am – 7pm October 20th 9am – 6pm