In This Section
- Eastern Region, USA/Canada
- VP: Robert Breen
- Central Region, USA/Canada
- VP: Michael Fleming
- Western Region, USA/Canada
- VP: Jonathan Novick
- Northern Region, Europe
- VP: Bill Foster
- Central Region, Europe
- VP: Nadja Wallaszkovits
- Southern Region, Europe
- VP: Umberto Zanghieri
- Latin American Region
- VP: Joel Vieira de Brito
- International Region
- VP: Kimio Hamasaki
AES Section Meeting Reports
New York - April 3, 2012
An enthusiastic crowd gathered in NYU's Clive Davis Institute of Recorded Music Dennis Riese Family Recording Studio to share the thoughts and experiences of our host and his presenters: Scott Lehrer, Theatrical Sound Designer, Producer, and Engineer; and Marc Salzberg, Theatrical Sound Designer. Questions and comments from the audience were woven into the fabric of this meeting. Committee member Chris Reba brought ten of his Music and Sound Recording students from the University of New Haven to the meeting.
Marc is the Production Sound Mixer at Lincoln Center's Vivian Beaumont Theater, where he is now running "War Horse." He has collaborated on at least ten shows with Scott, who won the first Sound Design Tony Award for Lincoln Center's revival of "South Pacific." Scott began the evening by defining the two categories of sound design: musicals and plays. Musical show sound installations are specific to a venue, while designing sound for a play involves creating a sound environment, working with composers or even composing soundscapes and music cues oneself. He said that audiences for plays now demand a high level of quality and loudness. Marc told us that "War Horse," a play, uses 42 RF mics and has more than 100 sound cues, many of which are linked to video or projection elements.
Producers now want more elaborate sound plots but only one operator to mix all the elements together while also managing RF mics, balancing the band, and playing sound effects. Daryl recalled that 30 years ago musicals were written to support the unamplified voice. The introduction of the rock drum kit required vocal amplification, as well as increasing use of mics for the orchestra; it was becoming difficult to hear the voices over the denser orchestrations. The panel described the "Otts Box" created by sound designer Otts Munderloh in the 1980s: the sound mixer had a box with two buttons that lit up on a companion box in front of the conductor during rehearsals, reading "Play Softer" and "Play Louder." Then and now there are endless discussions between the sound designer and orchestrator regarding the intensity not only of musical performances but of underscores and transitions as well. Sometimes the solution to these problems is to route voice and orchestra to separate speaker systems so that the elements can be balanced more effectively than with level control alone. Broadway's creativity this season will showcase "Ghost," with a feature-film 5.1-channel sound design in a live situation.
Successful sound design reflects a combination of both producers' and audiences' expectations. Two approaches are "Impact" (Big Box) for rock musicals vs. "Source Reference" (position on stage) for conventional musicals such as "South Pacific," each with its own system tuning techniques. PA levels for conventional musicals run around 65-70 dB for dialog, with music just above the actual level of the orchestra at 80-85 dB. Rock musicals' dialog can start at 75-80 dB, with music hitting 85-95 dB or higher. Marc said that one can get people to listen carefully to quieter sections of a show if the sound quality is good. One method of ensuring this is to associate individual microphones with individual speakers: good aural localization is achieved with proper time delay and level control. This minimizes listener fatigue.
Daryl said that sound control technology is now much more affordable than in the past and no longer overwhelms the single sound operator executing complex plots in real time. Scott recalled the 1980s show "Geniuses," which required Daryl, the mixer, to handle an all-analog mixing console and four multitrack tape machines while panning sounds around the theater in a sequence featuring helicopter flyovers, a typhoon, and machine guns! Today's operators can handle such plots with relative ease thanks to digital control hardware and software.
Another advantage today is that when operators must be absent due to illness or vacations they can be covered by persons who might be more technical than artistic in their talents. Consistency of production sound over time is essential to support both the cast's needs and audience expectations.
Moving into the 1990s and "Angels in America," operators were still dealing with analog tape editing when fabricating or adjusting cues during rehearsals. It was still necessary to bring the materials back to a studio for re-timing, pre-balancing, and re-recording; hours were required to create effects or ambience loops. Routing sound effects to different speakers in the theater was still quite a chore, especially if the sounds had to be moved around in real time. MIDI-controlled panning and routing began to be used during this time, as did samplers for effects playback. "SFX," an early PC-based effects playback program, was prone to crashing during shows. These days "QLab," an all-digital Mac-based program, makes it much easier to manage this work, which can now be accomplished quickly and right at the digital mixing position in the theater; it is almost universally used at every level of theatrical production. Sound effects field capture is also made easier with hand-held sound recorders, eliminating the need for very expensive Nagra-type devices. These new developments make the sound designer a more integral part of the production team.
Between the 1990s and about 2005 the Cadac J-Type consoles were all the rage in New York and London stage productions. They came to incorporate motorized faders and recallable assignment switching, so complex sequences could be executed with consistency. Sound designs evolved in symbiosis with the availability of such control resources. In our era sound designers can achieve their goal of an all-digital chain from microphone output to speaker amp input.
As of now sound designers still can't reliably use the internet for importing elements into the theater in real time, as such systems often run very slowly on certain days, and producers are loath to install high-speed data lines for this purpose. Designers would prefer on-line acquisition of sound and music materials to combing through CDs, LPs, and their associated catalogs. Scott showed us one of his small, fast, single-hard-disk-drive libraries (complete with database), which he has backed up on several other drives.
Other challenges today include budget, time and in-theater real estate.
Theater sound consoles still cost north of $250,000 and must be used with show control software. By their nature as small-market items, these programs, along with high-quality mic preamps and RF equipment, force the designer to be very creative in order to make the budgeting process work.
Scott and Daryl told us that there is "never enough time to get it right." Usually two weeks are provided from installation to first audience performance. Designers often get to hear the orchestra run through the music only once before that important event. While lighting gets hours to finesse small changes, the sound department must often make adjustments during the first tech rehearsal or preview performance. The production sound operator is also the mixer in the U.S., while in England these are two separate people, enabling a second set of ears to roam the theater to note and perhaps remotely adjust equipment during rehearsals using iPads or notebook computers. Up to 20 or more stage and house speaker zones are used in some of today's productions, making such remote control essential.
Another challenge for designers and operators is consistency of levels during long-running productions. As the show's run continues, operators may tend to slowly increase or decrease overall levels due to their repeated hearing of the same dialog and lyrics. Intelligibility of dialog and lyrics in quiet passages needs to be finessed between the actors, director and designer. Performers need to allow the sound equipment to help expand their dynamic range while satisfying the producers' esthetics.
The presence or absence of audience box seating adjacent to the stage is often a challenge for the sound designer. A show using "big box" left and right front speaker systems would ideally locate them in these boxes, but the lighting department often gets first claim on those spaces. Therefore those speakers must be located farther from the stage and delayed to keep the apparent sound source "on the stage."
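The delay needed to keep the image "on the stage" follows directly from the speed of sound: hold the speaker back by at least the arrival-time advantage it would otherwise have over the stage source. A minimal sketch, with illustrative distances and margins not taken from the article:

```python
# Sketch: align a house speaker with the on-stage source using the
# precedence (Haas) effect. All figures here are illustrative.
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound at room temperature

def precedence_delay_ms(stage_dist_m: float, speaker_dist_m: float,
                        haas_margin_ms: float = 5.0) -> float:
    """Delay for a speaker closer to the listener than the stage source,
    so its sound arrives just AFTER the stage sound. A few milliseconds
    of margin keeps the speaker inside the Haas window (roughly 5-30 ms),
    where it adds level without pulling the apparent source off the stage."""
    path_difference_m = stage_dist_m - speaker_dist_m
    base_delay_ms = max(path_difference_m, 0.0) / SPEED_OF_SOUND_M_S * 1000.0
    return base_delay_ms + haas_margin_ms

# A listener 20 m from the stage, with a box speaker only 14 m away:
# the speaker must be held back ~17.5 ms plus the Haas margin.
print(f"{precedence_delay_ms(20.0, 14.0):.1f} ms")
```

The same arithmetic, run per seating zone, is what lets 20-plus delayed speaker zones all localize to the stage.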
During rehearsals of a recent Broadway play, cast members were so startled when first hearing sound effects and music cues originating from upstage speakers that the designer relocated those cues to new speakers which reflected off of the proscenium. The cues could now be played at an appropriate level for the audience while not distracting the actors.
Part of the talent of a sound operator is the ability to "feel" the sound balance without referring to numbers or markings on the console. It is also necessary to understand the performers' energy and pacing on a daily basis so that one can anticipate cues and transitions. This builds the performers' confidence in the sound department so that they can ride on the energy of the overall mix, and balance themselves against the effects and music cues. Marc stated that "You can't have loud if you don't have quiet."
The second part of the evening was devoted to technology demonstrations. First up were examples of the "Haas" effect. This phenomenon, also known as the precedence effect, reflects the fact that humans use level and time-of-arrival differences between the two ears to localize sounds. Techniques have evolved that use short time delays to place sounds across the sound field at or between speakers so that they will be accurately located by all audience members. Pools of sound can be created on the stage and in the house so that body mics and footlight mics can be associated with particular speakers during each scene or section of a scene, using time delay and computer-assisted positioning of the apparent sound source as the actors move around the stage.
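As a rough illustration of combining the two cues, the sketch below (hypothetical figures, not from the demonstration) positions a body mic between two speakers using a constant-power level law plus a small precedence delay on the speaker away from the image:

```python
import math

def pan_with_precedence(position: float, max_delay_ms: float = 10.0):
    """position: 0.0 = image fully at the left speaker, 1.0 = fully right.
    Returns (left_gain, right_gain, left_delay_ms, right_delay_ms).
    Constant-power gains keep perceived loudness even across the pan;
    delaying the speaker on the far side of the image lets the Haas
    (precedence) effect anchor the apparent source at the near speaker."""
    theta = position * math.pi / 2.0
    left_gain, right_gain = math.cos(theta), math.sin(theta)
    # The delay grows on the speaker the image should NOT pull toward.
    left_delay_ms = position * max_delay_ms
    right_delay_ms = (1.0 - position) * max_delay_ms
    return left_gain, right_gain, left_delay_ms, right_delay_ms

# An actor at stage left (position 0.0): left speaker full and undelayed,
# right speaker arriving 10 ms later so the image stays left.
print(pan_with_precedence(0.0))
```

Updating `position` from automation as the actor crosses the stage is the essence of the computer-assisted source positioning described above.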
The QLab system, mentioned earlier in this report, provides a means of playing sound cues and also moving them within a destination matrix. Cues can be fired based on time intervals and can also interact with lighting and other systems such as pyrotechnics and scenic movement. The free version of QLab provides two outputs, and the paid version expands to 48 independent channels of audio output per cue. Combined with Dropbox and other web-based large-file-sharing resources, it is now easy to receive, edit, and distribute revised sound cues among show team members.
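The destination-matrix idea, per-cue gains from each input channel to each speaker zone, can be illustrated with a small generic sketch (this is not QLab's actual API; the cue name and zone names are invented for the example):

```python
# Generic sketch of a cue routed through an output matrix, in the
# spirit of a destination matrix. Not QLab's API; names are invented.
from dataclasses import dataclass, field

@dataclass
class Cue:
    name: str
    # matrix[input_channel][zone_name] = gain, 0.0-1.0
    matrix: dict = field(default_factory=dict)

    def route(self, inputs: dict) -> dict:
        """Mix input-channel levels into per-zone output levels."""
        outputs: dict = {}
        for in_ch, level in inputs.items():
            for zone, gain in self.matrix.get(in_ch, {}).items():
                outputs[zone] = outputs.get(zone, 0.0) + level * gain
        return outputs

# A mono thunder effect sent mostly upstage, a little to the proscenium:
thunder = Cue("thunder", matrix={0: {"upstage": 0.8, "proscenium": 0.2}})
print(thunder.route({0: 1.0}))  # {'upstage': 0.8, 'proscenium': 0.2}
```

Re-patching a cue, as in the proscenium-reflection anecdote above, is then just a change to its matrix rather than a physical rewire.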