Thursday, September 29, 9:00 am — 10:30 am (Rm 404AB)
Abstract:
2016 brought numerical optimization to the forefront. Over 50% of the systems providing reinforcement for today's audiences either use, or have the ability to use, computation-based optimization technologies to deliver a superior audio experience to the audience. Call it what you will: array processing, adaptive technology, or numerical optimization, the technology is changing how we listen and how we deliver the audience experience. Experts in the design and implementation of optimized systems will discuss and explain the process, working to demystify and improve the everyday user's understanding of what this does for them and their audience.
Thursday, September 29, 10:45 am — 12:15 pm (Rm 404AB)
Abstract:
Nearly all sound reinforcement (SR) systems must have utility AC power. To keep them safe and legal, regulatory rules such as "Code" must be strictly observed. However, these compliant power distribution systems inherently create harmlessly small voltage differences, called ground voltage differences (GVD), between physical locations in the safety ground network. This session will explain:
1. Exactly how GVD is created in premises wiring;
2. New-construction wiring techniques that can reduce GVD by several orders of magnitude; and
3. How GVD couples into signal paths, causing hum, buzz, and myriad digital network problems.
Related topics such as electrician mistakes, earth grounding, power conditioning (including “balanced power”), and choosing proper signal cables will also be discussed.
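As a rough illustration of the mechanism described above (not material from the session itself): a GVD arises when stray current, such as equipment leakage, flows through the finite resistance of the shared safety-ground conductor, per Ohm's law. The sketch below uses hypothetical example values for the current, wire gauge, and run length.

```python
# Back-of-envelope estimate of ground voltage difference (GVD) between two
# outlets that share a safety-ground run. Illustrative only; all numeric
# values below are hypothetical assumptions, not figures from the session.

def gvd_volts(leakage_current_a: float,
              conductor_ohms_per_m: float,
              ground_path_m: float) -> float:
    """Ohm's law: V = I * R along the shared ground path."""
    return leakage_current_a * conductor_ohms_per_m * ground_path_m

# Example: 12 AWG copper is roughly 0.0052 ohm/m; assume 100 mA of total
# leakage current returning over a 30 m shared ground run.
v = gvd_volts(0.1, 0.0052, 30.0)
print(f"GVD ~ {v * 1000:.1f} mV")  # small, yet enough to couple audible hum
```

Millivolt-level differences like this are harmless to people but, as the abstract notes, readily couple into unbalanced signal paths as hum and buzz.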
Thursday, September 29, 2:15 pm — 3:45 pm (Rm 408B)
Abstract:
A panel with first-hand experience of these specialized and currently topical events has been assembled. Consultant Ken Fause and sound designers Patrick Baltzell, Stan Miller, and Michael Abbott will discuss how the sound aspects of the Republican and Democratic debates and conventions, the Presidential debates, and the inaugurations were handled in the distant and recent past. Sound reinforcement, broadcast sound, and audio communications will all be discussed.
Thursday, September 29, 4:00 pm — 5:30 pm (Rm 408B)
Abstract:
Spatial audio production relies on a combination of psychoacoustic cues, contextual expectations of sonic space, and, when done well, a sense of surprise and wonder at being immersed in a new soundscape. The means to achieve this involve a variety of multichannel loudspeaker and panning formats, including ambisonics, vector-based panning systems, WFS, and bespoke arrays of nearly any description. This panel will include sound designers, artists, technical production staff, and sound system/room acoustic designers discussing techniques for live production across a variety of immersive audio formats. Discussions will range from methods for creating and manipulating content for sonic space, to local monitoring techniques, to 3D production interfaces.
Friday, September 30, 9:00 am — 10:30 am (Rm 402AB)
Abstract:
This tutorial discusses what constitutes a difficult acoustic space and the effects of reverberation, echo, and noise on speech intelligibility. Sound-system intelligibility measurement methods and techniques will be reviewed, and system optimization techniques will also be examined. The tutorial includes numerous case histories and examples illustrating the techniques discussed. A fundamental question that will be raised is: how intelligible do sound systems need to be, and does this change with application, or is intelligibility universal? The case histories and examples will include churches, cathedrals and houses of worship, concert halls, ceremonial and banqueting halls, shopping malls, sports facilities and arenas, railway and metro stations, road tunnels, and airports. The challenges associated with each type of venue will be discussed, as will the thorny subject of aesthetics (i.e., architects) and loudspeaker size and placement.
Friday, September 30, 1:30 pm — 3:00 pm (Rm 408B)
Abstract:
Recent rulings by the FCC on RF spectrum as it applies to wireless microphones will have a profound effect on our industry in the years to come. Adding to the loss of the 700 MHz band just a few short years ago, the 600 MHz band has now been auctioned as well. Due to this upcoming loss of spectrum and the resulting crowding in the remaining UHF bands, the panel will also discuss the additional changes in regulations for wireless microphone compliance. Also covered will be the potential for new frequency bands to become available for wireless mic use on a shared basis, along with some of the other bands currently available, including VHF, 902-928 MHz, 941-960 MHz, 1.4 GHz, 2.4 GHz, 3.5 GHz, and 6 GHz.
Saturday, October 1, 9:00 am — 10:30 am (Rm 408B)
Abstract:
While digital mixing environments today incorporate many exciting new features, the main form of control is still based on channel strips adopted from analog mixers. But is this really the most intuitive, effective, and creative way to work with audio production? Since everything today is digital and the control interface is now completely separated from the audio processing, we can design our control interfaces in any way we want. In this seminar, future thinkers from academia and R&D, as well as practitioners, will discuss what the future mixing environment might look like. The discussion will look at alternative paradigms for controlling audio in various contexts, asking questions about the importance of tradition, intuition, layout, touch, tangibility, speed, overview, feedback, visuals, etc.
Saturday, October 1, 1:30 pm — 3:00 pm (Rm 408B)
Abstract:
Producing live events directly to the various VR platforms requires special techniques and workflows. This session will discuss the following:
1) 3D Live Sound Acquisition—positionally significant, location-specific microphones, such as at the basketball rim or the cup on the 18th hole, and the associated challenges, as well as binaural, B-format, etc.;
2) Creating the Bed—the VR audio must not jar the viewer out of the moment, so creating a realistic but familiar-feeling 3D bed on which to layer the special tracks is critical to success;
3) Monitoring and Mixing for Live VR—how does one monitor for all possible viewing angles while also keeping up with the live action and the primary viewing angle? Special challenges require some special solutions;
4) QC-Checking the Final Product—the best-laid plans can get complicated when the various VR platforms attempt to render out the streams provided. Care must be taken to ensure success on all viewing platforms.
Sunday, October 2, 9:00 am — 10:30 am (Rm 406AB)
Abstract:
Is too much low end ruining the listening experience for the audience? Over the years, Howard Page has prescribed low-end diets for those struggling to unveil clarity in their concert audio systems. This session will reveal Howard’s approach to removing the variables to deliver a consistent listening experience that places the creative focus where it belongs—at the mixing console.
Sunday, October 2, 10:45 am — 12:15 pm (Rm 406AB)
Abstract:
In November 2015 Los Angeles was struck by Hopscotch, a large-scale, site-specific mobile opera produced by experimental opera company The Industry. A one-of-a-kind experience for audience and crew alike, Hopscotch consisted of 24 simultaneously performed scenes that took place inside 18 limousines and in various locations across downtown Los Angeles. The project's lead A/V technician, Edward Carlson, talks about the many audio challenges faced while building a show of this scale and complexity. Between fighting frequencies downtown, lugging antennas to the roof of an apartment building, and live-streaming all of it to the audience's headphones in a central hub, it's no overstatement to say that Hopscotch was a tremendous feat.