Thursday, September 29, 9:00 am — 10:30 am (Rm 402AB)
At McGill University an interdisciplinary seminar brought together scientific researchers and experts in many fields that consider “listening” one of their fundamental activities. The course explored how learned auditory skills and fine discrimination constitute an essential requirement for the practice of various professions. While critical listening to music was the guiding motif, invited speakers led the class in exploring “listening” as a main component of many areas of human life, while recognizing the connections among them. Topics included critical listening in music performance, sound recording and engineering, record production, and musical instrument making, as well as “listening” in psychoacoustics, perception and cognition, the neurosciences, education, psychiatry, media studies, urban planning, and the preservation of oral tradition through storytelling.
Thursday, September 29, 10:45 am — 12:15 pm (Rm 402AB)
Product management has developed into an interdisciplinary field that takes a holistic view of the entire product life cycle from inception to end of life, bringing together strategy, market needs, opportunity analysis, product plans, product development and engineering, manufacturing and operations, financial management, marketing, customer support, and additional functions as needed to ensure products meet customer needs and business requirements. This tutorial will walk through the major product management implications across the product life cycle and discuss key interdisciplinary functions for each phase.
Friday, September 30, 9:00 am — 10:30 am (Rm 404AB)
Listening tests and other data collection methods that rely on human responses are important tools for audio professionals, as these methods advance our understanding of audio quality. There are numerous examples of such tests, either formally recommended and widely used, or specially devised for a single occasion. In order to understand listening tests and related methods, and to design them and fully benefit from their results, some basic knowledge is required. This tutorial is aimed at audio professionals without prior knowledge of listening test design and evaluation. It will cover the fundamentals of what to ask, how to ask it, whom to engage as listeners, what sort of results may be expected, and similar issues. The goal is to build an understanding of the basic concepts of experimental design so that audio professionals can appreciate the possibilities of listening tests.
|This session is presented in association with the AES Technical Committee on Perception and Subjective Evaluation of Audio Signals|
Friday, September 30, 10:45 am — 12:15 pm (Rm 404AB)
A podcast can be more than a monologue or an interview; it can be a rich environment for using sound to tell your story. One can use sound in many ways, whether to set the scene, illustrate a concept, or enliven a journalistic endeavor. The talk will take the audience on an international aural journey from the “hollers” of Kentucky to the streets of Grenada in search of sounds. Drawing on his deep background in broadcasting, Jim Anderson will demonstrate the power of sound to illustrate and enrich a podcast.
Friday, September 30, 5:00 pm — 6:30 pm (Rm 404AB)
This Master Class will review the control and monitoring requirements of EDM performers and concert tours using extensive electronic instruments and backing tracks, then walk through several case studies using recording studio technologies to build custom, real-time control and audio networks for use on stage. Application technologies include using SMPTE time code, MADI audio networking, and multiple control surfaces to manage content, control code, and audio signals from Ableton Live, Pro Tools, and Virtual Instruments. Case studies will include custom solutions for high profile concert tours and development of a solution for the EDM group UNA Music.
Saturday, October 1, 9:00 am — 10:30 am (Rm 404AB)
Fundamental acoustics are an essential building block of any audio engineering application. This tutorial will review fundamental wave theory, reflections, absorption, diffraction, echoes, reverb, and standing waves. We will also review implications for listening locations, speaker placement, and intelligibility, along with the audible symptoms of various acoustic anomalies: comb filters, standing waves, and sound field integration and accuracy. The tutorial will include case studies of these principles in action: tuning a studio (absorption/diffusion/bass traps/EQ) and speaker placement for live sound.
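One of the anomalies the tutorial lists, the comb filter, follows directly from wave theory: a single reflection arriving some delay after the direct sound cancels at frequencies where the delay equals an odd number of half-periods. A minimal sketch (assuming an idealized reflection of equal amplitude to the direct path; not material from the session itself):

```python
def comb_notches(delay_s, f_max):
    """Notch frequencies (Hz) up to f_max for a single reflection
    arriving delay_s seconds after the direct sound.

    Destructive interference occurs where the path difference equals
    an odd number of half-periods: f = (2k + 1) / (2 * delay_s).
    """
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)
        if f > f_max:
            break
        notches.append(f)
        k += 1
    return notches

# A reflection path 1 ms longer than the direct path produces
# notches near 500 Hz, 1.5 kHz, 2.5 kHz, ...
print(comb_notches(0.001, 5000))
```

The regular spacing of the notches is what gives the "comb" its name, and why a short delay (a nearby reflecting surface) colors a much wider band than a long one.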
Saturday, October 1, 1:30 pm — 2:15 pm (Rm 404AB)
Over 85% of people listen to their music on headphones. Plain stereo on headphones, then, shouldn’t be the end but the beginning of more involving music experiences built on the flexibility of binaural headphone virtualization. The tutorial will show how to do this with common DAWs and tools and provide many listening examples to inspire music producers.
Saturday, October 1, 3:15 pm — 4:45 pm (Rm 406AB)
CBT (Constant Beamwidth Transducer) is a term originated by the U.S. military in a series of declassified Naval Research Lab underwater-sound papers published through the ASA in the late 1970s and early ’80s. Mr. Keele applied the technology to loudspeaker arrays in a series of nine AES papers between 2000 and 2015, and three more will be presented at this convention.
CBT arrays provide broadband constant-directivity/constant-beamwidth behavior, with 3D sound-field control that is exceptionally uniform and well behaved with frequency at all distances, and outstanding directional performance and coverage control. CBT possibilities extend over the full loudspeaker product range: professional, commercial, consumer, home theater, computer, and multimedia. Don will discuss the background and history of CBT arrays, including the implementation of typical arrays. A CBT array can be implemented very simply as a passive circular-arc loudspeaker configuration that requires no sophisticated DSP signal processing, only simple level changes. Straight-line CBT arrays can also be implemented, but these require complex multi-amp configurations with individual DSP delay blocks driving each speaker, or complex passive RLC delay networks.
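The "simple level changes" mentioned above amount to frequency-independent level shading across the arc: each driver is attenuated by a fixed amount according to its distance from the array center. The following sketch uses a generic cosine-squared taper purely for illustration; it is a stand-in, not Keele's published shading function:

```python
import math

def arc_shading_db(n_drivers, taper=lambda u: math.cos(math.pi * u / 2) ** 2):
    """Per-driver attenuation (dB) for a circular-arc array.

    u runs from 0 at the array center to 1 at the arc edge; taper(u)
    gives the linear gain at each position. n_drivers must be odd and
    greater than 1. Gains are clamped at -120 dB, so the edge drivers
    of the default taper read as 120.0.
    """
    levels = []
    for i in range(n_drivers):
        # normalized distance of driver i from the array center
        u = abs(i - (n_drivers - 1) / 2) / ((n_drivers - 1) / 2)
        gain = taper(u)
        levels.append(round(-20 * math.log10(max(gain, 1e-6)), 1))
    return levels

# 7-driver arc: center driver at full level, outer drivers tapered down
print(arc_shading_db(7))
```

Because the attenuations are fixed, they can be realized with passive pads or transformer taps, which is what makes the circular-arc form practical without per-driver DSP.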
|This session is presented in association with the AES Technical Committee on Loudspeakers and Headphones|
Saturday, October 1, 5:00 pm — 6:30 pm (Rm 406AB)
This tutorial will detail and demonstrate the many ways ears and speech have evolved to utilize the phase relationships of vocal harmonics to separate sonic information from complex and noisy environments. Early reflections randomize these phases, and in most rooms at some distance the ability to detect these phases is lost. Speech becomes difficult to localize, intelligibility decreases, and information is difficult to recall. We call this the Limit of Localization Distance, or LLD. We believe the number of seats within the LLD is one of the most important determinants of acoustic quality. The tutorial participants will be able to hear for themselves how the LLD can be easily determined by simply walking around with eyes closed during a lecture, a rehearsal or a performance.
Sunday, October 2, 9:00 am — 10:30 am (Rm 404AB)
Developing sample libraries for virtual and hardware instruments requires a complex balance of recording knowledge, understanding of the real-world behavior of the instrument being sampled, detailed editing, and final programming of the software or hardware product. This tutorial will review the general process of planning, recording, editing, and programming a sample library using real-world case studies on pianos and drums.
Sunday, October 2, 10:45 am — 12:15 pm (Rm 404AB)
Practical science is a basic review of common audio and physics principles that serve as foundational building blocks for critical listening, system design, and system operation in both live sound and recording environments. Best practices describe how the science fits into audio workflows at all levels.
Sunday, October 2, 1:30 pm — 3:00 pm (Rm 406AB)
An overview of common cables used for analog and digital audio transmission: basic construction, applications, and considerations, plus an objective comparison of marketing claims versus science.
Sunday, October 2, 1:30 pm — 3:00 pm (Rm 404AB)
Daniel Shores has recorded numerous critically acclaimed solo piano records for several labels. The styles and music have covered everything from Bach to amplified and looped piano to ensembles with other instruments, in genres from jazz to chamber orchestra. Each session has its own unique character and voice, highlighting the repertoire and the individual musician’s playing style.
This tutorial will demonstrate different techniques and approaches to accentuating the music and capturing the sound. He will discuss how both the music and the player shape the approach, and how a few inches can truly make the music jump off the page. The session includes pictures, video, and sound examples from both Sono Luminus Studios and the new Steinway Hall in New York, captured using their new Spirio piano.
Sunday, October 2, 3:15 pm — 4:45 pm (Rm 404AB)
In 2015, Sono Luminus began its experimentation with and implementation of 9.1 Auro-3D recording. To date, Sono Luminus has created numerous 9.1 recordings, five of which are now commercially available, with more on the way. Immersive audio takes the home consumer’s listening experience to the next level, allowing Sono Luminus to deliver an even more in-depth, intriguing, and unique listening experience.
Recording both on location and in the 100-year-old converted church in the Virginia countryside that is now home to Sono Luminus Studios, Sono Luminus focuses on techniques for capturing native immersive audio rather than mixing for the format. In the end, though, it is all about serving the music, and we have taken the opportunity to work with the musicians and composers to develop recordings that bring out all of the musical nuances in a way not possible before.
Examples in this tutorial will demonstrate various styles of music, including choral, early music, Celtic, percussion, electronics, and experimental music, to name a few, spanning vastly different instrumentation and sonic textures.
Sunday, October 2, 3:15 pm — 4:45 pm (Rm 406AB)
In order to extend the description of diffusive surfaces to their effects in the sound field, this Diffusion Tutorial will present several technical concepts and new definitions: impulse responses, including their measurement, analysis, and “texture”; the statistical and temporal acoustic behavior of a room, including the crossover time between the room’s early and late sound fields and how to measure it; what a diffuser is (it is all about autocorrelation); how many diffusers should be installed in a given space (theater, studio, concert hall, etc.); the acoustic properties of a surface as described by absorption, diffusion, and scattering coefficients; and what a diffuse sound field is, with a definition and quantification of sound field diffusivity (SFDC), including a systematic experiment, the SFDC calculation method, examples, and applications.
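The "it is all about autocorrelation" framing can be made concrete: a diffuser's effect shows up as low correlation between an impulse response and time-shifted copies of itself. A minimal sketch with a toy, hypothetical impulse response (the data and function name are illustrative, not from the tutorial):

```python
def autocorr(x):
    """Normalized autocorrelation of a sequence for all non-negative lags.

    The zero-lag value is always 1.0; a response that decorrelates
    quickly with lag is, loosely, more 'diffuse'.
    """
    n = len(x)
    energy = sum(v * v for v in x)
    return [sum(x[i] * x[i + lag] for i in range(n - lag)) / energy
            for lag in range(n)]

h = [1.0, 0.0, 0.5, 0.0, 0.25]   # toy impulse response (assumed data)
print(autocorr(h))
```

In practice one would compute this on a measured room impulse response and examine how fast the correlation decays relative to the crossover time the tutorial defines.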