Pacific Northwest AES Section Blog

Past Event: Masking — What is it, and when does it happen?

January 25, 2022 at 6:00 pm

Location: Cyberspace, via Zoom

Moderated by: Greg Dixon - Chair, AESPNW Section

Speaker(s): James D. (jj) Johnston - Immersion Networks

The Topic

Lately, there have been discussions about masking on several fronts, from "can I hear this instrument over that instrument?" to "can I hear this at all?" The answer lies in the phenomenon of masking: the cochlear receptors, which have a total dynamic range of roughly 90 dB, are in effect 30 dB receptors that are gain-ranged. As a result, a second signal that is very close in frequency to a stronger signal is almost certainly inaudible if it is 30 dB down, and in some cases is gone at as little as 5.5 dB, or even 3.5 dB, below the masking signal.
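As a toy illustration of the idea (not the talk's model), the within-band audibility test can be sketched as a single level comparison; the 5.5 dB and 30 dB offsets below are the illustrative figures quoted above:

```python
def is_masked(masker_db, probe_db, offset_db=5.5):
    """Return True if a probe signal in the SAME cochlear filter band
    is inaudible next to a stronger masker.

    offset_db is an assumed masking offset: a probe can vanish as little
    as 5.5 dB (or even 3.5 dB) below the masker, and is almost certainly
    gone by 30 dB down. Illustrative numbers, not a psychoacoustic model.
    """
    return probe_db <= masker_db - offset_db

print(is_masked(80.0, 70.0))  # 10 dB down: masked under the 5.5 dB offset -> True
print(is_masked(80.0, 78.0))  # only 2 dB down: still audible -> False
```

The comparison only makes sense when both signals fall within one cochlear filter bandwidth, which is the point of the paragraph that follows.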

To get the most from this talk, it would be a very good idea to first listen to Hearing 099, presented in April 2019. Hearing 099 described the actual filtering that takes place in the cochlea; masking operates within each cochlear filter bandwidth, so signal spectra must be taken fully into account when examining masking phenomena.
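The talk builds on the cochlear filters covered in Hearing 099. As a rough stand-in for "one cochlear filter bandwidth," the widely used Glasberg and Moore (1990) equivalent-rectangular-bandwidth (ERB) approximation gives a sense of how wide those filters are; whether two signals interact strongly in masking depends on whether they land in the same filter:

```python
def erb_hz(f_hz):
    """Equivalent rectangular bandwidth of the cochlear filter centered
    at f_hz, per the Glasberg & Moore (1990) approximation:
        ERB = 24.7 * (4.37 * f_kHz + 1)
    """
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def same_cochlear_band(f1_hz, f2_hz):
    """Crude check: do two tones fall within roughly one cochlear
    filter bandwidth of each other? (Illustrative, not the talk's model.)"""
    center = (f1_hz + f2_hz) / 2.0
    return abs(f1_hz - f2_hz) <= erb_hz(center)

print(round(erb_hz(1000.0), 1))          # about 132.6 Hz at 1 kHz
print(same_cochlear_band(1000.0, 1050.0))  # 50 Hz apart at 1 kHz -> True
print(same_cochlear_band(1000.0, 2000.0))  # an octave apart -> False
```

This is why a spectrum-blind level comparison is not enough: a signal 30 dB down an octave away from a masker can be perfectly audible, while the same level difference inside one filter band renders it inaudible.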

After discussion of the monaural situation, a discussion of the binaural situation will be included, pointing out the Binaural Masking Level Difference (which can drop the local-spectrum masking level from -5.5 dB to -30 dB, leading to the "Suzanne Vega Effect"), and also pointing out the need for proper "panning" methods that allow unmasking of signals that would otherwise be inaudible.
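Using only the figures quoted above, the binaural release from masking can be sketched as an extra term on the within-band masking offset; a release of up to 24.5 dB is what takes the threshold from -5.5 dB down to -30 dB (a hypothetical sketch, not measured data):

```python
def effective_masking_offset_db(monaural_offset_db=5.5, bmld_db=0.0):
    """Masked-threshold offset relative to the masker, in dB.

    Binaural unmasking (the BMLD) can deepen the within-band masked
    threshold: with the figures quoted above, a 24.5 dB release takes
    the offset from 5.5 dB below the masker to 30 dB below it.
    Illustrative numbers only.
    """
    return monaural_offset_db + bmld_db

print(effective_masking_offset_db())              # 5.5: masked if >5.5 dB down
print(effective_masking_offset_db(bmld_db=24.5))  # 30.0: masked only if >30 dB down
```

In other words, binaural cues can make a signal audible that a level-only, monaural analysis would write off as masked, which is why panning methods that preserve those cues matter.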

The Presenter

James D. (jj) Johnston is Chief Scientist of Immersion Networks. He has a long and distinguished career in electrical engineering, audio science, and digital signal processing. His research and product inventions span hearing and psychoacoustics, perceptual encoding, and spatial audio methodologies.

He was one of the first investigators in the field of perceptual audio coding, and one of the inventors and standardizers of MPEG-1/2 Audio Layer 3 and MPEG-2 AAC. Most recently, he has been working in the area of auditory perception and ways to expand the limited sense of realism available in standard audio playback for both captured and synthetic performances.

Johnston worked for AT&T Bell Labs and its successor AT&T Labs Research for two and a half decades. He later worked at Microsoft and then Neural Audio and its successors before joining Immersion. He is an IEEE Fellow, an AES Fellow, an NJ Inventor of the Year, an AT&T Technical Medalist and Standards Awardee, and a co-recipient of the IEEE Donald Fink Paper Award. In 2006, he received the James L. Flanagan Signal Processing Award from the IEEE Signal Processing Society, and in 2012 he presented the Heyser Lecture at the AES 133rd Convention: "Audio, Radio, Acoustics and Signal Processing: the Way Forward." In 2021, along with two colleagues, Johnston was awarded the Industrial Innovation Award by the Signal Processing Society "for contributions to the standardization of audio coding technology."

Mr. Johnston received the BSEE and MSEE degrees from Carnegie-Mellon University, Pittsburgh, PA, in 1975 and 1976, respectively.

To RSVP

RSVP link

View Official Meeting Report

More Information


Posted: Friday, December 31, 2021
