Meeting Topic: Auditory Mechanisms for Spatial Hearing
Moderator Name: Greg Dixon
Speaker Name: James D. (JJ) Johnston - Chief Scientist, Immersion Networks
Other business or activities at the meeting: The next PNW meeting was announced: Bob Olhsson will reflect on his days at Motown, Feb. 23, 2021.
Meeting Location: Redmond, WA - Via Zoom
The AES PNW Section presented a tutorial on the elements of human auditory mechanisms for spatial hearing for its January 2021 meeting on Zoom. AES Fellow and PNW Member James Johnston (JJ) gave this lecture, and approximately 74 persons attended (about 41 were AES members) from around the world.
JJ began with a review of what one ear can do, describing it as a frequency-sensitive device with filters and detectors. The hair cell structure and its filtering action were described, leading to a discussion of the difference between loudness and intensity. Detector output was explained, as were the system's non-linearities.
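To make the loudness/intensity distinction concrete: intensity is a physical quantity, commonly expressed as a level in dB relative to a reference of 10⁻¹² W/m², while loudness is a perceptual quantity that also depends on frequency and on the ear's non-linear processing. A minimal sketch of the physical side only (the input value is illustrative, and this deliberately does not model loudness):

```python
import math

# Standard reference intensity (approximate threshold of hearing at 1 kHz).
I0 = 1e-12  # W/m^2

def intensity_level_db(intensity_w_per_m2):
    """Sound intensity level in dB re 1e-12 W/m^2."""
    return 10.0 * math.log10(intensity_w_per_m2 / I0)

# An intensity of 1e-6 W/m^2 corresponds to a 60 dB intensity level.
print(intensity_level_db(1e-6))  # 60.0
```

Predicting loudness from this number would additionally require a perceptual model (for example, equal-loudness contours), which is exactly the distinction the lecture drew.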
Next, JJ discussed how, with two ears and the acoustic effects of the head's shape, the brain can process much more. He detailed room acoustics and direct versus reflected sound, including diffuse and specular reflections, then covered reverberation effects and the Head-Related Transfer Function (HRTF); the Head-Related Impulse Response (HRIR) is the time-domain version of the HRTF. Distance perception and headphone problems were also mentioned.
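The HRTF/HRIR relationship mentioned above is just a Fourier-transform pair: transforming an HRIR gives the HRTF, and the inverse transform recovers the HRIR. A small NumPy sketch (the filter values are made up for illustration; real HRIRs are measured per ear and per source direction):

```python
import numpy as np

# Toy stand-in for a measured HRIR (an FIR impulse response).
hrir = np.array([0.0, 0.9, 0.4, -0.2, 0.1, 0.0, 0.0, 0.0])

# The HRTF is the frequency-domain version: the Fourier transform of the HRIR.
hrtf = np.fft.rfft(hrir)

# The inverse transform takes the HRTF back to the time domain.
hrir_back = np.fft.irfft(hrtf, n=len(hrir))

print(np.allclose(hrir, hrir_back))  # True
```

In practice, spatializing a sound for headphone playback amounts to convolving it with the left- and right-ear HRIRs for the desired direction.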
Many Q&As were fielded, and attendees were encouraged to stay on Zoom to introduce themselves. The meeting video, slides, and notes will be available through the PNW Section website meeting archive at:
JJ earned BSEE and MSEE degrees from Carnegie-Mellon University, Pittsburgh, PA, in 1975 and 1976 respectively. He retired in 2002 after working for 26 years at AT&T Bell Labs and its successor, AT&T Labs Research. He was one of the first investigators in the field of perceptual audio coding, and one of the inventors and standardizers of MPEG-1/2 Audio Layer 3 and MPEG-2 AAC, as well as of the AT&T Bell Labs/AT&T Labs-Research PXFM (perceptual transform coding) and PAC (perceptual audio coding) codecs and the ASPEC algorithm, which provided the best audio quality in the MPEG-1 audio tests. Most recently he has been working on auditory perception of soundfields, electronic soundfield correction, ways to capture and represent soundfield cues, and ways to expand the limited sense of realism available in standard audio playback for both captured and synthetic performances. After AT&T he was employed by Microsoft and then by Neural Audio and its successors; he is currently Chief Scientist of Immersion Networks. Mr. Johnston is an IEEE Fellow, an AES Fellow, a NJ Inventor of the Year, an AT&T Technical Medalist and Standards Awardee, and a co-recipient of the IEEE Donald Fink Paper Award. He has presented many times for the PNW Section; you can see the depth and breadth of the topics he's covered for our section at: https://www.aes.org/sections/pnw/jj.htm.
In 2006, he received the James L. Flanagan Signal Processing Award from the IEEE Signal Processing Society, and he presented the 2012 Heyser Lecture at the AES 133rd Convention, "Audio, Radio, Acoustics and Signal Processing: the Way Forward." In 2020 he was a co-recipient of the IEEE Signal Processing Society Industrial Innovation Award with Karlheinz Brandenburg and Jürgen Herre, "for contributions to the standardization of audio coding technology."
Written By: Gary Louie