AES E-Library

Immersive Sound Rendering Using Laser-Based Tracking

This paper describes the underlying concepts behind the spatial sound renderer built at the University of Southern California's Immersive Audio Laboratory. In creating this sound rendering system, the authors faced three main challenges: first, the rendering of sound using head-related transfer functions (HRTFs); second, the cancellation of the crosstalk terms; and third, the localization of the listener's ears. To handle the spatial rendering of sound, a two-layer method of modeling the HRTFs was used: the first layer accurately reproduces the interaural time differences (ITDs) and interaural amplitude differences (IADs), and the second layer reproduces the spectral characteristics of the HRTFs. A novel method, based on low-rank modeling, was developed for generating the required crosstalk cancellation filters as the listener moves: using the Karhunen-Loeve expansion, the authors can interpolate among listener positions from a small number of HRTF measurements. Finally, a head detection algorithm that tracks the location of the listener's ears in real time using a laser scanner is presented.
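Below is a minimal Python/NumPy sketch of the low-rank HRTF interpolation and the crosstalk cancellation described in the abstract. It is not the authors' implementation: the data shapes, the linear interpolation of expansion weights, and the Tikhonov-regularized inverse used for the cancellation filters are assumptions made purely for illustration, and every name is hypothetical.

    import numpy as np

    def kl_basis(hrtf_magnitudes, rank):
        """Karhunen-Loeve (principal-component) basis of measured HRTFs.

        hrtf_magnitudes: (num_positions, num_freq_bins) matrix of HRTF magnitude
        responses measured at a small number of listener positions. Returns the
        mean response, the first `rank` basis vectors, and the per-position
        expansion weights.
        """
        mean = hrtf_magnitudes.mean(axis=0)
        centered = hrtf_magnitudes - mean
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:rank]                      # (rank, num_freq_bins)
        weights = centered @ basis.T           # (num_positions, rank)
        return mean, basis, weights

    def interpolate_hrtf(positions, weights, mean, basis, query_position):
        """Approximate the HRTF at an unmeasured position by linearly
        interpolating the low-rank expansion weights between the two nearest
        measured positions (assumed to lie on a 1-D azimuth grid)."""
        idx = np.clip(np.searchsorted(positions, query_position), 1, len(positions) - 1)
        p0, p1 = positions[idx - 1], positions[idx]
        t = (query_position - p0) / (p1 - p0)
        w = (1.0 - t) * weights[idx - 1] + t * weights[idx]
        return mean + w @ basis

    def crosstalk_cancellation_filters(H, reg=1e-3):
        """Per-frequency-bin crosstalk cancellation as a regularized inverse.

        H: (num_freq_bins, 2, 2) acoustic transfer matrices from the two
        loudspeakers to the listener's two ears at the current head position.
        """
        Hh = np.conj(np.transpose(H, (0, 2, 1)))             # Hermitian transpose per bin
        return np.linalg.inv(Hh @ H + reg * np.eye(2)) @ Hh  # Tikhonov-regularized inverse

    # Example with synthetic data: 12 measured azimuths, 256 frequency bins.
    positions = np.linspace(-90.0, 90.0, 12)
    measurements = np.abs(np.random.randn(12, 256))  # stand-in for measured magnitudes
    mean, basis, weights = kl_basis(measurements, rank=4)
    hrtf_25deg = interpolate_hrtf(positions, weights, mean, basis, 25.0)

In the setting the abstract describes, the interpolated weights would be updated whenever the laser-based head tracker reports a new ear position, and the cancellation filters would be recomputed from the corresponding transfer matrices.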

Authors:
Affiliation:
AES Convention:
Paper Number:
Publication Date:
Subject:


The paper costs $20 for non-members and $5 for AES members; it is free for E-Library subscribers.


E-Library Location:
