This work describes a new method for estimating the orientation of an active sound source using a distributed microphone network. The technique requires a set of microphone pairs distributed in a room; it then exploits the coherence computed for each sensor pair to derive an estimate of the head orientation. To evaluate the algorithm's behavior, a database was collected consisting of an audio sequence reproduced by a loudspeaker at different orientations and positions. Experiments conducted on that database show that our approach provides an effective estimate of the sound source orientation, with an RMS error of about 10 degrees.
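The abstract does not give the estimator's details, but the idea it describes (a directive source radiates most energy toward the microphone pairs it faces, so those pairs observe higher inter-microphone coherence) can be sketched as a coherence-weighted circular mean of the bearings from the source to each pair. This is a minimal illustrative sketch under that assumption, not the paper's actual algorithm; the bearing and coherence values below are hypothetical.

```python
import math

def estimate_orientation(pair_bearings_deg, coherences):
    """Estimate source orientation (degrees) as the coherence-weighted
    circular mean of the bearing angles from the source to each
    microphone pair. Pairs the source faces get higher coherence and
    therefore pull the estimate toward their bearing."""
    sx = sum(c * math.cos(math.radians(b))
             for b, c in zip(pair_bearings_deg, coherences))
    sy = sum(c * math.sin(math.radians(b))
             for b, c in zip(pair_bearings_deg, coherences))
    return math.degrees(math.atan2(sy, sx)) % 360.0

# Hypothetical setup: six pairs evenly spaced around the source;
# coherence peaks for the pairs near 90-120 degrees, so the estimate
# should land in that sector.
bearings = [0, 60, 120, 180, 240, 300]
coh = [0.2, 0.7, 0.8, 0.3, 0.1, 0.1]
angle = estimate_orientation(bearings, coh)
```

The circular mean (summing unit vectors rather than averaging angles directly) avoids the wrap-around problem at 0/360 degrees, which matters because the pairs surround the source.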