Mobile robotic platforms are equipped with multimodal, human-like sensing, e.g. haptics, vision, and audition, to collect data from their environment. Recently, robotic binaural-hearing approaches based on Head-Related Transfer Functions (HRTFs) have become a promising technique for localizing sounds in a three-dimensional environment with only two microphones. Usually, HRTF-based sound-localization approaches are restricted to a single sound source. To overcome this limitation, Blind Source Separation (BSS) algorithms have been used to separate the sound sources before applying HRTF localization. However, such approaches are usually computationally expensive and, in the underdetermined case, restricted to sparse and statistically independent signals. In this paper we present an underdetermined sound-localization method that utilizes a superpositioned HRTF database. Our algorithm can localize sparse as well as broadband signals, even when the signals are not statistically independent.
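To illustrate the general idea of HRTF-style binaural localization (not the paper's actual algorithm), the following is a minimal sketch. It assumes a toy "database" in which each azimuth is characterized only by an interaural time difference (ITD) in samples, a crude stand-in for full measured HRTF pairs; the names `DATABASE`, `render`, and `localize` are hypothetical and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000  # samples per test signal

# Hypothetical toy "HRTF database": each azimuth (degrees) is modeled only
# by an interaural time difference (ITD) in samples -- a crude stand-in
# for full measured HRTF pairs.
DATABASE = {-60: -8, -30: -4, 0: 0, 30: 4, 60: 8}

def render(source, itd):
    """Simulate binaural capture: the right-ear signal lags the left by `itd`."""
    return source, np.roll(source, itd)

def localize(left, right, database):
    """Estimate the ITD from the cross-correlation peak between the two ear
    signals and return the database azimuth whose stored ITD matches best."""
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)
    return min(database, key=lambda az: abs(database[az] - lag))

source = rng.standard_normal(n)          # broadband noise source
left, right = render(source, DATABASE[30])
print(localize(left, right, DATABASE))   # prints 30
```

Note that this single-ITD matching breaks down exactly where the paper's contribution lies: with several simultaneous sources the cross-correlation exhibits multiple peaks, which is what motivates matching against superpositions of database entries instead of single entries.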