Overlapped speech, where several speakers speak simultaneously, is a common occurrence in multiparty discussions such as meetings. This kind of speech presents a significant challenge to automatic speech processing systems such as speech recognition and speaker diarisation. In recent speaker diarisation systems, a large portion of the remaining error is attributable to overlapped speech. So far, little work has been done on detecting overlapped speech and the number of speakers present in it. In this paper we first describe a model-based approach for estimating the number of simultaneous speakers. We then propose a new approach, called Spectral Peak Clustering, in which, instead of training statistical models, we extract spectral peaks from the input data and cluster them into components using a similarity measure between peaks, where each component represents a speaker present in the input.
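The abstract does not specify the similarity measure used by Spectral Peak Clustering. As a minimal illustrative sketch only, assume peaks from the same speaker tend to be harmonically related (their frequencies stand in near-integer ratios); we can then group peaks with a union-find pass over pairwise harmonic similarity and take the number of resulting components as the speaker-count estimate. The function names, the harmonic-ratio criterion, and the tolerance are illustrative assumptions, not the paper's method.

```python
def harmonically_related(p, q, tol=0.02):
    """Illustrative similarity: two peak frequencies (or bin indices) are
    'similar' if their ratio is within a relative tolerance of an integer,
    i.e. they could be harmonics of a common fundamental. This criterion
    is an assumption for the sketch, not the measure from the paper."""
    ratio = max(p, q) / min(p, q)
    n = round(ratio)
    return n >= 1 and abs(ratio - n) <= tol * n

def estimate_num_speakers(peak_bins, tol=0.02):
    """Cluster spectral peaks into components via union-find over the
    pairwise similarity; each connected component is taken to represent
    one speaker, so the estimate is the number of components."""
    parent = list(range(len(peak_bins)))

    def find(i):
        # path-halving find for the union-find structure
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(peak_bins)):
        for j in range(i + 1, len(peak_bins)):
            if harmonically_related(peak_bins[i], peak_bins[j], tol):
                parent[find(i)] = find(j)  # merge the two components

    return len({find(i) for i in range(len(peak_bins))})

# Two synthetic harmonic series: speaker A at bins 10, 20, 30 and
# speaker B at bins 13, 26, 39 -> two components are recovered.
print(estimate_num_speakers([10, 20, 30, 13, 26, 39]))  # → 2
```

In a full pipeline the peak bins would come from per-frame peak picking on a magnitude spectrum (e.g. local maxima above a noise floor); the clustering step shown here is independent of how the peaks are obtained.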