Disguise or masking, the audio equivalent of pixelation in video, is used to protect the identity of vulnerable witnesses or undercover officers in police interviews. We propose an application of semi-automatic speaker recognition to automate the audio segmentation that this disguise or masking process requires. The accuracy of the semi-automatic speaker segmentation was measured against baseline references annotated by a human audio expert, using three metrics: overlap error, border accuracy, and feathered border accuracy. These were evaluated across analysis block sizes, numbers of Gaussian components, and training-data lengths. Our experiments show low error rates for speaker separation and demonstrate that accuracy metrics must take transition boundaries between speakers into account. We also examined the influence of the content of the training utterances and of noise on performance. The approach was tested on real-case data used by the London Metropolitan Police.
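The block-wise segmentation idea summarised above can be illustrated with a minimal sketch: train a likelihood model per speaker, label each analysis block with the higher-scoring speaker, and score the result against a reference annotation with an overlap-error metric. The single-Gaussian scalar-feature models and the exact metric definition here are simplifying assumptions for illustration, not the paper's actual GMM configuration or feature front end.

```python
# Hedged sketch of block-wise speaker labelling and an overlap-error metric.
# Assumptions (not from the paper): one scalar feature per analysis block,
# a single 1-D Gaussian per speaker instead of a multi-component GMM.
import math

def gaussian_loglik(x, mean, var):
    """Log-likelihood of scalar feature x under a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def train_speaker_model(features):
    """Fit a single-Gaussian model (mean, variance) to training features."""
    n = len(features)
    mean = sum(features) / n
    # Floor the variance so degenerate training data cannot yield var == 0.
    var = sum((f - mean) ** 2 for f in features) / n or 1e-6
    return mean, var

def segment(blocks, model_a, model_b):
    """Label each analysis block with the higher-likelihood speaker."""
    return ['A' if gaussian_loglik(b, *model_a) >= gaussian_loglik(b, *model_b)
            else 'B'
            for b in blocks]

def overlap_error(hypothesis, reference):
    """Fraction of blocks whose hypothesised speaker disagrees with the
    reference annotation (a crude stand-in for the paper's metrics)."""
    wrong = sum(h != r for h, r in zip(hypothesis, reference))
    return wrong / len(reference)

# Toy usage: two well-separated speakers, four test blocks.
model_a = train_speaker_model([1.0, 1.2, 0.9])
model_b = train_speaker_model([5.0, 5.1, 4.8])
labels = segment([1.1, 5.0, 0.95, 4.9], model_a, model_b)
error = overlap_error(labels, ['A', 'B', 'A', 'B'])
```

In the paper's setting the per-speaker models are Gaussian mixture models and the evaluation additionally treats the transition boundaries between speakers specially (border and feathered-border accuracy), which this scalar sketch does not attempt to reproduce.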