Retrieving sounds from large databases can benefit from novel paradigms that exploit human-computer interaction. This work introduces the use of non-speech voice imitations as input queries to a large user-contributed sound repository. First, we analyze the properties of the human voice when imitating sounds. Second, we study the automatic clustering of voice imitations by means of user experiments. Finally, we present an evaluation that demonstrates the potential of content-based classification using voice imitations as input queries. Future perspectives for using voice interfaces in sound-asset retrieval are outlined.