With the growing amount of multimedia data available everywhere and the need for efficient methods to browse and index this plethora of audio content, automated musical similarity search and retrieval has gained considerable attention in recent years. This paper presents a system that combines a set of perceptual low-level features with appropriate classification strategies for the task of retrieving similar-sounding songs in a database. A method for analyzing the classification results is presented that avoids time-consuming subjective listening tests during feature selection and combination. It is based on a calculated "similarity index," which reflects the similarity between specifically embedded similarity pairs. The system's performance, as well as the usefulness of the analysis method, is evaluated by a subjective listening test.
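The abstract does not specify how the "similarity index" is computed. As an illustration only, one plausible rank-based formulation is sketched below: for each pair of songs known a priori to be similar (the "embedded similarity pairs"), measure how highly one pair member ranks among the other's nearest neighbours in feature space. The function name, the Euclidean distance metric, and the normalisation are all assumptions, not the paper's method.

```python
import numpy as np

def similarity_index(features, pairs):
    """Hypothetical rank-based similarity index (illustrative sketch).

    features : (n_songs, n_features) array of low-level feature vectors
    pairs    : list of (a, b) index pairs of songs embedded as known-similar

    For each pair (a, b), find the rank of b among a's nearest
    neighbours (Euclidean distance, assumed metric) and average the
    normalised ranks. Lower values indicate the feature set places
    known-similar songs closer together.
    """
    n = len(features)
    ranks = []
    for a, b in pairs:
        # Distances from song a to every song in the database
        d = np.linalg.norm(features - features[a], axis=1)
        order = np.argsort(d)               # nearest first; a itself is rank 0
        rank_of_b = int(np.where(order == b)[0][0])
        ranks.append(rank_of_b / (n - 1))   # normalise to [0, 1]
    return float(np.mean(ranks))
```

Such an index lets candidate feature combinations be scored automatically against the embedded pairs, deferring subjective listening tests to a final validation step.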