Music Genre Categorization in Humans and Machines
Music genre classification is one of the most active tasks in Music Information Retrieval (MIR), and many successful approaches can be found in the literature. Most of them apply machine learning algorithms to audio features computed automatically over a specific database. However, there is no computational model that explains how musical features are combined to yield a genre decision in humans. In this work we present a listening experiment in which audio has been altered to preserve some properties of the music (rhythm, harmony, etc.) while degrading others. The results are compared with a series of state-of-the-art genre classifiers based on these musical properties, and we draw some lessons from that comparison.
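To illustrate the kind of feature-based genre classifier the abstract refers to (this is not the authors' system), here is a minimal nearest-centroid sketch: each track is summarized by a feature vector, each genre by the mean of its training vectors, and a new track is assigned to the nearest centroid. All feature values and the two genre labels are purely hypothetical.

```python
import math

# Hypothetical per-track feature vectors (e.g. mean spectral centroid,
# tempo estimate, zero-crossing rate); all values invented for illustration.
TRAIN = {
    "classical": [[0.20, 0.30, 0.10], [0.25, 0.28, 0.12]],
    "metal":     [[0.80, 0.75, 0.70], [0.85, 0.72, 0.68]],
}

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

CENTROIDS = {genre: centroid(vs) for genre, vs in TRAIN.items()}

def classify(features):
    """Return the genre whose centroid is nearest in Euclidean distance."""
    return min(CENTROIDS, key=lambda g: math.dist(features, CENTROIDS[g]))

print(classify([0.22, 0.29, 0.11]))  # close to the "classical" centroid
```

Real systems replace the hand-written vectors with features extracted from audio and the nearest-centroid rule with stronger learners (SVMs, ensembles, neural networks), but the pipeline shape — features in, genre label out — is the same.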