AES E-Library

Generative Modeling of Metadata for Machine Learning Based Audio Content Classification

Automatic content classification is an essential tool in multimedia applications. Current research on audio-based classifiers looks at short- and long-term analysis of signals, using both temporal and spectral features. In this paper we present a neural network that classifies content as movie (cinematic, TV shows), music, or voice using metadata contained in the audio/video stream. Towards this end, statistical models of the various metadata are created, since a large metadata dataset is not available. Subsequently, synthetic metadata are generated from these statistical models, and the synthetic metadata are input to the ML classifier as feature vectors. The resulting classifier is then able to classify real-world content (e.g., YouTube) from real-world metadata with an accuracy of approximately 90% and very low latency (approximately 7 ms on average).
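The pipeline the abstract describes (fit statistical models of metadata per class, sample synthetic feature vectors from them, then train a classifier on the synthetic data) can be sketched as follows. This is an illustrative sketch only: the feature set, the Gaussian models, and all numeric values below are invented for demonstration, and a nearest-centroid classifier stands in for the paper's neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-class statistical models of stream metadata
# (feature order: bitrate in kbps, sample rate in kHz, channel count).
# All means/stds are invented for illustration, not taken from the paper.
class_models = {
    "movie": {"mean": np.array([448.0, 48.0, 6.0]), "std": np.array([64.0, 0.5, 1.0])},
    "music": {"mean": np.array([256.0, 44.1, 2.0]), "std": np.array([48.0, 0.5, 0.2])},
    "voice": {"mean": np.array([64.0, 16.0, 1.0]),  "std": np.array([16.0, 4.0, 0.2])},
}
labels = list(class_models)

def sample_synthetic(model, n):
    """Draw n synthetic metadata vectors from a per-class Gaussian model."""
    return rng.normal(model["mean"], model["std"], size=(n, len(model["mean"])))

# Generate a synthetic training set from the statistical models.
X = np.vstack([sample_synthetic(m, 500) for m in class_models.values()])
y = np.repeat(np.arange(len(labels)), 500)

# Minimal stand-in for the ML classifier: nearest class centroid.
centroids = np.array([X[y == k].mean(axis=0) for k in range(len(labels))])

def classify(x):
    """Return the label whose synthetic-data centroid is closest to x."""
    return labels[int(np.argmin(np.linalg.norm(centroids - x, axis=1)))]

# Classify a "real-world" metadata vector (hypothetical low-bitrate mono stream).
print(classify(np.array([60.0, 16.0, 1.0])))  # prints "voice"
```

The key design point mirrored here is that the classifier never sees real metadata during training; it is trained entirely on samples drawn from the statistical models, and real metadata appears only at inference time.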



The Engineering Briefs at this Convention were selected on the basis of a submitted synopsis, ensuring that they are of interest to AES members, and are not overly commercial. These briefs have been reproduced from the authors' advance manuscripts, without editing, corrections, or consideration by the Review Board. The AES takes no responsibility for their contents. Paper copies are not available, but any member can freely access these briefs. Members are encouraged to provide comments that enhance their usefulness.
