AES E-Library

Musical Instrument Tagging Using Data Augmentation and Effective Noisy Data Processing

This paper describes a promising method for an automatic musical instrument tagging system based on neural networks. Signal processing methods that extract information automatically have potential utility in many applications, such as searching multimedia by its audio content, building context-aware mobile applications, and pre-processing for automatic mixing systems. The last of these, however, requires substantial research before real musical instruments can be recognized reliably in recordings. This research focuses on obtaining data for efficiently training, validating, and testing a deep-learning model by means of a data augmentation technique. The data are transformed into 2D feature spaces, i.e., mel-scale spectrograms. The neural network used in the experiments consists of a single-block DenseNet architecture and a multihead softmax classifier, which enables efficient learning with the mixup augmentation. To cope with automatically labeled noisy data, batch-wise loss masking, which is robust against corrupting outliers in the data, was applied. The method achieves promising recognition scores even on real-world recordings that contain noisy data.

1D/2D Deep CNNs vs. Temporal Feature Integration for General Audio Classification
Lazaros Vrysis, Nikolaos Tsipas, Iordanis Thoidis, and Charalampos Dimoulas
p. 66

Semantic audio analysis has become a fundamental task in modern audio applications, making the improvement and optimization of classification algorithms a necessity. Standard frame-based audio classification methods have been optimized, and modern approaches introduce engineering methodologies that capture the temporal dependency between successive feature observations, following the process of temporal feature integration. Moreover, the deployment of convolutional neural networks has defined a new era in semantic audio analysis. This paper attempts a thorough comparison between standard feature-based classification strategies, state-of-the-art temporal feature integration tactics, and 1D/2D deep convolutional neural network setups on typical audio classification tasks. The experiments focus on optimizing a lightweight configuration for convolutional network topologies on a Speech/Music/Other classification scheme that can be deployed in various audio information retrieval tasks, such as voice activity detection, speaker diarization, or speech emotion recognition. The main target of this work is the establishment of an optimized protocol for constructing deep convolutional topologies for general audio detection and classification schemes, minimizing complexity and computational needs.
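
Both abstracts build on 2D time-frequency inputs. As an illustration of the mel-scale spectrogram front end mentioned in the first abstract, the following is a minimal sketch using librosa; the sample rate, FFT size, hop length, and number of mel bands are illustrative assumptions, not parameters reported by either paper.

```python
import librosa
import numpy as np

def mel_spectrogram(path, sr=22050, n_fft=2048, hop_length=512, n_mels=128):
    """Load an audio file and convert it to a log-scaled mel spectrogram."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )
    # Log scaling compresses the dynamic range, as is usual for CNN inputs.
    return librosa.power_to_db(mel, ref=np.max)  # shape: (n_mels, frames)
```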
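
The mixup augmentation cited in the first abstract is the published technique of training on convex combinations of pairs of examples and their labels (Zhang et al., 2018). A minimal NumPy sketch of the idea follows; the alpha value is an assumption, and this is not the authors' implementation.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Blend a batch with a shuffled copy of itself.

    x: feature batch, e.g. mel spectrograms of shape (batch, mels, frames)
    y: one-hot labels of shape (batch, classes)
    alpha: Beta-distribution parameter (assumed value, not from the paper)
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))      # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix
```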
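
Batch-wise loss masking is described only at a high level in the first abstract. One plausible reading, assumed here, is that the highest-loss samples in each batch are treated as label-noise outliers and excluded from the gradient; the keep_ratio parameter below is hypothetical.

```python
import torch
import torch.nn.functional as F

def masked_cross_entropy(logits, targets, keep_ratio=0.9):
    """Cross-entropy that drops the highest-loss samples in the batch.

    keep_ratio is a hypothetical hyperparameter; the paper's actual
    masking rule and threshold are not given in the abstract.
    """
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    k = max(1, int(keep_ratio * per_sample.numel()))
    kept, _ = torch.topk(per_sample, k, largest=False)  # k smallest losses
    return kept.mean()
```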
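
For the second paper, the difference between 1D and 2D convolutional setups can be made concrete: a 1D network treats the mel bins as input channels and convolves along time only, whereas a 2D network treats the spectrogram as a single-channel image and convolves along both axes. A small PyTorch shape check under assumed input dimensions:

```python
import torch
import torch.nn as nn

# Assumed input: a batch of log-mel spectrograms, shape (batch, n_mels, frames).
x = torch.randn(8, 128, 256)

# 1D setup: mel bins act as input channels; convolution runs along time.
conv1d = nn.Conv1d(in_channels=128, out_channels=64, kernel_size=5, padding=2)
print(conv1d(x).shape)               # torch.Size([8, 64, 256])

# 2D setup: the spectrogram is a one-channel image; convolution runs along
# both time and frequency.
conv2d = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=3, padding=1)
print(conv2d(x.unsqueeze(1)).shape)  # torch.Size([8, 64, 128, 256])
```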

Authors:
Affiliation:
JAES Volume 68 Issue 1/2 pp. 57-65
Publication Date: January 2020
Permalink: https://www.aes.org/e-lib/browse.cfm?elib=20718

E-Library Location:

DOI:
