AES E-Library

Using Neural Networks to Compute Time Offsets from Musical Instruments

This research proposes an approach for computing the time offsets between audio sequences that contain musical sounds from different instruments recorded in a distributed way, and whose features are too weak to serve as alignment points. Transformations must therefore be applied to obtain a set of distinctive features from which the offset values can be computed reliably. The main difficulty in such a system is nonlinearity: the delay cannot be predicted by a linear function. To address this, the authors propose a stack of long short-term memory (LSTM) layers forming a neural network model capable of learning such feature transformations in a supervised fashion, trained with a gradient-descent optimizer. The paper also demonstrates the use of a recurrence matrix to extract timing information from the transformed features at the network output. With this approach, the algorithm correctly classifies up to 60% of a specific combination from the MedleyDB data set, and reduces the search space to five candidates with accuracy up to 90% while maintaining a precision of 10 ms. This performance is equal to or better than state-of-the-art methods.
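As an illustration of the pipeline the abstract describes, the sketch below (Python/PyTorch) pairs a small stacked-LSTM feature transform with a cross-recurrence matrix whose dominant diagonal yields the time offset. It is not the authors' implementation: the feature dimensionality, layer sizes, similarity threshold, 10 ms hop, and the diagonal-voting heuristic are all assumptions made for demonstration.

```python
# Minimal sketch (assumed architecture, not the paper's exact model):
# an LSTM stack maps weak per-frame features to more distinctive embeddings,
# and a cross-recurrence matrix between two embedded sequences is scanned
# for its most populated diagonal to estimate the time offset.

import numpy as np
import torch
import torch.nn as nn


class FeatureTransform(nn.Module):
    """Stacked LSTM that turns raw frame features into alignment embeddings."""

    def __init__(self, n_in=40, n_hidden=64, n_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_in, n_hidden, num_layers=n_layers, batch_first=True)

    def forward(self, x):          # x: (batch, frames, n_in)
        out, _ = self.lstm(x)      # out: (batch, frames, n_hidden)
        return out


def cross_recurrence(a, b, threshold=0.8):
    """Binary cross-recurrence matrix from cosine similarity of two embeddings."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return (a @ b.T) >= threshold   # shape: (frames_a, frames_b)


def estimate_offset(rec, hop_s=0.010):
    """Offset (seconds) of the most populated diagonal of the recurrence matrix."""
    n_a, n_b = rec.shape
    counts = [np.trace(rec, offset=k) for k in range(-n_a + 1, n_b)]
    best = int(np.argmax(counts)) - (n_a - 1)
    return best * hop_s


if __name__ == "__main__":
    model = FeatureTransform()
    # Two toy feature sequences; the second is a circularly shifted copy of the
    # first (25 frames, roughly 250 ms at the assumed 10 ms hop).
    x = torch.randn(1, 300, 40)
    y = torch.roll(x, shifts=25, dims=1)
    with torch.no_grad():
        ea = model(x)[0].numpy()
        eb = model(y)[0].numpy()
    rec = cross_recurrence(ea, eb)
    print(f"estimated offset: {estimate_offset(rec):.3f} s")
```

In the paper's setting the embeddings would come from the trained network rather than a random one, and the recurrence matrix is what turns frame-level similarity into a single offset estimate at 10 ms resolution.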

JAES Volume 68 Issue 3 pp. 157-167; March 2020
Permalink: https://www.aes.org/e-lib/browse.cfm?elib=20726
