A method for transferring the expressive musical nuances of real recordings to a MIDI-synthesized version was successfully demonstrated. Three features (dynamics, tempo, and articulation) were extracted from the recordings and then applied to the MIDI note list in order to reproduce the performer's style. Subjective evaluation showed that the retargeted music sounds natural and close to the original performance, and statistical tests confirmed that the output correlated more strongly with the original than with other renditions. The method can also distinguish among different performance styles, making it applicable to a variety of uses.
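The core idea of applying extracted dynamics, tempo, and articulation to a MIDI note list can be illustrated with a minimal sketch. This is not the paper's implementation: the `Note` representation and the per-note scale factors (`tempo_scale` for local timing stretch, `vel_scale` for loudness, `artic_scale` for note length) are hypothetical simplifications of the three extracted features.

```python
from dataclasses import dataclass

@dataclass
class Note:
    onset: float      # start time in seconds
    duration: float   # length in seconds
    pitch: int        # MIDI note number
    velocity: int     # MIDI velocity, 1-127

def retarget(notes, tempo_scale, vel_scale, artic_scale):
    """Apply per-note expressive parameters to a flat MIDI note list.

    tempo_scale[i] stretches the inter-onset interval preceding note i,
    vel_scale[i] scales its loudness, artic_scale[i] scales its duration.
    """
    out = []
    t = 0.0          # running retargeted time
    prev_onset = 0.0  # previous nominal onset
    for i, n in enumerate(notes):
        # Warp local timing: stretch the gap since the previous note.
        t += (n.onset - prev_onset) * tempo_scale[i]
        prev_onset = n.onset
        out.append(Note(
            onset=t,
            duration=n.duration * artic_scale[i],
            pitch=n.pitch,
            velocity=max(1, min(127, round(n.velocity * vel_scale[i]))),
        ))
    return out
```

For example, retargeting a two-note list with a 1.2x stretch, a crescendo, and staccato on the second note shifts its onset later, raises its velocity, and halves its duration, while the first note is unchanged.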