Retargeting Expressive Musical Style from Classical Music Recordings Using a Support Vector Machine
A method for transferring the expressive nuances of real recordings to a MIDI-synthesized version was demonstrated. Three features (dynamics, tempo, and articulation) were extracted from the recordings and applied to the MIDI note list to reproduce the performer's style. In subjective tests, listeners judged the retargeted music natural and close to the original performance, and statistical tests confirmed that the output correlated more strongly with the original recording than with other sources. The method can also distinguish among different performers' styles, and the approach lends itself to a variety of applications.
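To make the retargeting idea concrete, the following is a minimal sketch (not the paper's implementation) of applying per-note expressive parameters to a quantized note list. The `Note` structure and the three parameter streams (`dyn` for velocity scaling, `tempo` for local time stretching, `artic` for duration ratios) are hypothetical names chosen for illustration; the paper's actual feature extraction and SVM-based style modeling are not shown.

```python
from dataclasses import dataclass, replace
from typing import List

@dataclass
class Note:
    onset: float      # seconds, quantized (deadpan) timing
    duration: float   # seconds
    pitch: int        # MIDI note number
    velocity: int     # MIDI velocity, 1-127

def retarget(notes: List[Note], dyn, tempo, artic) -> List[Note]:
    """Apply per-note expressive parameters to a quantized note list.

    dyn    -- velocity scale factors (dynamics)
    tempo  -- local tempo factors; >1 stretches the inter-onset interval
    artic  -- duration ratios (articulation); <1 shortens notes (staccato)
    """
    out = []
    t = 0.0                                   # running expressive time
    prev_onset = notes[0].onset if notes else 0.0
    for note, d, s, a in zip(notes, dyn, tempo, artic):
        t += (note.onset - prev_onset) * s    # stretch this inter-onset gap
        prev_onset = note.onset
        out.append(replace(
            note,
            onset=t,
            duration=note.duration * s * a,   # tempo, then articulation
            velocity=max(1, min(127, round(note.velocity * d))),
        ))
    return out
```

For example, a second note with tempo factor 2.0 arrives twice as late relative to its predecessor, and an articulation factor of 0.5 halves its sounded duration, mimicking a ritardando with staccato touch.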