Musical Eliza: An Automatic Musical Accompaniment System Based on Expressive Feature Analysis
We propose an interactive algorithm that musically accompanies a performer by matching expressive feature patterns to existing archive recordings. For each accompaniment segment, multiple realizations with different musical characteristics are performed by master performers and recorded. Musical expressive features are extracted from each recorded realization, and its semantic analysis is obtained using a musical expressive language model. While the user performs, the system extracts and analyzes expressive features in real time and plays back the accompaniment track from the archive database that best matches the expressive feature pattern. By creating a sense of musical correspondence, the proposed system provides an engaging interactive musical communication experience and supports versatile entertainment and pedagogical applications.
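The matching step described above can be sketched as a nearest-neighbor lookup: each archived realization is summarized by an expressive-feature vector, and the live performance's features select the closest archive entry for playback. The feature set (tempo, loudness, articulation) and all names below are illustrative assumptions, not the authors' implementation.

```python
from math import dist

# Hypothetical archive: each recorded realization of an accompaniment
# segment is summarized by an expressive-feature vector, here
# (tempo in BPM, normalized loudness, legato ratio). Values are invented.
archive = {
    "segment1_calm":      (72.0, 0.35, 0.60),
    "segment1_energetic": (96.0, 0.80, 0.30),
    "segment1_lyrical":   (80.0, 0.50, 0.85),
}

def best_matching_track(live_features, archive):
    """Return the archive key whose feature vector is closest
    (Euclidean distance) to the live performance's features."""
    return min(archive, key=lambda k: dist(archive[k], live_features))

# Features extracted in real time from the user's playing (invented values):
live = (94.0, 0.75, 0.35)
print(best_matching_track(live, archive))  # -> segment1_energetic
```

A real system would replace the Euclidean distance with whatever similarity the expressive language model defines, and would stream the selected recording rather than print its key.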