Separating singing voice from music accompaniment is an appealing but challenging problem, especially in the monaural case. One existing approach, based on computational auditory scene analysis, uses pitch as the cue to resynthesize the singing voice. However, the unvoiced parts of the singing voice are ignored entirely, since they carry no pitch. This paper proposes a method to detect the unvoiced parts of an input signal and to resynthesize them without using pitch information. The experimental results demonstrate that the unvoiced parts can be reconstructed successfully, with a signal-to-noise ratio 3.28 dB higher than that achieved by the current state-of-the-art method in the literature.