Many audio applications use time-frequency representations such as the Gabor and wavelet transforms. For such applications, the signal representation is often required to match human auditory perception and to allow reconstruction. To this end, this paper presents the results of a psychoacoustical experiment on auditory time-frequency masking, using stimuli with maximal concentration in the time-frequency plane. These new data constitute a crucial basis for predicting auditory masking in time-frequency representations of sound signals. An algorithm is proposed that removes the inaudible components in the wavelet transform of a sound while causing no audible difference from the original sound after re-synthesis. Preliminary results are promising, although further development is required.
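As a rough illustration of the kind of processing the abstract describes, the sketch below applies a single-level Haar wavelet transform, zeroes coefficients whose magnitude falls below a fixed threshold standing in for a masking threshold, and re-synthesizes the signal. This is a hypothetical simplification, not the authors' algorithm: the paper uses psychoacoustically derived time-frequency masking data, whereas the threshold and the Haar decomposition here are illustrative assumptions.

```python
# Hedged sketch: masking-based pruning of wavelet coefficients.
# The fixed `threshold` stands in for a perceptual masking threshold;
# the authors' actual method derives thresholds from psychoacoustical data.

def haar_forward(x):
    # Single-level orthonormal Haar transform: pairwise sums/differences.
    s = 2 ** -0.5
    approx = [s * (a + b) for a, b in zip(x[0::2], x[1::2])]
    detail = [s * (a - b) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    # Exact inverse of haar_forward (perfect reconstruction).
    s = 2 ** -0.5
    x = []
    for a, d in zip(approx, detail):
        x.append(s * (a + d))
        x.append(s * (a - d))
    return x

def remove_masked(coeffs, threshold):
    # Zero coefficients below the (assumed) masking threshold,
    # treating such components as inaudible.
    return [c if abs(c) >= threshold else 0.0 for c in coeffs]

signal = [0.9, 1.0, 0.95, 1.05, 0.02, -0.01, 0.0, 0.03]
approx, detail = haar_forward(signal)
detail = remove_masked(detail, threshold=0.1)
resynth = haar_inverse(approx, detail)
```

Because the Haar transform is orthonormal, re-synthesis is exact when no coefficients are removed; pruning sub-threshold coefficients then changes the output only by components assumed to be masked.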