AES E-Library

Fusing Grouping Cues for Partials Using Artificial Neural Networks

The primary stages of auditory stream separation are modelled here as a bottom-up organisation of primitive spectro-temporal regions into composites, which provide a succinct description of the sound environment in terms of a limited number of salient sound events. Many years of research on auditory streaming have established a number of qualitative relationships between physical attributes of sounds, or cues, and the extent to which stream segregation or fusion of simple test stimuli occurs in listening tests. However, the relative importance of these cues is difficult to determine, especially in natural sound environments. This work presents some exploratory stages in using an artificial neural network to learn how to integrate multiple cues in a nonlinear manner for sound object formation. As a precursor to a more complex auditory front-end, a sinusoidal tracking algorithm was used to obtain the initial set of "spectro-temporal regions", or partial trajectories.
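The abstract describes a pipeline in which partial trajectories from a sinusoidal tracker are compared on a set of grouping cues, and a neural network fuses those cues nonlinearly into a grouping decision. The Python sketch below is a minimal illustration of that general idea, not the paper's implementation: the cue set (harmonicity, onset synchrony, frequency co-modulation), the network architecture, and the untrained weights are all assumptions made for the example.

import numpy as np


def grouping_cues(track_a, track_b):
    """Compute a cue vector for a pair of partial trajectories.

    Each track is a dict with 'freq' (Hz per frame) and 'onset' (seconds).
    The three cues below are common primitive grouping cues; the exact cue
    set used in the paper may differ.
    """
    f_a = float(np.mean(track_a["freq"]))
    f_b = float(np.mean(track_b["freq"]))

    # Harmonicity: how close the frequency ratio is to a small integer ratio.
    ratio = max(f_a, f_b) / min(f_a, f_b)
    harmonicity = 1.0 - min(abs(ratio - round(ratio)), 0.5) / 0.5

    # Onset synchrony: partials that start together tend to fuse.
    onset_sync = float(np.exp(-abs(track_a["onset"] - track_b["onset"]) / 0.05))

    # Frequency co-modulation: correlation of frame-to-frame frequency changes.
    n = min(len(track_a["freq"]), len(track_b["freq"]))
    df_a = np.diff(track_a["freq"][:n])
    df_b = np.diff(track_b["freq"][:n])
    comod = 0.0
    if df_a.std() > 0 and df_b.std() > 0:
        comod = float(np.corrcoef(df_a, df_b)[0, 1])

    return np.array([harmonicity, onset_sync, comod])


class CueFusionNet:
    """Tiny multilayer perceptron mapping a cue vector to a fusion score."""

    def __init__(self, n_cues=3, n_hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.5, size=(n_cues, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(scale=0.5, size=(n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, cues):
        h = np.tanh(cues @ self.w1 + self.b1)      # hidden layer: nonlinear cue interaction
        z = (h @ self.w2 + self.b2).item()         # single output logit
        return 1.0 / (1.0 + np.exp(-z))            # fusion score in (0, 1)


if __name__ == "__main__":
    # Two synthetic partials: a fundamental and its second harmonic with a
    # shared onset and common vibrato, which should yield high cue values.
    t = np.arange(100)
    vibrato = 3.0 * np.sin(2 * np.pi * t / 25)
    p1 = {"freq": 220.0 + vibrato, "onset": 0.10}
    p2 = {"freq": 440.0 + 2 * vibrato, "onset": 0.11}

    net = CueFusionNet()  # untrained weights, for illustration only
    print("cues:", grouping_cues(p1, p2))
    print("fusion score:", net.forward(grouping_cues(p1, p2)))

In a trained version of such a sketch, the network weights would be fitted to labelled pairs of partials (same source vs. different sources), which is how the relative importance of the individual cues could be learned rather than hand-tuned.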

Authors:
Affiliation:
AES Conference:
Paper Number:
Publication Date:
Subject:

E-Library Location:
