Sonification is the systematic representation of data by sound; examples include text-to-speech systems, color readers, Geiger counters, acoustic radars, and MIDI synthesizers. This paper surveys existing sonification systems and proposes a taxonomy of algorithms and devices. The sonification process requires an artificial mapping between two sensory modalities, built on a model based either on psychoacoustics or on artificial heuristics. In the former, the paradigm exploits the listener's natural discrimination of a source's spatial parameters (for instance, distance, azimuth, and elevation). In the latter, the paradigm creates an artificial match between graphical and auditory cues. Artificial sonification uses nonspatial characteristics of the sound, such as frequency, brightness or timbre, formants, saturation, and time intervals, which are unrelated to the physical characteristics of the objects or surroundings being represented.
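The artificial-mapping paradigm described above can be illustrated with a minimal sketch (this is not taken from the paper; the function names, data series, and frequency range are illustrative assumptions): a linear mapping from data values into an audible frequency band, quantised to MIDI note numbers for playback on a synthesizer.

```python
import math

def value_to_frequency(value, vmin, vmax, f_lo=220.0, f_hi=880.0):
    """Linearly map a data value into an audible frequency band (artificial mapping)."""
    t = (min(max(value, vmin), vmax) - vmin) / (vmax - vmin)  # normalise to [0, 1]
    return f_lo + t * (f_hi - f_lo)

def frequency_to_midi(freq):
    """Quantise a frequency to the nearest MIDI note number (A4 = 440 Hz = note 69)."""
    return round(69 + 12 * math.log2(freq / 440.0))

# Sonify a short, hypothetical data series as a sequence of MIDI pitches.
series = [0, 2, 5, 7, 10]
notes = [frequency_to_midi(value_to_frequency(v, 0, 10)) for v in series]
# → [57, 65, 73, 77, 81]
```

Here rising data values become rising pitch, a nonspatial auditory cue; a psychoacoustic mapping would instead vary spatial cues such as azimuth or distance.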