In This Section
Journal of the AES
2012 July/August - Volume 60 Number 7/8
Guest Editors’ Note: Special Issue on Auditory Display
The way in which auditory displays are presented can significantly influence the experience of listeners. Spatio-temporal factors, including presentation speed, stimulus externalization, and location in three-dimensional space, influence and redirect attention toward such displays. Stimuli perceived inside the head elicited faster and more accurate responses than externalized stimuli. Among externalized auditory displays, those presented in the frontal hemisphere were attended to faster. Response times did not change linearly with presentation speed; there was an optimal presentation rate at which the response time was fastest without compromising accuracy.
When a visual menu, such as one in a computer interface, must be presented with sound, the question arises of how best to communicate that an item is unavailable. This question was explored in three studies, which showed that using a whispered voice for unavailable items was favored over an attenuated voice, saying “unavailable,” or skipping such items. In general, participants preferred a female voice over a male voice. Results are discussed in terms of acoustic theory, cognitive menu-selection theory, and user-interface accessibility. The authors assert that designers should go beyond a naïve translation from text into speech when creating auditory systems, thereby improving subjective satisfaction.
The author reviews the requirements and design approaches for auditory displays in flight decks and air traffic control workstations as part of NextGen, the major reconfiguration of the National Airspace System. The design philosophy includes: filtering information to reduce the burden on mental resources; integrating, categorizing, and prioritizing information to enable mentally economic processing; and embedding information in alerting signals to facilitate efficient allocation of attention. The quality of an auditory display can be viewed in terms of how each message hierarchically satisfies four general principles: detection, discrimination, intelligibility, and familiarity.
Sonification, the conversion of physical data (temperature, network traffic, or an electrocardiogram) into sound, is well suited to augmenting task monitoring. The challenge for the sound designer is to create a monitoring display that maximizes the information conveyed while remaining unobtrusive. In this article an auditory augmentation approach was developed for monitoring evolutionary optimization algorithms. To augment short interaction sounds effectively with information about the high-dimensional search space, a sonification method known as data sonograms was applied. Various mappings of data to sound features were investigated; the delay of the signal proved to be an interesting parameter affecting both unobtrusiveness and the information conveyed.
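The parameter-mapping idea underlying such displays can be illustrated with a minimal sketch; this is not the article's data-sonogram method, and the function names, frequency range, and example data here are hypothetical. Each data value is mapped linearly onto a pitch and rendered as a short sine tone:

```python
import math

def map_to_frequency(values, f_min=220.0, f_max=880.0):
    """Linearly map each data value to a pitch between f_min and f_max (Hz)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

def synthesize(freqs, sr=8000, dur=0.1):
    """Render each frequency as a short sine tone and concatenate the tones."""
    samples = []
    for f in freqs:
        n = int(sr * dur)
        samples.extend(math.sin(2 * math.pi * f * i / sr) for i in range(n))
    return samples

# Hypothetical data stream, e.g. temperature readings
temps = [18.0, 19.5, 23.0, 21.0]
freqs = map_to_frequency(temps)
signal = synthesize(freqs)
```

Richer mappings of the same kind, for example using onset delay as the article does, vary additional parameters per tone rather than pitch alone.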
The means by which audio was delivered (headphones or loudspeakers) in a shared workplace environment influenced the dynamics of collaborations. In an experiment designed to show the influence of audio delivery, pairs of sighted individuals used audio as the sole means for communicating with one another while editing a shared diagram. The choice of working style affects how collaborators attend to the sounds present in a collaborative space, which in turn influences how they structure and organize their interactions. That in turn determines which information is relevant, dynamically changing according to how collaborators choose to work with sounds. Another conclusion was that the mere physical presence of audio in a shared space does not necessarily imply that it is being attended to by those hearing it.
Because elite athletes require an unconscious and automated sense of time, and because sound is especially appropriate for conveying timing information, acoustic feedback can be especially useful in the training of rowers. In the context of human movement, rhythm is a time-accurate sequence of motor actions, and rhythm and synchronization are inseparable. An auditory feedback signal based on boat acceleration helps rowers control their activities, and this sonified data can be stored in an audio file for later training and analysis. The improved sensitivity to the time-critical nature of the rowing cycle yielded improved synchronization among the crew, as well as an improvement in individual athletes’ rowing technique.
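The idea of storing sonified boat acceleration for later replay can be sketched as follows. This is a minimal illustration, not the system described in the article; the pitch-mapping constants, sample values, and file name are assumptions. Acceleration is mapped to pitch and the result written to a standard WAV file using Python's standard library:

```python
import math
import struct
import wave

def write_sonified(acc, path, sr=8000, dur=0.05, f_base=300.0, f_per_unit=60.0):
    """Map a boat-acceleration series (m/s^2) to pitch and save the result
    as a mono 16-bit WAV file for later replay and analysis."""
    frames = bytearray()
    for a in acc:
        f = f_base + f_per_unit * a  # higher acceleration -> higher pitch
        for i in range(int(sr * dur)):
            s = int(32767 * 0.5 * math.sin(2 * math.pi * f * i / sr))
            frames += struct.pack('<h', s)  # 16-bit little-endian PCM sample
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sr)
        w.writeframes(bytes(frames))

# Hypothetical stroke cycle: drive phase (positive), recovery (negative)
write_sonified([1.2, 2.5, 1.8, -0.5, -1.0], 'stroke.wav')
```

The resulting file can be replayed alongside video or telemetry when reviewing a training session.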
Both visually impaired individuals and sighted individuals performing tasks in divided-attention situations benefit from using sound to display information typically communicated visually. Auditory displays of statistical graphs are one such tool and can be effective in these situations. However, it is not obvious how these graphs should be designed. In a series of experiments in which information was conveyed by sound, subjects were divided into two groups: those hearing graphs that used integral dimensions of sound (pitch and loudness) and those hearing graphs that used separable dimensions (pitch and timing). The results showed that pitch alone produced the worst performance and timing the best. However, designs using pitch and loudness redundantly provided good results as well.
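A redundant pitch-plus-loudness encoding of the kind that performed well can be sketched as below. This is a hypothetical mapping, not the stimuli used in the experiments; the function name and the frequency and amplitude ranges are assumptions. Each data point drives pitch and loudness together, so the two cues reinforce one another:

```python
def auditory_graph(points, f_min=200.0, f_max=1000.0, a_min=0.2, a_max=1.0):
    """Map each data point redundantly to BOTH pitch (Hz) and amplitude (0..1),
    so the listener receives the same trend through two cues at once."""
    lo, hi = min(points), max(points)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    tones = []
    for v in points:
        t = (v - lo) / span  # normalized position of this point, 0..1
        tones.append((f_min + t * (f_max - f_min),
                      a_min + t * (a_max - a_min)))
    return tones

# Hypothetical data series to be "plotted" in sound
tones = auditory_graph([0.0, 5.0, 10.0])
```

Each `(frequency, amplitude)` pair would then be rendered as one tone in sequence, rising in both pitch and loudness as the data rise.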
In order for blind people to better use personal computers, an auditory virtual environment can be used to present information that might otherwise be available only with vision. Auditory objects can be spatially placed in the virtual environment only if the user can successfully identify their location. In contrast to sighted subjects, blind subjects were better at detecting movements in the horizontal plane around the head, localizing static frontal audio sources, and orienting themselves in a 2-D virtual audio display. On the other hand, sighted subjects performed better at identifying ascending sound sources in the vertical plane and detecting static sources in the back.
Wearable Sensor-Based Real-Time Sonification of Motion and Foot Pressure in Dance Teaching and Training
As with other tasks involving motion and gesture, dance teaching can take advantage of auditory displays that map specific dance steps onto acoustic counterparts. Wearable sensors drive acoustic “fingerprints” that accompany the dance movements in real time. This kind of audio feedback has a positive influence on motor movement and perception. For example, joint angles, weight distribution, and the energy of jumps are easily recognized through sound. With practice, a student can hear whether a complex movement was correctly executed. The auditory system can perceive complex patterns of rapid motion, especially aspects of a dance that are not easily seen.
Standards and Information Documents
AES Standards Committee News
Digital audio interfaces; metadata; audio connectors; microphone measurement and characterization
132nd Convention Report, Budapest
133rd Convention Preview, San Francisco
133rd Convention Exhibitor Previews
Leopold Stokowski was active as a recording artist from 1917 until 1977—virtually the entire period of the recording of music by analog technology. Robert Auld’s multimedia presentation at the 131st Convention paid special attention to Leopold Stokowski’s involvement in the development of multichannel sound recording, including his collaboration with Bell Labs starting in 1932, his work with Walt Disney for the film Fantasia, and his encouragement of recording in quadraphonic sound in the 1970s.
New Officers 2012/2013
132nd Convention Papers Order Form