Saturday, June 14, 1:30 PM - 4:30 PM
Getting Out of the Box: Effective Design Practices for Audio on the Net
Maribeth Back, Xerox PARC, Palo Alto, California, USA
Geoff Brown, Silicon Graphics, Mountain View, California, USA
Michael Albers, Sun Microsystems, Cupertino, California, USA
Elizabeth Wenzel, NASA, Moffett Field, California, USA
David Thiel, Microsoft Research, Redmond, Washington, USA
Maribeth Back: Sound design principles that work well in one environment often fail when transplanted to another (e.g., television techniques applied to live theatre). This happens when design parameters are not adjusted to fit the new environment. Research in psychoacoustics and cognitive psychology provides guidelines for auditory resolution and perceived affect, as does design research and development through prototyped systems. A design aesthetic and set of methodologies for sound in interactive systems arises from combining what research tells us about human perceptual mechanisms with what we know about the cultural mechanisms surrounding context and content.
Geoff Brown: VRML (the Virtual Reality Modeling Language) now provides a means to deliver interactive spatial audio experiences across the Internet. However, the constraints of the VRML language and of net bandwidth demand that VRML authors use sound very carefully. I will discuss effective techniques for using sound in VRML, including sound formats, compression, and streaming, and will suggest some directions for improving sound transmission for VRML.
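For context, the careful use of sound Brown describes shows up concretely in the VRML97 Sound node, which scopes each sampled sound with attenuation ellipsoids and a spatialize switch. A minimal sketch follows; the file name and field values are illustrative, not from the panel:

```vrml
#VRML V2.0 utf8
# A looping ambient sound placed at the origin.
# minFront/maxFront and minBack/maxBack define the ellipsoids
# within which the sound plays at full and at falling intensity.
Sound {
  source AudioClip {
    url "ambience.wav"    # hypothetical file; uncompressed WAV keeps decode cost low
    loop TRUE
  }
  location  0 0 0
  direction 0 0 1
  intensity 0.8
  minFront 10
  maxFront 50
  minBack  10
  maxBack  50
  spatialize TRUE         # FALSE saves CPU for non-localized ambience
}
```

Keeping the ellipsoids small limits how many sounds are audible (and mixed) at once, which is one way an author can economize on both bandwidth and run-time processing.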
Michael Albers: New communication technologies mimic attributes of conventional media until they become established in mainstream culture. For example, current Internet audio technologies, such as streaming audio and net-based telephones, present well-known metaphors to their users. As the Internet expands and becomes embedded in everyday life, these new, ubiquitous technologies will empower unique interactions between users and computers.
Elizabeth M. Wenzel: Auditory complexity, freedom of movement, and interactivity are not always achievable even in a "true" virtual environment, much less in web-based audio. However, many of the perceptual and engineering constraints (and frustrations) that researchers, engineers, and listeners have experienced in virtual audio are relevant to spatial audio on the web. My talk will discuss some of these engineering constraints and their perceptual consequences, and attempt to relate these issues to implementation on the web.
David Thiel: Good sound for interaction requires late binding: the final audio result needs to be determined as close to the moment of interaction as possible. The design goals of interactive audio are the same as in traditional postproduction, but much of the work traditionally done in post must instead happen at run time, with only the facilities available on the run-time platform. Sound artifacts for interaction are synchronized, layered, mixed, and spatialized based on behavior that the interactive sound engineer authors.
Primary Discovery Questions:
- Why do we find certain kinds of sound design appropriate to some environments, but not others? For example, music that's appropriate for television will seem overblown and manipulative in a theatrical production. How does this observation translate to Net audio?
- What perceptual or cognitive affordances are not yet being used to advantage in Net audio? For example, how could spatialization be used more effectively, and what will it take to implement it in the home or office environment?
- We've noted that right now on the Net we're largely just replicating older forms of media, while taking some baby steps towards new ones. What do you expect we might be hearing on the Net in ten years? Twenty? Fifty?
- What makes an effective audio artifact?
- What are the significant differences between games, toys, instruments, and tools? What role does audio tend to play in these?
- How does audio on the Internet differ from audio in more traditional media?
- Speculate on the kinds of tools that you'd really like to have for building Web audio.
- Expound on the most frustrating features of the tools you currently have for building Web audio.
- Can we identify design principles for the construction of sound in the interactive and multimodal artifacts and environments we are now developing?
- How do cultural differences affect the development of Web-based audio artifacts?
- As we move between different media in our daily lives, what assumptions do we carry that enable us to correctly interpret the information we find and the interaction we are expected to perform? How can we make artifacts that exploit these assumptions?
Maribeth Back, Geoff Brown, Michael Albers, Elizabeth M. Wenzel, David Thiel
MARIBETH BACK draws upon her professional background in audio recording and theatre to create sound designs for interactive installations, museums, live theatre, radio, CD-ROMs, and computer-based environments. In 1996, she was sound designer and a primary performer in Brain Opera, performed at Lincoln Center and Ars Electronica. Also in 1996, she earned a doctorate from the Harvard Graduate School of Design. Her current research at Xerox PARC involves audio design for awareness systems and virtual environments.
GEOFF BROWN has been involved with the Internet since 1974, with interactive multimedia since 1979, and with computer audio since 1984, while working for BBN, Xerox, Apple, Electronic Arts, and MacroMedia. At Silicon Graphics, Geoff has combined those three areas while helping to define and implement the audio part of the VRML specification. He is now developing noisy VRML worlds for web delivery, and recently contributed a chapter on spatialized sound to the book "Late Nite VRML (with Java)".
MICHAEL C. ALBERS is a User Interface Designer in Sun Microsystems' JavaSoft division. His interests include human-computer interaction, auditory interfaces, cognitive science, and the history of technology.
Elizabeth M. (Beth) Wenzel received a Ph.D. in cognitive psychology with an emphasis in psychoacoustics from the University of California, Berkeley in 1984. From 1985-1986 she was a National Research Council post-doctoral research associate at NASA-Ames Research Center working on the auditory display of information for aviation systems. Since 1986 she has been Director of the Spatial Auditory Displays Lab in the Flight Management & Human Factors Division at NASA-Ames, directing development of real-time display technology and conducting basic and applied research in auditory perception and localization in three-dimensional virtual acoustic displays. Her research collaborators in these areas include Dr. Durand Begault of NASA Ames and Dr. Frederic Wightman and Dr. Doris Kistler of the University of Wisconsin-Madison. The Convolvotron 3-D sound system, designed by Scott Foster of Crystal River Engineering, was also developed as part of the Ames 3-D Sound Project. Dr. Wenzel is an Associate Editor of the journal Presence and has published a number of articles and spoken at many conferences on the topic of virtual acoustic environments.
DAVID D. THIEL has been making sound for interaction since 1981, when he joined a Chicago pinball company that was starting its video game division. Since then he has architected a half-dozen hardware/software interactive audio systems and created more than 50 interactive sound tracks for coin-operated video games, pinball machines, redemption products, home game consoles, and disk-based platforms. Four years ago David became a researcher in the User Interface Research Group of Microsoft Research.