AES London 2010
Paper Session P24

Tuesday, May 25, 14:00 — 17:30 (Room C3)

P24 - Innovative Applications


Chair: John Dawson

P24-1 The Serendiptichord: A Wearable Instrument for Contemporary Dance Performance
Tim Murray-Browne, Di Mainstone, Nick Bryan-Kinns, Mark D. Plumbley, Queen Mary University of London - London, UK
We describe a novel musical instrument designed for use in contemporary dance performance. This instrument, the Serendiptichord, takes the form of a headpiece plus associated pods that sense movements of the dancer, together with audio processing software driven by the sensors. Movements such as translating the pods or shaking the trunk of the headpiece select and modify sampled sounds. We discuss how we have closely integrated physical form, sensor choice and positioning, and software to avoid the issues that otherwise arise when the innate physical link between action and sound is broken, leading to an instrument that non-musicians (in this case, dancers) are able to enjoy using immediately.
Convention Paper 8139
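The following minimal sketch illustrates the kind of sensor-to-sound mapping described in the Serendiptichord abstract; it is not the authors' code, and the sample names, thresholds, and mappings are illustrative assumptions.

```python
# Sketch: mapping wearable sensor data to sample selection and modification.
# All thresholds, sample names, and mappings are assumed for illustration.
import math
import random

SAMPLES = ["breath.wav", "strings.wav", "bells.wav"]  # hypothetical sample pool

class MotionMapper:
    def __init__(self, shake_threshold=2.5):
        self.shake_threshold = shake_threshold  # acceleration magnitude in g (assumed)
        self.current_sample = SAMPLES[0]

    def update(self, accel_xyz, pod_position):
        """accel_xyz: headpiece accelerometer reading (g); pod_position: 0..1 value."""
        magnitude = math.sqrt(sum(a * a for a in accel_xyz))
        if magnitude > self.shake_threshold:
            # A strong shake of the headpiece trunk selects a new sample.
            self.current_sample = random.choice(SAMPLES)
        # Translating a pod continuously modifies the sound, e.g. a filter cutoff.
        cutoff_hz = 200.0 + pod_position * 4800.0
        return self.current_sample, cutoff_hz

mapper = MotionMapper()
print(mapper.update((0.1, 3.2, 0.4), pod_position=0.7))
```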

P24-2 A Novel User Interface for Musical Timbre Design
Allan Seago, London Metropolitan University - London, UK; Simon Holland, Paul Mulholland, Open University - UK
The complex and multidimensional nature of musical timbre is a problem for the design of intuitive interfaces for sound synthesis. A useful approach to the manipulation of timbre involves the creation and subsequent navigation or search of n-dimensional coordinate spaces or timbre spaces. A novel timbre space search strategy is proposed based on weighted centroid localization (WCL). The methodology and results of user testing of two versions of this strategy in three distinctly different timbre spaces are presented and discussed. The paper concludes that this search strategy offers a useful means of locating a desired sound within a suitably configured timbre space.
Convention Paper 8140
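As a rough illustration of the weighted centroid localization (WCL) strategy described above, the sketch below assumes that the listener rates how close each probe timbre is to the sound they have in mind and that the next search estimate is the rating-weighted centroid of the probe coordinates; this is an assumed reading of WCL, not the authors' implementation.

```python
# Sketch: weighted centroid localization in an n-dimensional timbre space.
def weighted_centroid(probes, ratings):
    """probes: list of n-dimensional coordinates; ratings: non-negative weights."""
    total = sum(ratings)
    if total == 0:
        raise ValueError("at least one probe must receive a non-zero rating")
    dims = len(probes[0])
    return [
        sum(w * p[d] for p, w in zip(probes, ratings)) / total
        for d in range(dims)
    ]

# Three probe points in a 2-D timbre space (e.g. brightness vs. attack time),
# with the middle probe judged most similar to the target sound.
probes = [(0.1, 0.8), (0.5, 0.5), (0.9, 0.2)]
ratings = [0.2, 0.9, 0.4]
print(weighted_centroid(probes, ratings))  # next search estimate
```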

P24-3 Bi-Directional Audio-Tactile Interface for Portable Electronic Devices
Neil Harris, New Transducers Ltd. (NXT) - Cambridge, UK
When an audio system uses a vibrating screen or casework as the loudspeaker, it can also provide haptic feedback. Just as a loudspeaker may be used reciprocally as a microphone, the haptic feedback aspect of the design may be operated as a touch sensor. This paper considers how to model a basic system embodying these aspects, including the electrical part, with a finite element package. For a piezoelectric exciter, full reciprocal modeling is possible, but for electromagnetic exciters it is not unless multiphysics simulation is supported. For the latter, a model using only lumped-parameter mechanical elements is developed.
Convention Paper 8141
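A lumped-parameter electromechanical model of the kind mentioned above can be sketched as a single mass-spring-damper coupled to a voice coil through the force factor Bl; the sketch below is illustrative only, with assumed component values, and is not the paper's finite element model.

```python
# Sketch: lumped-parameter model of an electromagnetic exciter driving a panel.
import math

Re, Le = 4.0, 0.1e-3          # coil resistance (ohm) and inductance (H), assumed
Bl = 2.0                      # force factor (N/A), assumed
Mm, Cm, Rm = 2e-3, 1e-4, 0.5  # moving mass (kg), compliance (m/N), damping (N*s/m)

def panel_velocity(f, voltage=1.0):
    """Complex panel velocity (m/s) at frequency f for a given drive voltage."""
    w = 2 * math.pi * f
    Ze = Re + 1j * w * Le                        # electrical impedance of the coil
    Zm = Rm + 1j * w * Mm + 1 / (1j * w * Cm)    # mechanical impedance of the panel
    # Coupled electromechanical system: v = Bl * U / (Ze * Zm + Bl^2)
    return Bl * voltage / (Ze * Zm + Bl ** 2)

for f in (100, 500, 2000):
    print(f, abs(panel_velocity(f)))
```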

P24-4 Tactile Music Instrument Recognition for Audio Mixers
Sebastian Merchel, Ercan Altinsoy, Maik Stamm, Dresden University of Technology - Dresden, Germany
Touch screens are not yet widely used for digital audio workstations, particularly audio mixing consoles. One reason is the ease of use and the intuitive tactile feedback that hardware faders, knobs, and buttons provide. Adding tactile feedback to touch screens would greatly improve their usability; in addition, touch screens can reproduce innovative extra tactile information. This paper investigates several design parameters for the generation of the tactile feedback. The results indicate that musical instruments can be distinguished if tactile feedback is rendered from the audio signal. This helps the user recognize the audio source assigned, e.g., to a specific mixing channel. Applying this knowledge makes the use of touch screens in audio applications more intuitive.
Convention Paper 8142
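One simple way to render tactile feedback from an audio signal, in the spirit of the abstract above, is to low-pass filter the signal envelope so that the rhythm and attack character of the instrument remain perceivable on a vibrating surface; this is an assumed rendering for illustration, not the paper's method.

```python
# Sketch: deriving a tactile drive signal from an audio channel.
import numpy as np

def tactile_signal(audio, sr, cutoff_hz=60.0):
    """Return a low-frequency envelope suitable for driving a tactile actuator."""
    envelope = np.abs(audio)                   # rectify to obtain the envelope
    # One-pole low-pass filter; tactile sensitivity is strongest below ~100 Hz.
    alpha = 1.0 - np.exp(-2 * np.pi * cutoff_hz / sr)
    out = np.empty_like(envelope)
    acc = 0.0
    for i, x in enumerate(envelope):
        acc += alpha * (x - acc)
        out[i] = acc
    return out

sr = 8000
t = np.arange(sr) / sr
drum_like = np.sin(2 * np.pi * 180 * t) * np.exp(-8 * t)  # percussive test tone
print(tactile_signal(drum_like, sr)[:5])
```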

P24-5 Augmented Reality Audio Editing
Jacques Lemordant, Yohan Lasorsa, INRIA - Rhône-Alpes, France
The concept of augmented reality audio (ARA) characterizes techniques in which a physically real sound and voice environment is extended with virtual, geolocalized sound objects. We show that the authoring of an ARA scene can be done through an iterative process composed of two stages: in the first, the author moves through the rendering zone to apprehend the audio spatialization and the chronology of the audio events; in the second, the sequencing of the sound sources and the DSP acoustic parameters are edited textually. This authoring process is based on the joint use of two XML languages: OpenStreetMap for maps and A2ML for interactive 3-D audio. Since A2ML is a format for a cue-oriented interactive audio system, requests for interactive audio services are made through TCDL, a tag-based cue dispatching language. This separation of modeling and audio rendering is similar to what is done for the web of documents with HTML and CSS style sheets.
Convention Paper 8143
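To make the idea of a geolocalized sound object concrete, the conceptual sketch below triggers a sound cue when the listener walks into its zone, with gain attenuated by distance; it does not use A2ML or TCDL, and all coordinates, names, and ranges are assumed values.

```python
# Sketch: a geolocalized sound object with a simple distance-based gain.
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters for nearby points (equirectangular)."""
    k = 111_320.0  # meters per degree of latitude, approximately
    dx = (lon2 - lon1) * k * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * k
    return math.hypot(dx, dy)

class GeoSoundObject:
    def __init__(self, name, lat, lon, radius_m):
        self.name, self.lat, self.lon, self.radius_m = name, lat, lon, radius_m

    def gain_for_listener(self, lat, lon):
        d = distance_m(self.lat, self.lon, lat, lon)
        if d > self.radius_m:
            return 0.0                      # outside the rendering zone: silent
        return 1.0 - d / self.radius_m      # simple linear attenuation with distance

fountain = GeoSoundObject("fountain_loop", 45.1885, 5.7245, radius_m=30.0)
print(fountain.gain_for_listener(45.1886, 5.7246))
```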

P24-6 Evaluation of a Haptic/Audio System for 3-D Targeting Tasks
Lorenzo Picinali, De Montfort University - Leicester, UK; Bob Menelas, Brian F. G. Katz, Patrick Bourdot, LIMSI-CNRS - Orsay, France
While common user interface designs tend to focus on visual feedback, other sensory channels may be used in order to reduce the cognitive load on the visual channel. In this paper non-visual environments are presented in order to investigate how users exploit information delivered through the haptic and audio channels. A first experiment is designed to explore the effectiveness of a haptic-audio system in a single-target localization task: a virtual magnet metaphor is exploited for the haptic rendering, while a parameter mapping sonification of the distance to the source, combined with 3-D audio spatialization, is used for the audio rendering. An evaluation is carried out in terms of the effectiveness of separate haptic and auditory feedback versus the combined multimodal feedback.
Convention Paper 8144
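A parameter mapping sonification of distance, as mentioned in the abstract above, can be sketched as follows: a shorter distance to the target yields a higher pulse rate and pitch, with 3-D spatialization handled separately. The mapping ranges are assumptions, not the authors' exact design.

```python
# Sketch: mapping normalized distance to sonification parameters.
def sonification_params(distance, max_distance=1.0):
    """Map a 3-D distance to pulse rate (Hz) and pitch (Hz)."""
    d = min(max(distance / max_distance, 0.0), 1.0)   # clamp to [0, 1]
    pulse_rate_hz = 2.0 + (1.0 - d) * 18.0            # 2 Hz far away, 20 Hz at the target
    pitch_hz = 220.0 * 2 ** ((1.0 - d) * 2.0)         # up to two octaves above 220 Hz
    return pulse_rate_hz, pitch_hz

for d in (1.0, 0.5, 0.05):
    print(d, sonification_params(d))
```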

P24-7 Track Displays in DAW Software: Beyond Waveform Views
Kristian Gohlke, Michael Hlatky, Sebastian Heise, David Black, Hochschule Bremen (University of Applied Sciences) - Bremen, Germany; Jörn Loviscach, Fachhochschule Bielefeld (University of Applied Sciences) - Bielefeld, Germany
For decades, digital audio workstation software has displayed the content of audio tracks as bare waveforms. We argue that the same screen real estate can be used for far more expressive and goal-oriented visualizations. Starting from a range of requirements and use cases, this paper discusses existing techniques from fields such as music visualization and music notation. It presents a number of novel techniques aimed at better fulfilling the needs of the human operator. To this end, the paper draws upon methods from signal processing and music information retrieval as well as computer graphics.
Convention Paper 8145
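One example of the kind of MIR-driven track visualization the abstract above alludes to is coloring a track display by spectral centroid per block, so that bright and dark passages are distinguishable at a glance; this is one of many possible visualizations and not necessarily one proposed in the paper.

```python
# Sketch: per-block spectral centroid of a track, usable as a color map.
import numpy as np

def block_centroids(audio, sr, block_size=1024):
    """Spectral centroid (Hz) of consecutive blocks of the signal."""
    centroids = []
    for start in range(0, len(audio) - block_size + 1, block_size):
        block = audio[start:start + block_size] * np.hanning(block_size)
        spectrum = np.abs(np.fft.rfft(block))
        freqs = np.fft.rfftfreq(block_size, 1.0 / sr)
        power = spectrum.sum()
        centroids.append(float((freqs * spectrum).sum() / power) if power > 0 else 0.0)
    return centroids

sr = 8000
t = np.arange(2 * sr) / sr
tone = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 2200 * t)
print(block_centroids(tone, sr)[:4])
```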