Thursday, October 28, 2:00 pm – 4:00 pm
Session Z1 Posters: MUSIC SYNTHESIS & AUDIO ARCHIVING, STORAGE AND RESTORATION; CONTENT MANAGEMENT
NOTE: During the first 10 minutes of the session all authors will present a brief outline of their presentation.
Z1-1 Frequency-Domain Additive Synthesis with an Oversampled Weighted Overlap-Add Filterbank for a Portable Low-Power MIDI Synthesizer
King Tam, Dspfactory Ltd., Waterloo, Ontario, Canada
This paper discusses a hybrid audio synthesis method employing both additive synthesis and DPCM audio playback, and the implementation of a miniature synthesizer system that accepts MIDI as an input format. Additive synthesis is performed in the frequency domain using a weighted overlap-add filterbank, providing efficiency gains compared to previously known methods. The synthesizer system is implemented on an ultra-miniature, low-power, reconfigurable application-specific digital signal processing platform. This low-resource MIDI synthesizer is suitable for portable, low-power devices such as mobile telephones and other portable communication devices. Several issues related to the additive synthesis method, DPCM codec design, and system trade-offs are discussed.
Convention Paper 6202
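The paper's filterbank design is not reproduced in this abstract. As an illustrative sketch of the general idea only (the parameter values and function name below are hypothetical, not the authors' implementation), sinusoidal partials can be written directly into frequency-domain frames and reconstructed by inverse FFT plus windowed overlap-add:

```python
import numpy as np

def wola_additive_synth(freqs_hz, amps, sr=8000, frame_len=256, hop=64, n_frames=100):
    """Sketch of frequency-domain additive synthesis: place each partial
    into its FFT bin with a phase that advances frame to frame, then
    reconstruct via inverse FFT, synthesis window, and overlap-add.
    For simplicity, partials are assumed to lie on FFT bin centers."""
    win = np.hanning(frame_len)
    out = np.zeros(hop * (n_frames - 1) + frame_len)
    for m in range(n_frames):
        t0 = m * hop  # frame start in samples; sets the running phase
        spec = np.zeros(frame_len, dtype=complex)
        for f, a in zip(freqs_hz, amps):
            k = int(round(f * frame_len / sr))       # nearest bin
            phase = 2 * np.pi * f * t0 / sr          # phase at frame start
            spec[k] = a * np.exp(1j * phase) * frame_len / 2
            spec[-k] = np.conj(spec[k])              # Hermitian symmetry -> real output
        frame = np.fft.ifft(spec).real
        out[t0:t0 + frame_len] += win * frame
    out /= np.sum(win[::hop])  # compensate the overlapped-window gain
    return out
```

Because each frame's phase is derived from absolute time, overlapping frames add coherently and the steady-state output is the desired sum of sinusoids.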
Z1-2 Virtual Air Guitar
Matti Karjalainen, Teemu Mäki-Patola, Aki Kanerva, Antti Huovilainen, Pekka Jänis, Helsinki University of Technology, Espoo, Finland
The Virtual Air Guitar combines hand-held controllers with a guitar synthesizer. The name refers to playing an air guitar, i.e., miming the playing along with music playback; the term virtual refers to making a playable synthetic instrument. Sensing of the left-hand position is used for pitch control, the right-hand movements for plucking, and the finger positions of both hands for the other features of sound production. The synthetic guitar algorithm supports electric as well as acoustic sounds, augmented with sound effects and intelligent mapping from playing gestures to synthesis parameters. The realization of the virtual instrument is described, and sound demonstrations are made available.
Convention Paper 6203
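The abstract does not detail the guitar algorithm itself. For context only, the classic Karplus-Strong plucked-string model, a simple relative of the waveguide techniques widely used for guitar synthesis, can be sketched as follows (illustrative parameters, not the authors' algorithm):

```python
import numpy as np

def karplus_strong(freq_hz, sr=44100, dur_s=1.0, decay=0.996, seed=0):
    """Minimal Karplus-Strong plucked string: a noise burst circulates in a
    delay line whose length sets the pitch; averaging adjacent samples acts
    as the string's loss filter, giving a natural decaying timbre."""
    rng = np.random.default_rng(seed)
    n = int(round(sr / freq_hz))         # delay-line length ~ one period
    buf = rng.uniform(-1, 1, n)          # the "pluck": fill the string with noise
    out = np.empty(int(sr * dur_s))
    for i in range(out.size):
        out[i] = buf[i % n]
        # two-point average low-passes the loop, so high partials die fastest
        buf[i % n] = decay * 0.5 * (buf[i % n] + buf[(i + 1) % n])
    return out
```

In a gesture-driven instrument like the one described, pluck events and pitch would be mapped from the sensed hand data onto such a string model's excitation and delay-line length.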
Z1-3 Visually Controlled Synthesis Using the Sonic Scanner and the Graphonic Interface
Dan Overholt, University of California at Santa Barbara, Santa Barbara, CA, USA, Studio for Electro-Instrumental Music, Amsterdam, The Netherlands
This paper describes the concepts, design, implementation, and evaluation of two new interfaces for music performance and composition, and their control of various synthesis algorithms through the visual domain. Both of the interfaces were inspired by the idea of generating music through drawing, but they approach the activity in very different ways. While the Graphonic Interface allows the user to make music while drawing, the Sonic Scanner needs precomposed graphic material in order to make music. However, both of the devices are real-time controllers that produce sound in an interactive manner, thereby allowing them to be used as performance instruments.
Convention Paper 6204
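The abstract does not specify how the graphic material is turned into sound. One common scheme for this kind of image-to-sound scanning (purely illustrative here; the function name and mapping are hypothetical, not the Sonic Scanner's actual method) treats each image column as a spectral frame, with rows driving oscillator amplitudes:

```python
import numpy as np

def sonify_image(img, sr=8000, col_dur=0.05, f_lo=200.0, f_hi=2000.0):
    """Illustrative 'graphic synthesis': scan a grayscale image left to
    right, mapping each row to a sinusoid on a logarithmic frequency
    ladder (top row = highest pitch) and pixel brightness to amplitude."""
    n_rows, n_cols = img.shape
    freqs = np.geomspace(f_lo, f_hi, n_rows)[::-1]
    n = int(sr * col_dur)                    # samples per image column
    out = np.zeros(n_cols * n)
    t = np.arange(out.size) / sr
    for r in range(n_rows):
        carrier = np.sin(2 * np.pi * freqs[r] * t)
        env = np.repeat(img[r], n)           # hold each column's brightness
        out += env * carrier
    return out / n_rows
```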
Z1-4 Mana, a Tool for Human-Supervised Statistical Analysis in Audio-Content Extraction
Nicolas Wack, Pompeu Fabra University, Barcelona, Spain
Nowadays, the extraction of metadata and audio descriptors (used, for instance, in classification) proceeds in a rather blind, brute-force manner: as many descriptors as possible are computed, and those that work are then selected through an often long and tedious statistical analysis. Moreover, this analysis barely takes into account the intrinsic meaning these descriptors may carry. Mana is a graphical user-interface (GUI) based system that aims at adding a degree of human supervision to the process, combining state-of-the-art classification methods (in audio-content extraction and classification) with an ease of use that provides the user with direct control over descriptors and their significance in classification.
Convention Paper 6205
Z1-5 Nonlinear Projection Algorithm for Evaluating Multiple Listener Equalization Performance
Sunil Bharitkar, Chris Kyriakakis, Audyssey Laboratories, Inc., Los Angeles, CA, USA
In this paper we present an application of a multidimensional-to-two-dimensional projection algorithm for visualizing room equalization at multiple locations. The algorithm allows easy visualization of the formation of clusters for our previously presented pattern-recognition-based multiple-listener room equalization filter. Furthermore, the mapping provides an interesting perspective on the formation of room response clusters. We also compare the results obtained from using the proposed map to the results obtained by using the spectral deviation measure.
Convention Paper 6206
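The authors' specific projection algorithm is not given in the abstract. A classic nonlinear multidimensional-to-two-dimensional projection in the same spirit is the Sammon mapping, sketched below (illustrative only; this is not claimed to be the paper's method), which lays out high-dimensional points, such as room responses, in 2-D while preserving pairwise distances so that clusters become visible:

```python
import numpy as np

def sammon(X, n_iter=300, lr=0.3, seed=0):
    """Sammon mapping by gradient descent: minimize the stress
    sum_{i<j} (d*_ij - d_ij)^2 / d*_ij, where d*_ij are distances in the
    original space and d_ij are distances in the 2-D embedding."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # target distances
    Dx[Dx == 0] = 1e-9
    Y = rng.normal(scale=1e-2, size=(n, 2))                # random 2-D start
    for _ in range(n_iter):
        diff = Y[:, None] - Y[None, :]                     # (n, n, 2)
        Dy = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(Dy, 1e-9)
        w = (Dy - Dx) / (Dx * Dy)                          # stress gradient weights
        np.fill_diagonal(w, 0.0)
        grad = 2 * (w[:, :, None] * diff).sum(axis=1)
        Y -= lr * grad / n
    return Y
```

Points whose room responses are similar end up close together in the plane, so equalization clusters can be inspected visually.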