AES New York 2009
Poster Session P6
P6 - Production and Analysis of Musical Sounds
Friday, October 9, 3:30 pm — 5:00 pm
P6-1 Automated Cloning of Recorded Sounds by Software Synthesizers—Sebastian Heise, Michael Hlatky, Hochschule Bremen (University of Applied Sciences) - Bremen, Germany; Jörn Loviscach, Fachhochschule Bielefeld (University of Applied Sciences) - Bielefeld, Germany
Any audio recording can be turned into a digital musical instrument by feeding it into an audio sampler. However, it is difficult to edit such a sound in musical terms or even to control it in real time with musical expression. Even the application of a more sophisticated synthesis method will show little change. Many composers of electronic music appreciate the direct and clear access to sound parameters that a traditional analog synthesizer offers. Is it possible to automatically generate a synthesizer setting that approximates a given audio recording and thus clone a given sound to be controlled with the standard functions of the particular synthesizer employed? Even though this problem seems highly complex, we demonstrate that its solution becomes feasible with computer systems available today. We compare sounds on the basis of acoustic features known from Music Information Retrieval and apply a specialized optimization strategy to adjust the settings of VST instruments. This process is sped up using multi-core processors and networked computers.
Convention Paper 7858 (Purchase now)
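The paper compares sounds via acoustic features and drives an optimizer toward the synthesizer setting that minimizes the feature distance. As a rough illustration (the actual feature set and optimizer are not specified in the abstract), the objective function might look like this; the 20-band log-spectrum features here are a hypothetical stand-in for the MIR features the authors use:

```python
import numpy as np

def spectral_features(signal, n_fft=1024):
    """Crude acoustic features: log-mean magnitudes in 20 coarse bands.

    Stand-in for the MIR features (e.g., MFCCs) used in the paper.
    """
    spec = np.abs(np.fft.rfft(signal, n_fft))
    bands = np.array_split(spec, 20)
    return np.log1p(np.array([b.mean() for b in bands]))

def feature_distance(target_sig, candidate_sig):
    """Objective for the optimizer: distance between target recording
    and the candidate VST rendering in feature space."""
    return float(np.linalg.norm(spectral_features(target_sig) -
                                spectral_features(candidate_sig)))
```

An optimization strategy would repeatedly render the VST instrument with candidate parameter settings and minimize `feature_distance` against the target recording.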
P6-2 Low-Latency Conversion of Audible Guitar Tones into Visible Light Colors—Nermin Osmanovic, Microsoft Corporation - Redmond, WA, USA
An automated sound-to-color transformation system displays a color that matches the note being played on the guitar at the same instant. One application is an intelligent color-effect “light show” for live instruments on stage during a performance. Using time and frequency information from the input signal, a computer analyzes sound events and determines which tone is currently being played. Knowing which guitar tone is sounding at the audio input provides the basis for a digital sound-to-light converter. The converter streams live audio input, analyzes frames based on the signal’s power threshold, determines the fundamental frequency of the current tone, maps this information to a color, and displays the targeted light color in real time. The final implementation includes a full-screen presentation mode with real-time display of both the pitch and the intensity of the sound.
Convention Paper 7859 (Purchase now)
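The abstract does not specify the actual tone-to-color mapping, but the fundamental-frequency-to-color step could be sketched as follows; the log-pitch-to-hue scheme and the guitar range (low E2 to high E6) are assumptions for illustration:

```python
import colorsys
import math

def freq_to_color(f0, f_min=82.41, f_max=1318.5):
    """Map a guitar fundamental frequency (Hz) to an RGB color.

    Hypothetical mapping: position of the pitch on a log-frequency
    scale becomes the hue, so each octave sweeps a fixed fraction of
    the color wheel.  f_min/f_max default to the open low E string
    (E2) and the 24th fret of the high E string (E6).
    """
    f0 = min(max(f0, f_min), f_max)  # clamp to the guitar range
    pos = math.log2(f0 / f_min) / math.log2(f_max / f_min)
    r, g, b = colorsys.hsv_to_rgb(pos * 0.8, 1.0, 1.0)  # hue capped to avoid red wrap-around
    return int(r * 255), int(g * 255), int(b * 255)
```

In a real-time converter, this function would be called once per analyzed frame whose power exceeds the detection threshold.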
P6-3 TheremUS: The Ultrasonic Theremin—André Gomes, Globaltronic-Electrónica e Telecomunicaçoes - Águeda, Portugal; Daniel Albuquerque, Guilherme Campos, José Vieira, University of Aveiro - Aveiro, Portugal
In the Theremin, the performer’s hand movements, detected by two antennas, control the pitch and volume of the generated sound. The TheremUS builds on this concept by using ultrasonic sensing for hand position detection and processing all signals digitally, a distinct advantage in terms of versatility. Not only can different sound synthesis algorithms be programmed directly on the instrument but also it can be easily connected to other digital sound synthesis or multimedia devices; a MIDI interface was included for this purpose. The TheremUS also features translucent panels lit by controllable RGB LED devices. This makes it possible to specify sound-color mappings in the spirit of the legendary Ocular Harpsichord by Castel.
Convention Paper 7860 (Purchase now)
P6-4 Structural Segmentation of Irish Traditional Music Using Chroma at Set Accented Tone Locations—Cillian Kelly, Mikel Gainza, David Dorran, Eugene Coyle, Dublin Institute of Technology - Dublin, Ireland
An approach is presented that provides a structural segmentation of Irish Traditional Music. Chroma information is extracted at certain locations within the music, and the resulting chroma vectors are compared to determine similar structural segments. Chroma is only calculated at “set accented tone” locations within the music. Set accented tones are considered to be impervious to melodic variation and entirely representative of an Irish Traditional tune. Results show that comparing set accented tones represented by chroma significantly increases the structural segmentation accuracy compared to representing set accented tones by pitch values.
Convention Paper 7861 (Purchase now)
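The comparison of chroma vectors at set accented tone locations could be implemented with a standard similarity measure; the cosine similarity and averaging threshold below are illustrative assumptions, not the paper's published method:

```python
import numpy as np

def chroma_similarity(c1, c2):
    """Cosine similarity between two 12-bin chroma vectors (1.0 = identical)."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    return float(np.dot(c1, c2) / (np.linalg.norm(c1) * np.linalg.norm(c2)))

def segments_match(seq_a, seq_b, threshold=0.9):
    """Compare two candidate segments via the chroma vectors extracted at
    their set accented tone locations; declare a match if the average
    similarity exceeds the threshold."""
    sims = [chroma_similarity(a, b) for a, b in zip(seq_a, seq_b)]
    return float(np.mean(sims)) >= threshold
```

Because only the set accented tones are compared, ornamentation and melodic variation between repetitions of a part do not disturb the match.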
P6-5 Rendering Audio Using Expressive MIDI—Stephen J. Welburn, Mark D. Plumbley, Queen Mary, University of London - London, UK
MIDI renderings of audio are traditionally regarded as lifeless and unnatural—lacking in expression. However, MIDI is simply a protocol for controlling a synthesizer. Lack of expression is caused by either an expressionless synthesizer or by the difficulty in setting the MIDI parameters to provide expressive output. We have developed a system to construct an expressive MIDI representation of an audio signal, i.e., an audio representation that uses tailored pitch variations in addition to the note base pitch parameters that audio-to-MIDI systems usually attempt. A pitch envelope is estimated from the original audio, and a genetic algorithm is then used to estimate pitch modulator parameters from that envelope. These pitch modulations are encoded in a MIDI file and rerendered using a sampler. We present some initial comparisons between the final output audio and the estimated pitch envelopes.
Convention Paper 7862 (Purchase now)
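The genetic-algorithm step can be illustrated with a toy example; the sinusoidal vibrato model with a (rate, depth) chromosome and the selection/mutation scheme below are simplifying assumptions, since the paper's actual modulator parameterization is not given in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def vibrato(t, rate, depth):
    """Hypothetical pitch modulator: sinusoidal deviation (semitones) over time."""
    return depth * np.sin(2 * np.pi * rate * t)

def fit_modulator(t, target_envelope, generations=60, pop_size=40):
    """Toy genetic algorithm: evolve (rate, depth) so the modulator output
    matches an estimated pitch envelope in a least-squares sense."""
    pop = rng.uniform([0.1, 0.0], [10.0, 2.0], size=(pop_size, 2))
    for _ in range(generations):
        err = np.array([np.mean((vibrato(t, r, d) - target_envelope) ** 2)
                        for r, d in pop])
        best = pop[np.argsort(err)[: pop_size // 4]]          # selection (elitist)
        parents = best[rng.integers(len(best), size=pop_size - len(best))]
        children = parents + rng.normal(0.0, 0.05, parents.shape)  # mutation
        pop = np.vstack([best, children])
    err = np.array([np.mean((vibrato(t, r, d) - target_envelope) ** 2)
                    for r, d in pop])
    return pop[np.argmin(err)]
```

The fitted parameters would then be written into the MIDI file as pitch-modulation data and rerendered by the sampler.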
P6-6 Editing MIDI Data Based on the Acoustic Result—Sebastian Heise, Michael Hlatky, Hochschule Bremen (University of Applied Sciences) - Bremen, Germany; Jörn Loviscach, Fachhochschule Bielefeld (University of Applied Sciences) - Bielefeld, Germany
MIDI commands provide an abstract representation of audio in terms of note-on and note-off times, velocity, and controller data. The relationship of these commands to the actual audio signal is dependent on the actual synthesizer patch in use. Thus, it is hard to implement effects such as compression of the dynamic range or time correction based on MIDI commands alone. To improve on this, we have developed software that silently computes a software synthesizer’s audio output on each parameter update to support editing of the MIDI data based on the resulting audio data. Precise alignment of sounds to the beat, sample-correct changes in articulation, and musically meaningful dynamic compression through velocity data become possible.
Convention Paper 7863 (Purchase now)
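The "musically meaningful dynamic compression through velocity data" could work along these lines; the data layout, threshold/ratio parameters, and the mapping from rendered peak level back to velocity are illustrative assumptions:

```python
def compress_velocities(notes, measured_peaks, threshold=0.5, ratio=4.0):
    """Audio-aware dynamic compression of MIDI velocities (sketch).

    notes:          list of dicts with a 'velocity' key (1-127).
    measured_peaks: per-note peak amplitude (0.0-1.0) obtained by silently
                    rendering the synthesizer's output, as in the paper.
    Notes whose *rendered* level exceeds the threshold get their velocity
    scaled down; quiet notes are left untouched.
    """
    out = []
    for note, peak in zip(notes, measured_peaks):
        v = note["velocity"]
        if peak > threshold:
            # compressed target level, mapped back to a velocity scale factor
            target = threshold + (peak - threshold) / ratio
            v = max(1, min(127, round(v * target / peak)))
        out.append({**note, "velocity": v})
    return out
```

The key point of the paper is that the decision is driven by the measured audio output of the actual patch, not by the velocity values alone, since the same velocity can produce very different levels on different patches.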
P6-7 Sound Production and Audio Programming of the Sound Installation GROMA—Judith Nordbrock, Martin Rumori, Academy of Media Arts Cologne - Cologne, Germany
In this paper we describe the sound production of GROMA, in particular the mixing scenarios and the audio programming. GROMA is a permanent urban sound installation that incorporates early historic texts on urbanization combined with environmental sounds from two of Cologne’s partner cities, Rotterdam and Liège, in an algorithmic multichannel composition. The installation was inaugurated in 2008 at the site of Europe's largest underground parking lot in Cologne, in the Rheinauhafen area. Producing the sound material required the development of special methods that allow for fine-grained aesthetic design of the sound in this unconventional venue and that also support the aleatoric combination of the sound situations.
Convention Paper 7864 (Purchase now)