AES Conventions and Conferences

P14 - Analysis and Synthesis of Sound

Saturday, October 7, 8:30 am — 11:30 am

Chair: Duane Wise, Consultant - Boulder, CO, USA

P14-1 Determining the Need for Dither when Re-Quantizing a 1-D Signal
Carlos Fabian Benitez-Quiroz, Shawn D. Hunt, University of Puerto Rico - Mayaguez, Puerto Rico
This paper presents novel methods for determining whether dither is needed when reducing the bit depth of a one-dimensional digital signal. The methods are statistical, operate in both the time and frequency domains, and are based on determining whether the quantization noise with no dither added is white. If it is, no undesired harmonics are added in the quantization or re-quantization process. Experiments showing the effectiveness of the methods on both synthetic and real audio signals are presented.
Convention Paper 6931
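
As a concrete illustration of the whiteness criterion above, here is a minimal Python sketch, not the authors' exact statistics: a hypothetical quantization_error helper produces the undithered requantization error, and a simple autocorrelation test checks it for whiteness (for white noise of length n, normalized autocorrelations at nonzero lags are approximately N(0, 1/n)).

```python
import numpy as np

def quantization_error(x, bits):
    # Requantize floats in [-1, 1) to `bits` bits (midtread, no dither)
    # and return the quantization error signal. Hypothetical helper.
    q = 2.0 ** (1 - bits)          # quantizer step for a [-1, 1) range
    return np.round(x / q) * q - x

def error_is_white(e, lags=50, z=3.0):
    # Crude time-domain whiteness test: under whiteness, the normalized
    # autocorrelation at each nonzero lag is approximately N(0, 1/n).
    e = e - e.mean()
    n = len(e)
    r0 = np.dot(e, e)
    rho = np.array([np.dot(e[:-k], e[k:]) / r0 for k in range(1, lags + 1)])
    return bool(np.all(np.abs(rho) < z / np.sqrt(n)))

# A low-level pure tone typically yields periodic, correlated error
# (harmonic distortion), signaling that dither is needed.
fs = 48000
t = np.arange(fs) / fs
tone = 0.01 * np.sin(2 * np.pi * 1000.0 * t)
print(error_is_white(quantization_error(tone, 8)))   # expect False: dither needed
```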

P14-2 Shape-Changing Symmetric Objects for Sound Synthesis
Cynthia Bruyns, David Bindel, University of California at Berkeley - Berkeley, CA, USA
In the last decade, many researchers have used modal synthesis for sound generation. Using a modal decomposition, one can convert a large system of coupled differential equations into simple, independent differential equations in one variable. To synthesize sound from the system, one solves these decoupled equations numerically, which is much more efficient than solving the original coupled system. For large systems, such as those obtained from finite-element analysis of a musical instrument, the initial modal decomposition is time-consuming. To design instruments from physical simulation, one would like to be able to compute modes in real-time, so that the geometry, and therefore spectrum, of an instrument can be changed interactively. In this paper we describe how to quickly compute modes of instruments that have rotational symmetry in order to synthesize sounds of new instruments quickly enough for interactive instrument design.
Convention Paper 6932
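
For context on the synthesis step the abstract relies on: once a modal decomposition is available, each decoupled equation describes a damped oscillator with a closed-form solution, so sound generation reduces to summing damped sinusoids. The sketch below shows only that modal-superposition step; the modal data are invented for illustration, and the paper's actual contribution (fast mode computation for rotationally symmetric geometry) is not reproduced here.

```python
import numpy as np

def modal_synthesis(freqs, dampings, amps, fs=44100, dur=2.0):
    # Each decoupled modal equation has the closed-form solution
    # y_i(t) = a_i * exp(-d_i * t) * sin(2*pi*f_i*t); the output is their sum.
    t = np.arange(int(fs * dur)) / fs
    out = np.zeros_like(t)
    for f, d, a in zip(freqs, dampings, amps):
        out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out / np.max(np.abs(out))

# Invented, bell-like modal data (illustrative only, not from the paper).
y = modal_synthesis(freqs=[220.0, 548.5, 1012.3],
                    dampings=[3.0, 5.0, 9.0],
                    amps=[1.0, 0.6, 0.3])
```

Changing the geometry interactively amounts to recomputing freqs and dampings and rerunning this superposition, which is why fast mode computation matters.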

P14-3 Unisong: A Choir Singing Synthesizer
Jordi Bonada, Merlijn Blaauw, Alex Loscos, Universitat Pompeu Fabra - Barcelona, Spain; Kenmochi Hideki, YAMAHA Corporation - Hamamatsu, Japan
Computer-generated choir singing can be synthesized by two means: clone transformation of a single voice or concatenation of snippets of real choir recordings. To date, the synthesis quality of these two methods lacks naturalness and intelligibility, respectively. Unisong is a new concatenation-based choir singing synthesizer able to generate a high-quality synthetic performance from the score and lyrics specified by the user. This paper describes the actions and techniques involved in generating the virtual performance: design and realization of the choir recording scripts, human-supervised automatic segmentation of the recordings, creation of the sample database, and sample acquisition, transformation, and concatenation. The synthesizer will be demonstrated with a song sample.
Convention Paper 6933
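
The final concatenation stage of such a pipeline can be sketched generically. The equal-power crossfade joiner below is a common seam-hiding device in concatenative synthesis and is only an illustration, not Unisong's actual transformation or concatenation machinery.

```python
import numpy as np

def concatenate_xfade(snippets, fs=44100, xfade_ms=30.0):
    # Join snippets with equal-power crossfades: sin/cos ramps satisfy
    # fade_in**2 + fade_out**2 == 1, keeping loudness steady across seams.
    n = int(fs * xfade_ms / 1000.0)
    fade_in = np.sin(0.5 * np.pi * np.linspace(0.0, 1.0, n))
    fade_out = fade_in[::-1]
    out = snippets[0].astype(float).copy()
    for s in snippets[1:]:
        s = s.astype(float)
        out[-n:] = out[-n:] * fade_out + s[:n] * fade_in
        out = np.concatenate([out, s[n:]])
    return out

# Illustrative use with synthetic "snippets"; a real system would pull
# pitch- and time-transformed samples from the recorded database.
fs = 44100
t = np.arange(fs // 2) / fs
snips = [np.sin(2 * np.pi * f * t) for f in (220.0, 247.0, 262.0)]
out = concatenate_xfade(snips, fs)
```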

P14-4 Accurate Low-Frequency Magnitude and Phase Estimation in the Presence of DC and Near-DC Aliasing
Kevin Short, University of New Hampshire - Durham, NH, USA, and Groove Mobile - Bedford, MA, USA; Ricardo Garcia, Groove Mobile - Bedford, MA, USA
Efficient high-resolution parameter estimation of sinusoidal elements has been shown to be of fundamental importance in applications such as measurement, parametric decomposition of signals, and low bit-rate audio coding. Methods such as the Complex Spectral Phase Evolution (CSPE) can be used to estimate the true frequency, magnitude, and phase of the underlying tones in a signal with accuracy significantly finer than the resolution of a transform-based analysis. These methods usually require the signal elements to be spectrally separated so that their mutual interference is minimal (a separation often referred to as the “analysis window main lobe width”). This paper extends the CSPE method to low-frequency real-tone signals, where interference or “leakage” from the negative frequencies is unavoidable regardless of the analysis window used. The new technique gives improved magnitude and phase estimates for the sinusoidal parameters.
Convention Paper 6934
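
The baseline CSPE idea that this paper extends is compact enough to sketch: analyze a frame and the same frame advanced by one sample; for a spectrally isolated tone, the phase advance per sample at the dominant bin equals 2*pi*f/fs, recovering f far more precisely than the FFT bin spacing. The sketch below shows that baseline only and omits the paper's low-frequency correction, which handles the negative-frequency leakage that breaks this simple form near DC.

```python
import numpy as np

def cspe_frequency(x, fs, nfft=4096):
    # Compare the windowed spectrum of a frame with that of the same frame
    # advanced by one sample: for an isolated tone, the per-sample phase
    # advance at the dominant bin is 2*pi*f/fs, giving f to far better
    # precision than the fs/nfft bin spacing.
    w = np.hanning(nfft)
    f0 = np.fft.rfft(x[:nfft] * w)
    f1 = np.fft.rfft(x[1:nfft + 1] * w)
    k = np.argmax(np.abs(f0))                  # dominant spectral peak
    omega = np.angle(f1[k] * np.conj(f0[k]))   # radians per sample
    return omega * fs / (2.0 * np.pi)

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440.25 * t)             # deliberately off-bin tone
print(cspe_frequency(x, fs))                   # ~440.25 Hz; bin width ~10.8 Hz
```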

P14-5 Frequency Domain Phase Model of Transient Events
Kevin Short, University of New Hampshire - Durham, NH, USA, and Groove Mobile - Bedford, MA, USA
Short-time transient events are extremely challenging to represent in the transform domain employed by common transform-based codecs used in applications such as audio compression. These events last much less than a typical data window and consequently have power distributed throughout the transform domain, so representing them accurately there requires higher bit rates than are usually available. A common solution is window switching, where smaller windows are used around transient events, but this also has a negative impact on the bit rate. In this paper we show that, with certain simplifying assumptions, transient reconstruction can be reduced to a tractable problem carried out in the frequency domain, so that the transient event can easily be mixed in with the representation of the nontransient events. A closed-form frequency-domain representation for the phase of a transient event is introduced, and it is shown that it can be applied iteratively to represent increasingly complex transient structures back in the time domain.
Convention Paper 6935
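
The frequency-domain starting point for such a model can be illustrated directly: an impulsive event at sample n0 corresponds, to first approximation, to a linear phase ramp exp(-j*2*pi*k*n0/N) across the bins. The sketch below (the standard linear-phase observation, not the paper's closed-form model) builds such a spectrum and confirms that the inverse transform localizes the energy at the modeled instant.

```python
import numpy as np

def transient_spectrum(mag, n0, nfft):
    # Model an impulsive event at sample n0 as a magnitude envelope with a
    # linear phase ramp exp(-j*2*pi*k*n0/nfft) across the rfft bins; when
    # mag == 1 this is exactly the spectrum of a delta delayed by n0 samples.
    k = np.arange(nfft // 2 + 1)
    return mag * np.exp(-2j * np.pi * k * n0 / nfft)

nfft = 1024
X = transient_spectrum(np.ones(nfft // 2 + 1), n0=300, nfft=nfft)
x = np.fft.irfft(X, nfft)
print(np.argmax(np.abs(x)))   # 300: energy localized at the modeled instant
# Because the model lives in the transform domain, it can be summed with
# tonal components there before a single inverse transform.
```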

P14-6 Doing Good by the “Bad Boy”: Performing George Antheil’s Ballet mécanique with Robots
Paul Lehrman, Tufts University - Medford, MA, USA; Eric Singer, League of Electronic Music Urban Robots - Brooklyn, NY, USA
The Ballet mécanique, by George Antheil, was a musical composition far ahead of its time. Written in 1924, it required technology that didn't exist: multiple synchronized player pianos. Not until 1999, with the aid of computers and MIDI, could the piece be performed the way the composer envisioned it. Since then, it has been played over 20 times in North America and Europe. But its most unusual performance was the result of a collaboration between the authors: one the music technologist who revived the piece, the other a musical robotics expert. At the request of the National Gallery of Art in Washington, DC, they built a completely automated 27-piece orchestra, which played the piece nearly 100 times without a serious failure.
Convention Paper 6936




(C) 2006, Audio Engineering Society, Inc.