AES Munich 2009
Technical Program
Paper Session P25

P25 - Sound Design and Processing

Sunday, May 10, 09:00 — 12:30
Chair: Michael Hlatky

P25-1 Hierarchical Perceptual Mixing
Alexandros Tsilfidis, Charalambos Papadakos, John Mourjopoulos, University of Patras - Patras, Greece
A novel technique of perceptually motivated, signal-dependent audio mixing is presented. The proposed Hierarchical Perceptual Mixing (HPM) method is implemented in the spectro-temporal domain; its principle is to combine only the perceptually relevant components of the audio signals, derived from the minimum masking threshold, which is calculated and introduced at the mixing stage. Objective measures indicate that the resulting signals have enhanced dynamic range and a lower crest factor with no unwanted artifacts, compared to traditionally mixed signals. The overall headroom is improved, while clarity and tonal balance are preserved.
Convention Paper 7789
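The masking-threshold idea in the abstract can be illustrated with a toy sketch. This is not the authors' HPM algorithm: a simple relative-level threshold stands in for a real psychoacoustic masking model, and the result is a magnitude spectrogram rather than a resynthesized mix.

```python
import numpy as np

def stft_mag(x, n_fft=1024, hop=512):
    """Magnitude STFT via a simple Hann-windowed frame loop."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def perceptual_mix(tracks, threshold_db=-40.0):
    """Toy spectro-temporal mix: in each time-frequency bin, discard
    components that fall below a crude 'masking' threshold relative to
    the loudest track in that bin, then sum the survivors."""
    mags = np.array([stft_mag(t) for t in tracks])   # (track, frame, bin)
    peak = mags.max(axis=0, keepdims=True)           # strongest track per bin
    mask = mags > peak * 10 ** (threshold_db / 20)   # keep only relevant bins
    return (mags * mask).sum(axis=0)                 # mixed magnitude spectrogram
```

In a real implementation the threshold would come from a psychoacoustic model, and phase information would be retained so the mix can be resynthesized.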

P25-2 Source-Filter Modeling in Sinusoid Domain
Wen Xue, Mark Sandler, Queen Mary, University of London - London, UK
This paper presents the theory and implementation of source-filter modeling in the sinusoid domain and its applications to timbre processing. The technique decomposes the instantaneous amplitude in a sinusoid model into a source part and a filter part, each capturing a different aspect of the timbral property. We show that sinusoid-domain source-filter modeling is approximately equivalent to its time- or frequency-domain counterparts. Two methods are proposed for evaluating the source and filter: a least-squares method based on the assumption that the source and filter vary slowly in time, and a filter-bank method that models the global spectral envelope in the filter. Tests show the effectiveness of the algorithms for isolating frequency-driven amplitude variations. Example applications demonstrate the use of the technique for timbre processing.
Convention Paper 7790
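The decomposition can be sketched under a strong simplifying assumption: treat the "filter" as a low-order polynomial spectral envelope fitted to the partials' log-amplitudes by least squares, with the per-partial residual as the "source". The paper's actual estimators are more elaborate; this only shows the shape of the idea.

```python
import numpy as np

def source_filter_split(log_amps, freqs, order=3):
    """Toy sinusoid-domain source-filter split: fit a low-order polynomial
    spectral envelope (the 'filter') to the partials' log-amplitudes by
    least squares; the per-partial residual is the 'source' part."""
    f = freqs / freqs.max()                 # normalized partial frequencies
    basis = np.vander(f, order + 1)         # (n_partials, order+1) polynomial basis
    coef, *_ = np.linalg.lstsq(basis, log_amps, rcond=None)
    filter_part = basis @ coef              # smooth spectral envelope
    source_part = log_amps - filter_part    # fine per-partial detail
    return source_part, filter_part
```

Because the two parts sum exactly to the original log-amplitudes, each can be processed independently (e.g., replacing the envelope to alter timbre) before resynthesis.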

P25-3 Analysis of a Modified Boss DS-1 Distortion Pedal
Matthew Schneiderman, Mark Sarisky, University of Texas at Austin - Austin, TX, USA
Guitar players are increasingly modifying (or paying someone else to modify) inexpensive mass-produced guitar pedals into boutique units. The Keeley modification of the Boss DS-1 is a prime example. In this paper we compare the measured and perceived performance of a Boss DS-1 before and after applying the Keeley All-Seeing-Eye and Ultra mods. This paper sheds light on psychoacoustics, signal processing, and guitar recording techniques in relation to low-fidelity guitar distortion pedals.
Convention Paper 7791
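One objective measure that a before/after pedal comparison might use is total harmonic distortion. The sketch below is illustrative only, not the measurement chain used in the paper:

```python
import numpy as np

def thd(signal, fs, f0, n_harmonics=5):
    """Total harmonic distortion: ratio of harmonic energy to fundamental
    energy, estimated from the Hann-windowed magnitude spectrum."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)

    def peak(f):
        # magnitude at the bin nearest frequency f
        return spec[np.argmin(np.abs(freqs - f))]

    fund = peak(f0)
    harm = np.sqrt(sum(peak(k * f0) ** 2 for k in range(2, n_harmonics + 2)))
    return harm / fund
```

Feeding a test tone through the stock and modified pedal and comparing THD curves across drive settings would quantify the "measured" half of such a comparison; the "perceived" half still needs listening tests.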

P25-4 Phase and Amplitude Distortion Methods for Digital Synthesis of Classic Analog Waveforms
Joseph Timoney, Victor Lazzarini, Brian Carty, NUI Maynooth - Maynooth, Ireland; Jussi Pekonen, Helsinki University of Technology - Espoo, Finland
An essential component of digital emulations of subtractive synthesizers is the set of algorithms used to generate the classic oscillator waveforms: sawtooth, square, and triangle. Not only should these be perceived as sonically authentic, but they should also exhibit minimal aliasing distortion and be computationally efficient to implement. This paper examines a set of novel techniques for producing the classic oscillator waveforms of analog subtractive synthesis, derived by applying amplitude or phase distortion to a mono-component input waveform. Expressions for the outputs of these distortion methods are given that allow parameter control to ensure properly bandlimited behavior. Additionally, their implementation is demonstrably efficient. Finally, the results presented illustrate their equivalence to the original analog counterparts.
Convention Paper 7792
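A well-known member of this family of techniques is the differentiated parabolic waveform (DPW), in which amplitude distortion (squaring) of a trivial sawtooth, followed by a scaled first difference, suppresses aliasing. The sketch below shows that idea; it is not necessarily the exact formulation used in the paper.

```python
import numpy as np

def dpw_sawtooth(f0, fs, n):
    """Reduced-aliasing sawtooth via the differentiated parabolic waveform:
    square a naive sawtooth (amplitude distortion), then take the scaled
    first difference to recover a sawtooth with suppressed aliasing."""
    phase = (f0 / fs * np.arange(n)) % 1.0     # trivial phase ramp in [0, 1)
    saw = 2.0 * phase - 1.0                    # naive (aliased) sawtooth
    parabola = saw * saw                       # amplitude distortion
    c = fs / (4.0 * f0)                        # scaling restores unit amplitude
    return c * np.diff(parabola, prepend=parabola[0])
```

The parabola is continuous across the phase wrap (both sides approach 1), so the differencing step never sees the hard discontinuity of the naive sawtooth; that is where the aliasing reduction comes from.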

P25-5 Soundscape Attribute Identification
Martin Ljungdahl Eriksson, Jan Berg, Luleå University of Technology - Luleå, Sweden
In soundscape research, the field's methods can be combined with approaches involving sound quality attributes in order to create a deeper understanding of sound images and soundscapes and of how these may be described and designed. The integration of four methods is outlined: two from the soundscape domain and two from the sound engineering domain.
Convention Paper 7793

P25-6 SonoSketch: Querying Sound Effect Databases through Painting
Michael Battermann, Sebastian Heise, Hochschule Bremen (University of Applied Sciences) - Bremen, Germany; Jörn Loviscach, Fachhochschule Bielefeld (University of Applied Sciences) - Bielefeld, Germany
Numerous techniques support finding sounds that are acoustically similar to a given one. It is hard, however, to find a sound with which to start the similarity search. Inspired by image-search systems that allow drawing the shape to be found, we address quick input for audio retrieval. In our system, the user literally sketches a sound effect, placing curved strokes on a canvas. Each stroke represents one sound from a collection of basic sounds. The audio feedback is interactive, as is the continuous update of the list of retrieval results. The retrieval is based on symbol sequences formed from MFCC data, compared with the help of a neural net using an edit distance that allows for small temporal changes.
Convention Paper 7794
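The retrieval step, quantizing MFCC frames into symbol sequences and comparing them with an edit distance, can be sketched as follows. Plain nearest-centroid quantization and a standard Levenshtein distance stand in here for the system's actual codebook and neural-net comparison.

```python
import numpy as np

def symbolize(mfcc_frames, centroids):
    """Map each MFCC frame to the index of its nearest centroid symbol."""
    d = np.linalg.norm(mfcc_frames[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def edit_distance(a, b):
    """Levenshtein distance between two symbol sequences, tolerating
    small temporal insertions, deletions, and substitutions."""
    dp = np.arange(len(b) + 1)              # one row of the DP table
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i              # prev holds dp[i-1][j-1]
        for j in range(1, len(b) + 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,                  # deletion
                                     dp[j - 1] + 1,              # insertion
                                     prev + (a[i - 1] != b[j - 1]))  # substitution
    return dp[len(b)]
```

Ranking the database by edit distance between the sketch's symbol sequence and each stored sound's sequence yields the continuously updated result list described above.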

P25-7 Generic Sound Effects to Aid in Audio Retrieval
David Black, Sebastian Heise, Hochschule Bremen (University of Applied Sciences) - Bremen, Germany; Jörn Loviscach, Fachhochschule Bielefeld (University of Applied Sciences) - Bielefeld, Germany
Sound design applications are often hampered because the sound engineer must either produce new sounds using physical objects or search through a database of sounds to find a suitable sample. We created a set of basic sounds that mimic such physical sound-producing objects, leveraging the mind's onomatopoetic clustering capabilities. These sounds, grouped into onomatopoetic categories, aid the sound designer in music information retrieval (MIR) and sound categorization applications. In initial testing, in which participants grouped individual sounds by similarity, participants tended to cluster certain sounds together, often reflecting the groupings our team had constructed.
Convention Paper 7795