AES Barcelona 2005
Paper Session F - Analysis and Synthesis of Sound


Sunday, May 29, 09:00 — 13:00

Chair: Matti Karjalainen, Helsinki University of Technology - Espoo, Finland

F-1 Sound Analysis and Synthesis through Complex Bandpass Filter Banks
Jose R. Beltran, Jesus Ponce de Leon, University of Zaragoza - Zaragoza, Spain
In this paper a mathematical analysis based on the Complex Continuous Wavelet Transform over different theoretical signals is presented. The mathematical relationships between the Complex Continuous Wavelet Transform, the Hilbert Transform, and complex filter banks are presented in order to obtain useful filter bank design parameters. The mathematical analysis of three different signals is presented: a pure cosine, a sum of cosines, and a signal with frequency variations. The obtained theoretical results allow us to define a new algorithm to obtain an additive model of the input signal.
Convention Paper 6361
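To illustrate the kind of complex bandpass analysis the abstract relates to the Complex Continuous Wavelet Transform and the Hilbert Transform, here is a minimal sketch, not the authors' filter design: a bank of complex Gabor bandpass filters applied to a pure cosine, with instantaneous amplitude and frequency read from each band's complex output, as an additive (sinusoidal) model would require. All filter parameters and band centers are illustrative assumptions.

```python
# Minimal sketch (not the paper's filter design): analyze a test cosine with a
# bank of complex Gabor bandpass filters and read instantaneous amplitude and
# frequency from each complex band output, as an additive model would need.
import numpy as np

fs = 8000.0
t = np.arange(0, 0.5, 1.0 / fs)
x = np.cos(2 * np.pi * 440.0 * t)            # pure cosine test signal

def gabor_kernel(fc, fs, bw=50.0):
    """Complex Gabor (modulated Gaussian) kernel centered at fc Hz."""
    sigma = fs / (2 * np.pi * bw)             # time spread from bandwidth (illustrative)
    n = np.arange(-4 * sigma, 4 * sigma + 1)
    g = np.exp(-0.5 * (n / sigma) ** 2)
    g /= g.sum()
    return g * np.exp(1j * 2 * np.pi * fc * n / fs)

centers = np.geomspace(100.0, 2000.0, 24)     # illustrative log-spaced band centers
for fc in centers:
    y = np.convolve(x, gabor_kernel(fc, fs), mode="same")        # complex band output
    amp = np.abs(y)                                               # instantaneous amplitude
    inst_f = np.diff(np.unwrap(np.angle(y))) * fs / (2 * np.pi)   # instantaneous frequency
    if amp.max() > 0.1:                                           # keep only active bands
        print(f"band {fc:7.1f} Hz -> peak amp {amp.max():.2f}, "
              f"median inst. freq {np.median(inst_f):.1f} Hz")
```

Only the bands around 440 Hz respond, and their phase derivative recovers the input frequency; collecting these amplitude and frequency tracks across bands is one way to assemble the additive model the abstract refers to.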

F-2 Voice Solo to Unison Choir Transformation
Jordi Bonada, Universitat Pompeu Fabra - Barcelona, Spain
In this paper we present a transformation that aims to convert a solo voice into a large, unison choir. The basic idea behind the presented algorithm is to morph the input voice solo (dry recording) with a recorded sustained vowel of a unison choir. The processing algorithm is based on the rigid phase-locked vocoder adapted to harmonic sounds. Pitch and timbre are taken from the voice solo, and the local spectrum comes from the analysis of the unison choir sample.
Convention Paper 6362
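A rough sketch of the general idea, not the paper's phase-locked vocoder: per analysis frame, keep the solo's spectral envelope (timbre) but borrow the fine spectral structure of a sustained unison-choir frame, then resynthesize by overlap-add. The toy signals and the moving-average envelope estimate are assumptions made only for illustration.

```python
# Illustrative morph sketch (not the paper's phase-locked vocoder): re-color
# the choir's fine spectral structure with the solo's spectral envelope.
import numpy as np

fs, N, hop = 44100, 2048, 512
win = np.hanning(N)

def smooth_envelope(mag, width=32):
    """Crude spectral envelope: moving average of the magnitude spectrum."""
    kernel = np.ones(width) / width
    return np.convolve(mag, kernel, mode="same") + 1e-12

def morph_frame(solo_frame, choir_frame):
    S = np.fft.rfft(win * solo_frame)
    C = np.fft.rfft(win * choir_frame)
    env_s = smooth_envelope(np.abs(S))
    env_c = smooth_envelope(np.abs(C))
    Y = C / env_c * env_s            # choir "local spectrum", solo envelope
    return np.fft.irfft(Y, n=N) * win

# toy signals standing in for real recordings (assumption: sustained vowels)
t = np.arange(N + 8 * hop) / fs
solo = np.sin(2 * np.pi * 220 * t)
choir = sum(np.sin(2 * np.pi * (220 + d) * t) for d in (-3.0, -1.0, 0.0, 1.5, 3.0)) / 5

out = np.zeros_like(solo)
for i in range(0, len(solo) - N, hop):
    out[i:i + N] += morph_frame(solo[i:i + N], choir[i:i + N])
```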

F-3 A Comparison of Sound Onset Detection Algorithms with Emphasis on Psychoacoustically Motivated Detection Functions
Nick Collins, Cambridge University - Cambridge, UK
While many onset detection algorithms for musical events in audio signals have been proposed, comparative studies of their efficacy for segmentation tasks are much rarer. This paper follows the lead of Bello et al. using the same hand-marked test database as a benchmark for comparison. That previous paper did not include in the comparison a psychoacoustically motivated algorithm originally proposed by Klapuri in 1999, an oversight that is corrected herein with respect to a number of variants of that model. Primary test domains are formed of nonpitched percussive (NPP) and pitched nonpercussive (PNP) sound events. Sixteen detection functions are investigated, including a number of novel and recently published models. Different detection functions are seen to perform well in each case, with substantially worse onset detection overall for the PNP case. It is contended that the NPP case is effectively solved by fast intensity change discrimination processes, but that stable pitch cues may provide a better tactic for the latter.
Convention Paper 6363
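For orientation, here is one of the simplest detection functions in the family such comparisons cover, half-wave rectified spectral flux with naive peak picking. It is a stand-in for illustration only and does not reproduce Klapuri's psychoacoustic model or any of the sixteen functions evaluated in the paper.

```python
# Minimal sketch: half-wave rectified spectral flux as an onset detection
# function, followed by simple threshold-based peak picking.
import numpy as np

def onset_detection_function(x, n_fft=1024, hop=512):
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft, hop)]
    mags = np.array([np.abs(np.fft.rfft(f)) for f in frames])
    diff = np.diff(mags, axis=0)
    flux = np.maximum(diff, 0).sum(axis=1)      # count only rises in energy
    return flux / (flux.max() + 1e-12)

def pick_peaks(df, threshold=0.3):
    """Local maxima above a fixed threshold (returns frame indices)."""
    return [i for i in range(1, len(df) - 1)
            if df[i] > threshold and df[i] >= df[i - 1] and df[i] > df[i + 1]]

# toy percussive test signal: two clicks buried in noise
fs = 22050
x = 0.01 * np.random.randn(fs)
x[5000] += 1.0
x[15000] += 1.0
print("onset frames:", pick_peaks(onset_detection_function(x)))
```

Intensity-change functions of this kind behave well on nonpitched percussive material, which matches the paper's contention that the NPP case is essentially solved; pitched nonpercussive onsets need pitch-based cues instead.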

F-4 Automatic Characterization of Dynamics and Articulation of Expressive Monophonic Recordings
Esteban Maestre, Emilia Gomez, Universitat Pompeu Fabra - Barcelona, Spain
We describe a method to automatically extract from the audio signal a set of features related to musical expressivity, specifically dynamics and articulation. We define a description scheme based on intra-note segmentation into attack, sustain, release, and transition segments, as well as a subsequent amplitude and pitch contour characterization. We then present a series of algorithms to automatically perform intra-note segmentation and extract features related to expressivity. We evaluate the performance of the methods for intra-note segmentation and feature extraction on a saxophone database of jazz standards and other recordings featuring expressive resources. Finally, we propose some future work and applications.
Convention Paper 6364
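A hedged sketch of what amplitude-based intra-note segmentation can look like: an RMS envelope plus simple threshold rules to mark the attack end and release start of a single note. The paper's description scheme is richer (transition segments, pitch contours, learned thresholds), so this is only an assumed, simplified stand-in.

```python
# Hedged sketch: envelope-based intra-note segmentation into attack / sustain /
# release using simple threshold rules (illustrative, not the paper's scheme).
import numpy as np

def rms_envelope(x, frame=512, hop=256):
    return np.array([np.sqrt(np.mean(x[i:i + frame] ** 2))
                     for i in range(0, len(x) - frame, hop)])

def segment_note(env, attack_frac=0.8, release_frac=0.5):
    """Return (attack_end, release_start) as frame indices (illustrative rule)."""
    peak = env.max()
    attack_end = int(np.argmax(env >= attack_frac * peak))
    after = np.where(env[attack_end:] <= release_frac * peak)[0]
    release_start = attack_end + int(after[0]) if len(after) else len(env) - 1
    return attack_end, release_start

# toy note: fast attack, short sustain, exponential decay
fs = 22050
t = np.arange(int(0.8 * fs)) / fs
amp = np.minimum(t / 0.05, 1.0) * np.exp(-np.maximum(t - 0.4, 0) / 0.1)
x = amp * np.sin(2 * np.pi * 330 * t)
print("attack ends / release starts at frames:", segment_note(rms_envelope(x)))
```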

F-5 Application of Block-Based Physical Modeling for Digital Sound Synthesis of String Instruments
Stefan Petrausch, Rudolf Rabenstein, University of Erlangen-Nuremberg - Erlangen, Germany
Block-based physical modeling is a recently introduced method for digital sound synthesis via the simulation of physical models. The method follows a "divide-and-conquer" approach, where the elements are individually modeled and discretized, while their interaction topology is implemented separately. In this paper the application of this approach to string instruments is presented. The string is modeled with the functional transformation method, which preserves the full variety of strings through a direct link from the physical string parameters to the algorithm. The excitation of the string is modeled separately with wave digital filters. Thanks to the block-based approach it is possible to simulate all kinds of string instruments, e.g., piano, guitar, and violin, with the same piece of code for the strings.
Convention Paper 6365
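To illustrate the general idea of a discretized string model driven by a separate excitation block, here is a far simpler physical model than the functional transformation method used in the paper: a Karplus-Strong style plucked string, where a noise burst excites a damped delay loop. It is explicitly not the authors' algorithm, only a minimal example of the string-plus-excitation decomposition.

```python
# A much simpler physical-model string than the paper's functional
# transformation method: a Karplus-Strong style digital waveguide, shown only
# to illustrate separating the string model from its excitation.
import numpy as np

def pluck_string(f0, fs=44100, duration=1.0, damping=0.996):
    delay = int(round(fs / f0))                  # loop length sets the pitch
    line = np.random.uniform(-1, 1, delay)       # excitation block: noise "pluck"
    out = np.zeros(int(duration * fs))
    for n in range(len(out)):
        out[n] = line[n % delay]
        # averaging filter in the feedback loop: loss plus lowpass (string damping)
        line[n % delay] = damping * 0.5 * (line[n % delay] + line[(n + 1) % delay])
    return out

y = pluck_string(196.0)                          # roughly a G3 string
print("synthesized", len(y), "samples, peak", np.abs(y).max())
```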

F-6 Frequency Modulation Tone Matching Using a Fuzzy Clustering Evolution Strategy
Thomas J. Mitchell, Charles W. Sullivan, University of the West of England - Bristol, UK
Frequency modulation parameter estimation has provided a continual challenge to researchers since its application to audio synthesis over thirty years ago. Recent research has made use of basic evolutionary optimization algorithms to evolve sounds produced by nonstandard frequency modulation arrangements. In contrast, this paper utilizes recent advances in multimodal evolutionary optimization to perform dynamic-sound matching with traditional frequency modulation arrangements. In doing so, a technique is developed that is not synthesizer dependent and provides the potential for alternative methods of synthesis control.
Convention Paper 6366
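A toy version of the matching loop: a two-operator FM tone and a bare (1+1) evolution strategy minimizing a spectral error against a target tone. The paper's fuzzy-clustering multimodal strategy and dynamic (time-varying) tone matching are not reproduced here; the parameter ranges and step sizes below are assumptions.

```python
# Toy sketch of FM parameter matching with a simple (1+1) evolution strategy;
# the paper's fuzzy-clustering multimodal ES is not reproduced.
import numpy as np

fs, dur = 16000, 0.25
t = np.arange(int(fs * dur)) / fs

def fm_tone(fc, fm, index):
    """Simple 2-operator FM: carrier fc, modulator fm, modulation index."""
    return np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

def spectral_error(a, b):
    A = np.abs(np.fft.rfft(a * np.hanning(len(a))))
    B = np.abs(np.fft.rfft(b * np.hanning(len(b))))
    return np.mean((A - B) ** 2)

rng = np.random.default_rng(0)
target = fm_tone(440.0, 220.0, 3.0)              # "unknown" tone to match

x = np.array([300.0, 300.0, 1.0])                # initial guess [fc, fm, index]
sigma = np.array([50.0, 50.0, 0.5])              # mutation step sizes
best = spectral_error(fm_tone(*x), target)
for _ in range(2000):
    cand = x + sigma * rng.standard_normal(3)
    err = spectral_error(fm_tone(*cand), target)
    if err < best:                               # keep the offspring if it improves
        x, best = cand, err
print("estimated [fc, fm, index]:", np.round(x, 2), "error:", best)
```

A single hill-climbing strategy like this gets trapped in local optima of the highly multimodal FM error surface, which is exactly the problem that motivates the multimodal, clustering-based strategy described in the abstract.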

F-7 Automated Wavetable Synthesizer Testing
Antti Eronen, Matti Hämäläinen, Nokia Research Center - Tampere, Finland
Due to the nature of synthetic audio standards such as MIDI and Downloadable Sounds (DLS), testing of desired synthesizer behavior or standard conformance is not feasible with bit-exact test vectors. Instead, one can use signal analysis to determine whether the synthesizer output follows the standard within the given tolerances. Testing involves measuring the frequency, timing, and amplitude characteristics of the wavetable synthesizer output. In this paper a fully automated wavetable synthesizer test system is described. New signal-analysis based testing methods were developed for measuring the rendering precision of MIDI and instrument articulation parameters. The automated test system consists of a PC host, which controls the test execution, stores the test case parameters, runs the signal analysis, and generates the test report. A modular interface connects the test system to the MIDI synthesizer allowing testing to be done on different hardware platforms using a single set of test cases. Currently, 62 different test groups for SP-MIDI, Mobile DLS, and Mobile XMF standard conformance testing have been implemented. This paper demonstrates that the standard conformance of a Mobile XMF implementation can be tested accurately with objective signal analysis methods that are simple to implement without relying on extensive subjective evaluations that are more unpredictable and expensive to repeat.
Convention Paper 6367
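One concrete example of the kind of signal-analysis check such a system automates, an assumption for illustration rather than one of the paper's 62 test groups: estimate the fundamental of a rendered note and verify it against the pitch implied by the MIDI note number, within a tolerance expressed in cents.

```python
# Hedged sketch of a tuning-conformance check: compare the measured fundamental
# of rendered synthesizer output against the expected MIDI pitch.
import numpy as np

def midi_to_hz(note):
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def estimate_f0(x, fs):
    """Coarse F0 estimate: highest peak of the magnitude spectrum."""
    X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs[np.argmax(X)]

def check_tuning(rendered, fs, midi_note, tol_cents=10.0):
    f0 = estimate_f0(rendered, fs)
    expected = midi_to_hz(midi_note)
    cents = 1200.0 * np.log2(f0 / expected)
    return abs(cents) <= tol_cents, f0, cents

# stand-in for actual synthesizer output: a pure tone at MIDI note 69 (A4)
fs = 44100
x = np.sin(2 * np.pi * 440.0 * np.arange(fs) / fs)
print(check_tuning(x, fs, 69))
```

Timing and amplitude checks of the kind the abstract mentions follow the same pattern: render a known test case, measure the relevant property from the audio, and compare it against the tolerance the standard allows.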

F-8 An Approach to Expressive Music Performance Modeling
Rafael Ramirez, Amaury Hazan, Pompeu Fabra University - Barcelona, Spain
We describe an approach for generating expressive music performances of monophonic jazz melodies. The system consists of a melodic transcription component, which extracts a set of acoustic features from monophonic recordings; a machine learning component, which induces an expressive transformation model from the extracted acoustic features; and a melody synthesis component, which generates expressive monophonic output from inexpressive melody descriptions using the induced expressive transformation model.
Convention Paper 6368
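A minimal sketch of the induction step under strong simplifications: per-note descriptors mapped to expressive duration ratios with ordinary least squares on synthetic data. The paper induces its transformation model from real recordings with machine learning; the feature set, data, and linear model below are invented purely for illustration.

```python
# Minimal sketch of inducing an expressive transformation model: per-note
# descriptors -> duration stretch ratio, fit with ordinary least squares on
# synthetic data (illustrative only, not the paper's learning method).
import numpy as np

rng = np.random.default_rng(1)

# toy per-note features: [pitch (MIDI), nominal duration (s), beat position]
X = np.column_stack([
    rng.integers(55, 80, 200),
    rng.uniform(0.1, 1.0, 200),
    rng.integers(0, 4, 200),
])
# toy "observed" expressive duration ratios (stand-in for aligned recordings)
y = 1.0 + 0.02 * (X[:, 0] - 67) - 0.1 * (X[:, 1] - 0.5) + 0.05 * rng.standard_normal(200)

# induce a linear transformation model: predicted ratio = [features, 1] @ w
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# apply the induced model to a note from an inexpressive melody description
new_note = np.array([69, 0.5, 2, 1.0])
print("predicted duration ratio:", new_note @ w)
```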


 
©2005 Audio Engineering Society, Inc.