AES Conventions and Conferences


v3.1, 20040329, ME

Session F Sunday, May 9 10:00 h–13:00 h
ANALYSIS AND SYNTHESIS OF SOUND—PART 2
(focus on synthesis)
Chair: Matti Karjalainen, Helsinki University of Technology, Espoo, Finland

F-1 Some Clues to Build a Sound Analysis Relevant to Hearing—Laurent Millot, ENS Louis Lumiere, Noisy le Grand, France
The analysis tools used in research laboratories, for sound synthesis, by musicians, or by sound engineers can be rather different. A discussion of the assumptions and limitations of these tools allows us to propose a tool that is as relevant and versatile as possible for all sound practitioners, with one major aim: it must be possible to listen to each element of the analysis, because hearing is the final reference tool. This tool should also be used, in the future, to reinvestigate the definition of sound (or acoustics) on the basis of recent work on musical instrument modeling, speech production, and loudspeaker design. Audio illustrations will be given.
F-2 Synthesizing Coupled-String Musical Instruments by a Multichannel Recurrent Network—Wei-Chen Chang, Alvin W. Y. Su, National Cheng Kung University, Tainan, Taiwan
Struck string instruments such as pianos usually have groups of strings terminated at common bridges. Because of the strong coupling between these strings, the produced tones exhibit highly complex amplitude-modulation patterns. It is therefore difficult to determine synthesis-model parameters such that the synthesized tones match recorded tones. In this paper a multichannel recurrent network is proposed, based on three previous works: the coupled-string model, the commuted piano synthesis method, and the IIR synthesis method. The synthesis parameters are extracted automatically by a neural-network training algorithm, without knowledge of the physical properties of the instruments. Computer simulations show encouraging results.
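The coupling phenomenon the abstract describes can be illustrated without the proposed network: summing two slightly detuned, decaying partials already yields a beating, non-monotonic amplitude envelope. A minimal sketch, in which all frequencies, detunings, and decay times are illustrative assumptions rather than values from the paper:

```python
import math

def coupled_string_tone(t, f1=440.0, f2=440.8, a1=1.0, a2=0.6,
                        tau1=0.5, tau2=1.5):
    """Sum of two slightly detuned, exponentially decaying partials.

    Illustrative stand-in for a coupled string pair: real detunings and
    decay rates depend on the strings and the bridge coupling.
    """
    s1 = a1 * math.exp(-t / tau1) * math.sin(2 * math.pi * f1 * t)
    s2 = a2 * math.exp(-t / tau2) * math.sin(2 * math.pi * f2 * t)
    return s1 + s2

# A peak envelope over 20 ms windows reveals the amplitude modulation:
sr = 8000
samples = [coupled_string_tone(n / sr) for n in range(sr)]  # 1 second
win = sr // 50
envelope = [max(abs(x) for x in samples[i:i + win])
            for i in range(0, len(samples), win)]
```

Because the two partials drift in and out of phase while decaying at different rates, the envelope dips and then recovers instead of falling monotonically, which is the kind of pattern a synthesis model must reproduce.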
F-3 Nonlinearity Modeling for Spectral Pattern Recognition in Piano Chords—Luis Ortiz-Berenguer, Javier Casajús-Quirós, Universidad Politécnica de Madrid, Madrid, Spain
The nonlinear behavior of piano strings is an important issue when chords are to be recognized using spectral patterns. To calculate the spectral patterns and masks used in the recognition algorithm, it is necessary to model the effects of this nonlinearity. A model using intermodulation products has proved to give good results. To validate the model, we recorded 11 pianos and analyzed the “A” note of octaves 1 to 7, struck with four different forces. The basis of this model is presented in this paper.
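For illustration, the intermodulation grid that such a model draws on can be enumerated directly. The function below is a hypothetical helper, not the authors' algorithm; it simply lists the frequencies |m·f1 + n·f2| up to a chosen order:

```python
def intermodulation_products(f1, f2, max_order=3):
    """Frequencies |m*f1 + n*f2| for integers m, n with 0 < |m|+|n| <= max_order.

    Hypothetical helper: the paper's spectral-pattern model is not
    reproduced here; this only enumerates the classic intermodulation grid
    that a nonlinearity can generate from two input partials.
    """
    freqs = set()
    for m in range(-max_order, max_order + 1):
        for n in range(-max_order, max_order + 1):
            order = abs(m) + abs(n)
            if 0 < order <= max_order:
                f = abs(m * f1 + n * f2)
                if f > 0:
                    freqs.add(round(f, 6))
    return sorted(freqs)

# Two partials of a stretched piano A4 (illustrative values, not measured):
products = intermodulation_products(440.0, 881.0, max_order=2)
```

Note how sum and difference components (881 − 440 = 441 Hz, 440 + 881 = 1321 Hz) fall close to, but not exactly on, the harmonic series, which is why they matter for spectral pattern matching.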
F-4 Waveform Synthesis Using Bézier Curves with Control Point Modulation—Bob Lang, University of the West of England, Bristol, UK
Bézier curves are frequently used in graphical applications and drawing packages. In this paper the author presents a technique for direct sound-wave synthesis using Bézier curves. The technique is further extended by modulating the positions of the Bézier control points as synthesis takes place, creating waveforms with complex harmonic structures. The paper also outlines how the technique can be used to build a musical instrument (synthesizer).
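A minimal sketch of the idea, assuming a cubic Bézier whose y-values are sampled directly as one waveform cycle and whose inner control point drifts from cycle to cycle; all parameter values are illustrative, not the author's:

```python
import math

def cubic_bezier(p0, p1, p2, p3, t):
    """Bernstein form of a cubic Bezier curve evaluated at t in [0, 1]."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

def bezier_cycle(ctrl, samples_per_cycle):
    """Sample one waveform cycle directly from the curve's y-values."""
    p0, p1, p2, p3 = ctrl
    return [cubic_bezier(p0, p1, p2, p3, n / samples_per_cycle)
            for n in range(samples_per_cycle)]

# Control-point modulation: drift an inner control point a little each
# cycle, so successive cycles differ and the harmonic content evolves.
wave = []
ctrl = [0.0, 1.0, -1.0, 0.0]  # zero-valued endpoints keep cycles continuous
for k in range(100):
    ctrl[1] = 1.0 + 0.5 * math.sin(2 * math.pi * k / 100)
    wave.extend(bezier_cycle(ctrl, 64))
```

Keeping both endpoint control points at zero makes consecutive cycles join without discontinuities, so the modulation changes the harmonic structure without adding clicks.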
F-5 A Highly Optimized Nonlinear Least Squares Technique for Sinusoidal Signal Analysis: From O(K²N) to O(N log(N))—Wim D’haes, University of Antwerp, Antwerp, Belgium
In the field of sinusoidal modeling, two types of least squares amplitude estimation methods can be distinguished. The first group estimates the complex amplitude of each sinusoid iteratively. Although these methods cannot resolve overlapping frequency responses, they are used frequently because of their O(N log(N)) computational complexity. By contrast, methods that compute all amplitudes simultaneously can resolve overlapping frequency responses, but their computational complexity scales cubically with the number of sinusoidal components. In this work a method is proposed that computes all amplitudes simultaneously while retaining O(N log(N)) complexity. This is achieved by explicitly including a window with a band-limited frequency response in the least squares derivation, which yields a band-diagonal system of equations that can be solved in linear time. Since overlapping frequency responses are allowed, an iterative method must be used to optimize the frequencies, resulting in a nonlinear least squares technique. A commonly used technique is Newton optimization, which requires computation of the gradient and the Hessian matrix; here, too, the same computational gain is realized by applying the same methodology.
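For contrast, the "all amplitudes at once" baseline that the paper accelerates can be sketched with dense normal equations. The band-limited-window construction that makes the system band diagonal, and hence solvable in linear time, is not reproduced here; this is only the naive simultaneous estimator, with illustrative test frequencies:

```python
import math

def ls_amplitudes(signal, freqs, sr):
    """Simultaneous least squares estimate of cos/sin amplitudes at known
    frequencies, via dense normal equations and Gaussian elimination.

    Naive baseline only: the paper's band-diagonal O(N log(N)) derivation
    is not shown. Close, overlapping frequencies are still resolved because
    all amplitudes are solved for jointly.
    """
    n = len(signal)
    basis = []
    for f in freqs:
        w = 2 * math.pi * f / sr
        basis.append([math.cos(w * t) for t in range(n)])
        basis.append([math.sin(w * t) for t in range(n)])
    k = len(basis)
    # Normal equations A x = b with A[i][j] = <basis_i, basis_j>.
    A = [[sum(bi[t] * bj[t] for t in range(n)) for bj in basis] for bi in basis]
    b = [sum(bi[t] * signal[t] for t in range(n)) for bi in basis]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            m = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    x = [0.0] * k
    for r in range(k - 1, -1, -1):
        s = sum(A[r][c] * x[c] for c in range(r + 1, k))
        x[r] = (b[r] - s) / A[r][r]
    return x  # [cos_amp, sin_amp] per frequency, in input order

# Recover the amplitudes of two closely spaced partials:
sr, n = 1000, 500
sig = [0.8 * math.sin(2 * math.pi * 100 * t / sr)
       + 0.3 * math.cos(2 * math.pi * 103 * t / sr) for t in range(n)]
amps = ls_amplitudes(sig, [100.0, 103.0], sr)
```

Because the two partials are only 3 Hz apart over a 0.5 s window, their frequency responses overlap; an independent per-sinusoid estimator would bias both amplitudes, whereas the joint solve recovers them.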
F-6 Partial Tracking Based on Future Trajectories Exploration—Mathieu Lagrange¹, Sylvain Marchand², Jean-Bernard Rault¹
¹France Telecom R&D, Cesson Sevigne cedex, France; ²University of Bordeaux, Bordeaux, France
This paper introduces a partial-tracking algorithm suitable for the sinusoidal modeling of polyphonic sounds. A new method, based on exploring possible extensions of the partials into future frames, is proposed to cope with missing or corrupted spectral data. Spectral peaks are allocated to a partial by considering possible trajectories through future frames, where frame hopping is allowed. A transition probability that accounts for missing or rejected peaks is proposed. The trajectory with the highest probability is identified, and the corresponding peak in the current frame is chosen to extend the partial.
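A much-simplified sketch of the idea: when extending a partial, prefer a current-frame peak that also has support one frame ahead, so dead-end trajectories are penalized. The scoring below is a hypothetical stand-in for the paper's transition probabilities, and frame hopping is omitted:

```python
def extend_partial(last_freq, frames, max_jump=30.0):
    """Choose a peak for the current frame using one frame of lookahead.

    Simplified sketch of future-trajectory exploration: among current-frame
    peak frequencies (Hz) within max_jump of the partial, prefer one that
    itself has a nearby continuation in the next frame. The paper's
    probabilistic scoring and frame hopping are not reproduced.
    """
    current = frames[0]
    future = frames[1] if len(frames) > 1 else []
    candidates = [f for f in current if abs(f - last_freq) <= max_jump]
    if not candidates:
        return None  # partial interrupted in this frame

    def score(f):
        # Distance to the partial plus distance to the best future peak;
        # a candidate with no future support pays the full max_jump penalty.
        ahead = min((abs(g - f) for g in future), default=max_jump)
        return abs(f - last_freq) + ahead

    return min(candidates, key=score)

# Two candidate peaks near a 440 Hz partial; only 446 Hz is supported
# by a peak in the following frame, so it wins despite 431 Hz existing.
peak = extend_partial(440.0, [[446.0, 431.0], [447.0]])
```

Looking ahead like this is what lets the tracker survive a frame with a missing or spurious peak instead of greedily latching onto the nearest candidate.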



(C) 2004, Audio Engineering Society, Inc.