AES Conventions and Conferences


v3.1, 20040329, ME

Session E Saturday, May 8 16:00 h–18:00 h
ANALYSIS AND SYNTHESIS OF SOUND—PART 1
(focus on analysis)
Chair: Oliver Hellmuth, Fraunhofer Institute for Integrated Circuits IIS, Erlangen, Germany

E-1 A Methodology for Detection of Melody in Polyphonic Musical Signals Rui Pedro Paiva, Teresa Mendes, Amílcar Cardoso, University of Coimbra, Coimbra, Portugal
In this paper we present a bottom-up method for melody detection in polyphonic music signals. Our approach is based on the assumption that the melodic line is often salient in terms of note intensity (energy). First, trajectories of the most intense harmonic groups are constructed. Then, note candidates are obtained by trajectory segmentation (in terms of frequency and energy variations). Notes that are too short, too low in energy, or octave-related are then eliminated. Finally, the melody is extracted by selecting the most important notes at each time, based on their intensity. We tested our method with excerpts from 12 songs encompassing several genres. In the songs where the solo stands out clearly, most of the melody notes were successfully detected. However, for songs where the melody is not that salient, the algorithm performed poorly. Nevertheless, the results are encouraging.
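The segmentation and intensity-based selection steps described in the abstract might be sketched as follows. This is not the authors' implementation: the thresholds and the (frequency, energy) candidate representation are illustrative assumptions, and the octave-related-note elimination step is omitted for brevity.

```python
# Hedged sketch of note segmentation plus intensity-based selection.
# Frame-level pitch candidates are assumed given as (frequency_hz,
# energy) pairs; real systems derive them from harmonic grouping.

MIN_NOTE_FRAMES = 3      # assumed threshold: drop very short notes
MIN_ENERGY_RATIO = 0.1   # assumed threshold: drop low-energy notes

def segment_notes(trajectory):
    """Split a (freq, energy) trajectory into notes wherever the
    frequency jumps by more than roughly a semitone (~6%)."""
    notes, current = [], [trajectory[0]]
    for prev, cur in zip(trajectory, trajectory[1:]):
        if abs(cur[0] - prev[0]) / prev[0] > 0.06:
            notes.append(current)
            current = []
        current.append(cur)
    notes.append(current)
    return notes

def select_melody(trajectory):
    """Keep notes that are long and loud enough; return one
    (mean_freq, mean_energy) summary per surviving note."""
    peak = max(e for _, e in trajectory)
    melody = []
    for note in segment_notes(trajectory):
        energy = sum(e for _, e in note) / len(note)
        if len(note) >= MIN_NOTE_FRAMES and energy >= MIN_ENERGY_RATIO * peak:
            freq = sum(f for f, _ in note) / len(note)
            melody.append((freq, energy))
    return melody
```

On a toy trajectory with two loud notes and one very quiet one, only the loud notes survive the filtering.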
E-2 Octave-Error Proof Timbre-Independent Estimation of Fundamental Frequency of Musical Sounds Alicja Wieczorkowska, Jakub Wróblewski, Polish-Japanese Institute of Information Technology, Warsaw, Poland
Estimation of fundamental frequency (so-called pitch tracking) can be performed using various methods. However, all of these algorithms are susceptible to errors, especially octave errors. To avoid these errors, pitch trackers are usually tuned to particular musical instruments. Problems therefore arise when one wants to extract the fundamental frequency independently of timbre. Our goal is to develop a method of fundamental frequency extraction that works correctly for any timbre. We propose a multi-algorithm approach in which fundamental frequency estimation is based on results from both a range of frequency-tracking methods and additional sound parameters. We also propose frequency-tracking methods based on direct analysis of the signal and its spectrum. We explain the structure of our estimator and the results obtained for various musical instruments and sound articulation techniques.
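One simple way to pool estimates from several trackers and resolve octave errors, in the spirit of (but not taken from) the abstract, is a voting scheme: octave-related estimates are grouped together, and the winning group fixes the octave by majority. The 3% tolerance and the two-octave search range here are illustrative assumptions.

```python
from collections import Counter

# Hedged sketch of a multi-algorithm F0 voting scheme (not the
# authors' method): pool octave-related estimates, then pick the
# octave most trackers agreed on within the winning pool.

def octave_ratio(a, b):
    """Return k such that a ~= b * 2**k (within 3%), or None."""
    for k in (-2, -1, 0, 1, 2):
        ref = b * 2 ** k
        if abs(a - ref) / ref < 0.03:
            return k
    return None

def fuse_f0(estimates):
    """Group octave-related estimates and return the most supported
    frequency, in the octave most trackers agreed on."""
    groups = []
    for f in estimates:
        for g in groups:
            if octave_ratio(f, g[0]) is not None:
                g.append(f)
                break
        else:
            groups.append([f])
    best = max(groups, key=len)
    octaves = Counter(octave_ratio(f, best[0]) for f in best)
    k, _ = octaves.most_common(1)[0]
    return best[0] * 2 ** k
```

If three trackers report roughly 440 Hz and one makes an octave error (880 Hz), the fused estimate stays at 440 Hz.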
E-3 Further Steps towards Drum Transcription of Polyphonic Music Christian Dittmar, Christian Uhle, Fraunhofer Institute for Digital Media Technology, Ilmenau, Germany
This paper presents a new method for the detection and classification of unpitched percussive instruments in real-world musical signals. The derived information is an important prerequisite for the creation of a musical score, i.e., automatic transcription, and for the automatic extraction of semantically meaningful metadata, e.g., tempo and musical meter. The proposed method applies independent subspace analysis using non-negative independent component analysis and principles of prior subspace analysis. An important extension of prior subspace analysis is the identification of the frequency subspaces of percussive instruments from the signal itself. These frequency subspaces serve as the basis for detecting percussive events and subsequently classifying the occurring instruments. Results are reported on 40 manually transcribed test items.
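As a much-simplified stand-in for the non-negative ICA / prior subspace analysis described above, one can project a magnitude spectrogram onto fixed per-instrument frequency templates and peak-pick the resulting activation curves. This sketch assumes NumPy; the templates and threshold are illustrative, and unlike the paper's method, the subspaces are not learned from the signal.

```python
import numpy as np

# Hedged, simplified stand-in for subspace-based drum detection:
# project the spectrogram onto fixed frequency templates (one per
# instrument) and report frames where an activation first rises
# above a threshold.

def activations(spectrogram, templates):
    """spectrogram: (bins, frames); templates: (instruments, bins).
    Returns an (instruments, frames) array of activation curves."""
    return templates @ spectrogram

def detect_onsets(curve, threshold):
    """Frames where the activation first rises above the threshold."""
    above = curve > threshold
    return [t for t in range(1, len(curve)) if above[t] and not above[t - 1]]
```

On a toy spectrogram with low-band "kick" energy at frames 2 and 6 and high-band "hi-hat" energy at frame 4, each template's onset list recovers the corresponding frames.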
E-4 Generation of Musical Scores of Percussive Unpitched Instruments from Automatically Detected Events Christian Uhle, Christian Dittmar, Fraunhofer Institute for Digital Media Technology, Ilmenau, Germany
This paper addresses the generation of a musical score for percussive unpitched instruments. A musical event is defined as the occurrence of a sound of a musical instrument; the presented method is restricted to events of percussive instruments without determinate pitch. Events are detected in the audio signal and classified into instrument classes, the temporal positions of the events are quantized on a tatum grid, the musical meter is estimated, and preparatory beats are identified. Identifying rhythmic patterns by the frequency of their occurrence enables robust tempo identification and gives valuable cues for positioning the bar lines using musical knowledge.
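The tatum-grid quantization step might be sketched as follows. This is not the authors' code: real systems estimate the tatum more robustly, and the 50–500 ms candidate range, 10 ms candidate grid, and 10 ms tolerance used here are assumptions.

```python
# Hedged sketch of tatum estimation and onset quantization. The tatum
# is the smallest regular pulse underlying the onsets; onsets are then
# snapped to integer positions on that grid.

def estimate_tatum(onsets, tol=0.01):
    """Return the largest candidate period (50-500 ms, 10 ms grid) that
    divides every inter-onset interval to within `tol` seconds."""
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    for cand_ms in range(500, 49, -10):
        cand = cand_ms / 1000.0
        if all(min(i % cand, cand - i % cand) <= tol for i in iois):
            return cand
    return None

def quantize_to_tatum(onsets, tatum, phase=0.0):
    """Snap onset times (seconds) to integer tatum-grid positions."""
    return [round((t - phase) / tatum) for t in onsets]
```

For onsets at 0, 0.25, 0.5, and 1.0 seconds the estimated tatum is 250 ms, and slightly jittered onsets snap back to grid positions 0, 1, 2, and 4.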



(C) 2004, Audio Engineering Society, Inc.