AES Conventions and Conferences


v3.0, 20040325, ME

Session Z8 Monday, May 10 15:30 h–17:00 h
Posters: Audio Recording and Reproduction & Archiving and Content Management

Audio Recording and Reproduction
Z8-1 The History of the Tonmeister Recording Technique in Russia—Pavel Ignatov, St. Petersburg, Russia
The history of sound recording in Russia dates back to the end of the 19th century, and the first sound recording studios were established in the 1920s and 1930s. Although the technical facilities of the time were quite primitive, outstanding tonmeisters such as Khustov, Grossman, and Gakhlin made remarkable recordings of classical music and live concerts. The main feature of the second half of the 20th century (1950s–1980s) was the major expansion of television, radio broadcasting, and recording studios: 292 large television centers and radio studios had been built by the 1980s. Today, new digital technologies and surround sound systems are part of tonmeister practice, and masters such as Shugal, Vinogradov, Khondrashin, Dinov, and many others are creating new methods of digital sound recording. This paper surveys the main periods in the development of tonmeister technology.
Z8-2 Optimization of Microphone Setup for Symphonic Orchestra Recordings During Rehearsal—Witold Mickiewicz, Technical University of Szczecin, Szczecin, Poland
Many symphonic orchestras use a nonoptimal two-microphone setup for rehearsal recordings. These recordings serve archiving purposes and help evaluate and improve the artistic skills of the whole orchestra and its members, so good resolution of the stereo image during reproduction is needed. The choice of the right microphone setup can be based on the geometric parameters of the orchestra podium and the acoustical properties of the rehearsal hall. The theoretical considerations presented in this paper are supported by real recordings made in the hall of the Szczecin Philharmonic, Poland, and by listening tests with orchestra members.
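The geometric consideration the abstract refers to can be illustrated with a minimal sketch (an assumption for illustration only; the paper's actual optimization procedure is not specified here): the stereo pair must cover the angle subtended by the podium at the microphone position.

```python
import math

def required_recording_angle(podium_width_m, mic_distance_m):
    """Full angle (in degrees) that the orchestra podium subtends at the
    microphone pair, i.e., the recording angle the setup must cover so the
    whole orchestra maps onto the stereo image."""
    return 2 * math.degrees(math.atan((podium_width_m / 2) / mic_distance_m))

# Illustrative numbers (not from the paper): an 18 m wide podium with the
# main pair placed 6 m from the front desks subtends roughly 113 degrees.
angle = required_recording_angle(18.0, 6.0)
```

A wider podium or a closer pair increases the required recording angle, which in turn constrains the choice of microphone directivity and pair spacing.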
Z8-3 3-D Audio Acquisition and Reproduction System Using Multiple Microphones on a Rigid Sphere—Taejin Lee1, Daeyoung Jang1, Kyeongok Kang1, Jinwoong Kim1, Daegwon Jeong2, Hareo Hamada3
1Electronics and Telecommunications Research Institute, Daejeon, Korea;
2Hankuk Aviation University, Goyang-city, Korea;
3Tokyo Denki University, Tokyo, Japan
Generally, a dummy-head microphone is used for 3-D audio acquisition; because of its human-like shape, it captures good spatial images. However, its shape and size restrict its widespread use. In this paper we propose a 3-D audio acquisition and reproduction method using multiple microphones mounted on a rigid sphere. We place five microphones at selected points on the sphere and generate signals for headphone, stereo, stereo-dipole, 4-channel, and 5-channel reproduction environments. Subjective experiments with 4-channel and 5-channel loudspeaker configurations show that front/back confusion, a common limitation of 3-D audio reproduction systems based on dummy-head microphones, can be reduced dramatically.
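Placing microphones at fixed points on a rigid sphere amounts to converting spherical coordinates to Cartesian positions. The sketch below assumes a hypothetical layout (four equatorial capsules plus one on top, 9 cm radius); the paper's actual "special points" are not specified in the abstract.

```python
import math

def sphere_point(radius, azimuth_deg, elevation_deg):
    """Cartesian (x, y, z) position of a capsule on a rigid sphere,
    given azimuth and elevation in degrees."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (radius * math.cos(el) * math.cos(az),
            radius * math.cos(el) * math.sin(az),
            radius * math.sin(el))

# Hypothetical 5-capsule layout: four on the equator, one at the pole.
RADIUS = 0.09  # meters; an assumed sphere size
layout = [sphere_point(RADIUS, az, 0) for az in (45, 135, 225, 315)]
layout.append(sphere_point(RADIUS, 0, 90))
```

Because every capsule sits on the same rigid sphere, the sphere's acoustic shadowing supplies the direction-dependent cues that a dummy head would otherwise provide.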

Archiving and Content Management
Z8-4 BeatBank: An MPEG-7 Compliant Query by Tapping System—Gunnar Eisenberg, Jan-Mark Batke, Thomas Sikora, Technical University of Berlin, Berlin, Germany
A Query by Tapping system is a multimedia database containing rhythmic metadata descriptions of songs. This paper presents such a system, called BeatBank, which lets the user formulate a query by tapping the rhythm of the requested song's melody line on a MIDI keyboard or an e-drum. The query is converted into an MPEG-7 compliant representation. The search process takes only the rhythmic aspects of the melodies into account by comparing values of the MPEG-7 Beat Description Scheme. An efficiently computable similarity measure is presented that enables the comparison of two database entries. The system works in real time and performs the search online, computing and presenting a new result list after every tap the user makes.
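One simple way to compare tapped rhythms, sketched below, is to reduce tap times to tempo-normalized inter-onset intervals and take their mean absolute difference. This is an illustrative stand-in, not the paper's MPEG-7 Beat similarity measure.

```python
def inter_onset_intervals(tap_times):
    """Convert absolute tap times (seconds) into inter-onset intervals,
    the basic representation of a tapped rhythm."""
    return [b - a for a, b in zip(tap_times, tap_times[1:])]

def rhythm_distance(query_taps, entry_taps):
    """Toy similarity measure (an assumption, not the paper's): mean
    absolute difference of tempo-normalized inter-onset intervals, so the
    same rhythm tapped at a different tempo scores a distance of zero."""
    q = inter_onset_intervals(query_taps)
    e = inter_onset_intervals(entry_taps)
    n = min(len(q), len(e))
    qn = [x / sum(q[:n]) for x in q[:n]]  # normalize out the tempo
    en = [x / sum(e[:n]) for x in e[:n]]
    return sum(abs(a - b) for a, b in zip(qn, en)) / n
```

Because each new tap only appends one interval, such a measure can be recomputed cheaply after every tap, which is what makes an online, per-tap result list feasible.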
Z8-5 A Query by Humming System Using MPEG-7 Descriptors—Jan-Mark Batke, Gunnar Eisenberg, Philipp Weishaupt, Thomas Sikora, Technical University of Berlin, Berlin, Germany
Query by Humming (QBH) is a method for searching in a multimedia database system containing metadata descriptions of songs. The database can be searched by hummed queries; this means that a user can hum a melody into a microphone that is connected to the computer hosting the system. The QBH system searches the database for songs that are similar to the input query and presents the result to the user as a list of matching songs. This paper presents a modular QBH system using MPEG-7 descriptors in all processing stages. Due to the modular design all components can easily be substituted. The system is evaluated by changing parameters defined by the MPEG-7 descriptors.
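A common approach in QBH systems, shown here as a minimal sketch (the paper's own matching stage is not detailed in the abstract), is to reduce the hummed melody to an up/down/repeat contour and rank songs by edit distance to the query contour.

```python
def contour(midi_pitches):
    """Reduce a pitch sequence to its melodic contour:
    'U' (up), 'D' (down), or 'R' (repeat) for each step."""
    return ["U" if b > a else "D" if b < a else "R"
            for a, b in zip(midi_pitches, midi_pitches[1:])]

def edit_distance(a, b):
    """Levenshtein distance between two contour sequences, using a
    rolling one-row dynamic-programming table."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[len(b)]
```

Working on the contour rather than absolute pitches makes the match transposition-invariant, which matters because users rarely hum in the song's original key.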
Z8-6 Music Archive Metadata Processing Based on Flow Graphs—Bozena Kostek, Andrzej Czyzewski, Gdansk University of Technology, Gdansk, Poland
The paper addresses the capabilities that intelligent Web search tools should offer in order to respond properly to a user's music information retrieval needs. An advanced query algorithm was engineered that derives inference rules from flow graphs for semantic data processing. This concept, introduced recently by Pawlak, is used for mining knowledge in databases. The database search engine utilizes knowledge acquired in advance and stored in flow graphs to enable searching in musical repositories. The results show that with the implemented method the search matches are ranked optimally, so metadata related to recorded sound can be retrieved efficiently with this algorithm.
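In Pawlak's flow graphs, each edge carries a throughflow from which rule-confidence measures are derived: certainty (flow out of the premise node) and coverage (flow into the conclusion node). The sketch below uses invented metadata attribute values purely for illustration; the paper's actual attributes and graph are not given in the abstract.

```python
# Toy Pawlak-style flow graph over metadata attribute values.
# Keys are (premise, conclusion) edges; values are throughflow counts.
# All names and numbers here are illustrative assumptions.
flow = {
    ("genre=baroque", "composer=Bach"): 60,
    ("genre=baroque", "composer=Vivaldi"): 40,
    ("genre=romantic", "composer=Chopin"): 80,
}

def certainty(x, y):
    """cer(x, y) = flow(x, y) / total flow out of x:
    how strongly premise x implies conclusion y."""
    outflow = sum(v for (a, _), v in flow.items() if a == x)
    return flow[(x, y)] / outflow

def coverage(x, y):
    """cov(x, y) = flow(x, y) / total flow into y:
    how much of y's evidence is explained by x."""
    inflow = sum(v for (_, b), v in flow.items() if b == y)
    return flow[(x, y)] / inflow
```

Rules with high certainty and coverage can then be used to rank search matches, which is one way a flow-graph knowledge base can drive the ordering of retrieval results.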
Z8-7 Nearest-Neighbor Generic Sound Classification with a WordNet-Based Taxonomy—Pedro Cano, Markus Koppenberger, Sylvain Le Groux, Julien Ricard, Nicolas Wack, Perfecto Herrera, Universitat Pompeu Fabra, Barcelona, Spain
Audio classification methods work well when fine-tuned to reduced domains, such as musical instrument classification or simplified sound-effects taxonomies, but they cannot currently offer the detail needed in general sound recognition. A real-world sound recognition tool would require thousands of classifiers, each specialized in distinguishing small details, and a taxonomy that represents the real world. We describe the use of WordNet, a semantic network that organizes real-world knowledge, as the taxonomy backbone. To avoid the huge number of classifiers otherwise needed to distinguish an ever-growing number of sounds, the recognition engine uses a nearest-neighbor classifier over a database of isolated sounds unambiguously linked to WordNet concepts.
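The nearest-neighbor scheme described above can be sketched in a few lines: each reference sound is a feature vector tagged with a concept, and a query inherits the concept of its closest reference. The feature vectors and synset-style labels below are invented for illustration; the paper's actual audio features are not specified in the abstract.

```python
import math

# Hypothetical reference database: (feature vector, WordNet-style concept).
# In a real system the vectors would be audio descriptors, not toy pairs.
database = [
    ((0.90, 0.10), "dog.n.01"),
    ((0.20, 0.80), "doorbell.n.01"),
    ((0.85, 0.20), "dog.n.01"),
]

def nearest_neighbor_concept(query):
    """1-NN classification: return the WordNet concept of the reference
    sound whose feature vector lies closest (Euclidean) to the query."""
    _, concept = min(database, key=lambda item: math.dist(item[0], query))
    return concept
```

Because the "model" is just the labeled database, adding a new sound class means adding examples linked to a WordNet concept, with no classifier retraining, which is the point of the approach described in the abstract.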


(C) 2004, Audio Engineering Society, Inc.