Last Updated: 20070320

P9 - Audio Recording and Reproduction
Sunday, May 6, 10:30 — 12:00
P9-1 Intelligent Editing of Studio Recordings with the Help of Automatic Music Structure Extraction—György Fazekas, Mark Sandler, Queen Mary, University of London - London, UK
In a complex sound editing project, automatic exploration and labeling of the semantic music structure can be highly beneficial as creative assistance. This paper describes the development of new tools that allow the engineer to navigate around a recorded project using a hierarchical music segmentation algorithm. Segmentation of musical audio into intelligible sections, such as choruses and verses, will be discussed briefly, followed by a short overview of the novel segmentation approach based on a timbre-based music representation. Popular sound-editing platforms were investigated to find an optimal way of implementing the necessary features. The integration of music segmentation and the development of a new navigation toolbar in Audacity, an open-source multitrack editor, will be described in more detail.
Convention Paper 7039 (Purchase now)
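The abstract does not disclose the authors' segmentation algorithm, but the general idea of boundary detection over a timbre-based representation can be sketched as follows. This is a minimal illustration under my own assumptions: a crude per-frame timbre descriptor (log energy in coarse spectral bands) and a simple novelty threshold on the distance between consecutive frames, not the hierarchical method of the paper.

```python
import numpy as np

def timbre_features(signal, frame_len=1024, n_bands=8):
    """Crude per-frame timbre descriptor: log energy in coarse spectral bands
    (a hypothetical stand-in for the paper's timbre representation)."""
    n_frames = len(signal) // frame_len
    window = np.hanning(frame_len)
    feats = np.empty((n_frames, n_bands))
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        mag = np.abs(np.fft.rfft(frame * window))
        bands = np.array_split(mag, n_bands)
        feats[i] = [np.log(np.sum(b ** 2) + 1e-12) for b in bands]
    return feats

def segment_boundaries(feats, threshold=10.0):
    """Mark a section boundary wherever consecutive timbre vectors
    differ strongly; returns frame indices where new sections start."""
    dist = np.linalg.norm(np.diff(feats, axis=0), axis=1)
    return np.where(dist > threshold)[0] + 1
```

On a signal whose first half is a steady tone and whose second half is noise, the single detected boundary falls at the timbral change, which is the behavior a section-level navigator would exploit.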
P9-2 Constant Complexity Reverberation for any Reverberation Time—Tobias May, Philips Research Laboratories - Eindhoven, The Netherlands and Carl-von-Ossietzky University Oldenburg, Oldenburg, Germany; Daniel Schobben, Philips Research Laboratories - Eindhoven, The Netherlands
A new artificial reverberation system is proposed, which is based on perceptually relevant components in reverberated audio and, as such, allows for a very efficient implementation. The system first separates the signal into transient and steady-state components. The transient signal is reverberated by using an efficient time-varying recursive filter while the steady-state signal is processed separately with an all-pass filter. In contrast to common reverberation systems, the complexity of the recursive filter is determined solely by the duration of the transients and is therefore independent of the reverberation time.
Convention Paper 7040 (Purchase now)
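The abstract does not specify the separation method or the time-varying recursive filter, so the sketch below only illustrates the processing split it describes: a naive energy-jump transient detector paired with textbook Schroeder-style comb and all-pass sections. The function names and the detection heuristic are my assumptions, not the authors' design.

```python
import numpy as np

def transient_mask(x, frame=256, ratio=4.0):
    """Naive transient/steady-state split: frames whose energy jumps
    sharply relative to the previous frame are labeled transient
    (a simple stand-in for the paper's separation stage)."""
    n = len(x) // frame
    e = np.array([np.sum(x[i * frame:(i + 1) * frame] ** 2) for i in range(n)])
    mask = np.zeros(n, dtype=bool)
    mask[1:] = e[1:] > ratio * (e[:-1] + 1e-12)
    return mask

def comb_reverb(x, delay, feedback):
    """Recursive comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (feedback * y[n - delay] if n >= delay else 0.0)
    return y

def schroeder_allpass(x, delay, g):
    """All-pass section: y[n] = -g*x[n] + x[n - delay] + g*y[n - delay]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y
```

In the paper's scheme the recursive stage runs only for the (short) transient portions, which is what makes the cost independent of the reverberation time; the steady-state path needs only the fixed-cost all-pass.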
P9-3 Outdoor and Indoor Recording for Motion Picture. A Comparative Approach on Microphone Techniques—Christos Goussios, Christos Sevastiadis, George Kalliris, Aristotle University of Thessaloniki - Thessaloniki, Greece
Several recording techniques and types of equipment are used in outdoor and indoor recording for motion pictures. The choices are usually characterized by subjectivity and by technical limitations irrelevant to the desired final sound quality. Our goal is to present the results of comparative recordings in order to answer problems that arise in everyday practice. Overhead and underneath booming and the use of wireless microphones are compared through third-octave frequency analysis.
Convention Paper 7041 (Purchase now)
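The comparison metric named in the abstract, third-octave frequency analysis, can be computed from an FFT by summing power in bands whose edges sit a sixth of an octave on either side of each center frequency. The sketch below is a minimal FFT-binning version under my own assumptions (center frequencies spaced as f_start * 2^(k/3)); dedicated fractional-octave filter banks are normally used in practice.

```python
import numpy as np

def third_octave_levels(x, sr, f_start=100.0, n_bands=10):
    """Band energy (dB) in third-octave bands with centers f_start * 2**(k/3);
    band edges are a sixth of an octave below/above each center."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    levels = []
    for k in range(n_bands):
        fc = f_start * 2 ** (k / 3)
        lo, hi = fc * 2 ** (-1 / 6), fc * 2 ** (1 / 6)
        band = spec[(freqs >= lo) & (freqs < hi)]
        levels.append(10 * np.log10(np.sum(band) + 1e-20))
    return np.array(levels)
```

Comparing two microphone placements then amounts to comparing their level vectors band by band, which exposes, for example, the high-frequency loss typical of underneath booming.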
P9-4 Semi-Automatic Mono to Stereo Up-mixing Using Sound Source Formation—Mathieu Lagrange, University of Victoria - Victoria, British Columbia, Canada; Luis Gustavo Martins, INESC Porto - Porto, Portugal; George Tzanetakis, University of Victoria - Victoria, British Columbia, Canada
In this paper we propose an original method for including spatial panning information when converting monophonic recordings to stereophonic ones. Sound sources are first identified using perceptually motivated clustering of spectral components. Correlations between these individual sources are then identified to build a mid-level representation of the analyzed sound. This allows the user to define panning information for the major sound sources, thus enhancing the stereophonic immersion quality of the resulting sound.
Convention Paper 7042 (Purchase now)
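The sound-source formation stage is the paper's contribution and cannot be reconstructed from the abstract, but the final step, rendering user-defined pan positions for already-separated sources into a stereo pair, can be sketched. The constant-power pan law below is my assumption for illustration; the abstract does not state which law the authors use.

```python
import numpy as np

def pan_sources(sources, pans):
    """Mix separated mono sources into stereo with constant-power panning.
    pans: values in [-1, 1], where -1 is hard left and +1 is hard right."""
    left = np.zeros_like(sources[0], dtype=float)
    right = np.zeros_like(sources[0], dtype=float)
    for src, p in zip(sources, pans):
        theta = (p + 1.0) * np.pi / 4.0  # map [-1, 1] -> [0, pi/2]
        left += np.cos(theta) * src
        right += np.sin(theta) * src
    return np.stack([left, right])
```

With this law a centered source (pan 0) appears at equal gain cos(pi/4) in both channels, so its perceived loudness stays constant as it moves across the stereo image.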