AES Conventions and Conferences

Monday, October 13, 1:30 pm – 4:30 pm
Session P: Archiving and Restoration


P-1 Subband Adaptive Filtering for Acoustic Noise Reduction
Hesu Huang, Chris Kyriakakis, University of Southern California, Los Angeles, CA, USA
Additive background noise and convolutive noise in the form of reverberation are two major types of noise in audio conferencing and hands-free telecommunication environments. To reduce these types of acoustic noise, we propose a two-step approach based on subband adaptive filtering techniques. In particular, we first reduce the additive noise using a Delayless Subband Adaptive Noise Cancellation (DSANC) technique, and then suppress the convolutive noise through Subband Adaptive Blind Deconvolution. Experiments show that our method has lower computational complexity and better performance than previously proposed methods.
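As a rough illustration of the adaptive-cancellation step described above, the sketch below implements a plain full-band LMS noise canceller in Python. It does not reproduce the paper's delayless subband variant or the blind deconvolution stage; all function and parameter names here are illustrative, not taken from the paper.

```python
import numpy as np

def lms_noise_canceller(primary, reference, taps=32, mu=0.01):
    """Classic full-band LMS adaptive noise canceller (illustrative).

    primary   -- signal-plus-noise input (e.g., the conference microphone)
    reference -- noise-correlated reference input
    Returns the error signal, i.e., the noise-reduced output.
    """
    w = np.zeros(taps)                            # adaptive FIR weights
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]   # newest sample first
        y = w @ x                                 # current noise estimate
        e = primary[n] - y                        # error = cleaned sample
        w += mu * e * x                           # LMS weight update
        out[n] = e
    return out
```

A subband implementation such as DSANC runs one short filter like this per analysis band, which is what reduces the computational cost relative to a single long full-band filter.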

P-2 Multi-Frequency Noise Removal Based on Reinforcement Learning
Ching-Shun Lin, Chris Kyriakakis, University of Southern California, Los Angeles, CA, USA
In this paper a neuro-fuzzy system is proposed to remove multifrequency noise from audio signals. Our method has two major elements. The first is a fuzzy cerebellar model articulation controller (FCMAC) used to quantize the signals. The second is based on the theory of stochastic real values (SRV) and is used to search for the optimal frequencies for the overall trained system. We present a DSP implementation of the SRV algorithm and report its performance in removing spectral noise buried in audio signals.

P-3 Music Identification with MPEG-7
Holger Crysandt, Aachen University, Aachen, Germany
Real-time music identification has attracted growing interest in recent years. Possible applications include monitoring a radio station to create a playlist and scanning network traffic for copyright-protected material. This paper presents a client-server application that performs such identification in real time with the help of MPEG-7. It explains how to define the similarity between two segments of music and evaluates its robustness to perceptual audio coding and filtering. It also introduces an indexing system that reduces the number of segments that must be compared to the query.
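To illustrate the kind of spectral comparison involved, here is a minimal sketch of a coarse spectral-envelope signature and a distance-based similarity score. This is a simplification in the spirit of MPEG-7's low-level audio descriptors, not the paper's actual descriptors or matching scheme; all names are illustrative.

```python
import numpy as np

def spectral_signature(samples, n_fft=512, n_bands=16):
    """Coarse log-spectral envelope of a short audio segment
    (a simplified stand-in for an MPEG-7-style descriptor)."""
    spec = np.abs(np.fft.rfft(samples, n_fft))            # magnitude spectrum
    bands = np.array_split(spec, n_bands)                 # group bins into bands
    return np.log1p(np.array([b.mean() for b in bands]))  # log band energies

def similarity(sig_a, sig_b):
    """Map the Euclidean distance between signatures into (0, 1];
    identical signatures score exactly 1.0."""
    return 1.0 / (1.0 + np.linalg.norm(sig_a - sig_b))
```

Such a compact signature is also what makes indexing practical: segments can be pre-binned by signature so that only nearby candidates are compared against a query.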

P-4 High Frequency Reconstruction by Linear Extrapolation
Chi-Min Liu, Wen-Chieh Lee, Han-Wen Hsu, National Chiao Tung University, Hsin-Chu, Taiwan
Existing digital audio signals are typically band-limited to fit the available storage and transmission capacity. MP3 tracks encoded with MPEG-1 Layer 3, for example, are restricted to an audio bandwidth of 16 kHz by the constraints of the format. This paper presents a method to reconstruct the lost high-frequency components of such band-limited signals. Both subjective and objective evaluations show improved quality. In particular, the Perceptual Evaluation of Audio Quality measure recommended by ITU-R Task Group 10/4 indicates a significant quality improvement.
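A minimal sketch of the idea, assuming a simple model in which the log-magnitude spectrum decays roughly linearly near the cutoff (the paper's actual method may differ; all names here are illustrative):

```python
import numpy as np

def extrapolate_high_band(mag, cutoff_bin, fit_width=32):
    """Extend a magnitude spectrum above cutoff_bin by fitting a
    straight line to the log magnitudes just below the cutoff and
    extrapolating that line upward."""
    log_mag = np.log(np.maximum(mag, 1e-12))              # avoid log(0)
    lo = np.arange(cutoff_bin - fit_width, cutoff_bin)    # bins used for the fit
    slope, intercept = np.polyfit(lo, log_mag[lo], 1)     # linear fit in log domain
    hi = np.arange(cutoff_bin, len(mag))                  # bins to reconstruct
    out = mag.copy()
    out[hi] = np.exp(slope * hi + intercept)              # extrapolated envelope
    return out
```

Fitting in the log-magnitude domain means the extrapolated band decays exponentially in linear magnitude, which is a common rough model for the upper spectrum of audio signals.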

P-5 Audio Storage and Networking in the Digital Age
Doug Perkins, Amnon Sarig, mSoft Inc., Woodland Hills, CA, USA
There are many ways to store audio files, but traditional methods are not conducive to file sharing, which is the goal of the modern networked facility. While still evolving, the world of digital audio storage offers the technology to quickly and safely share files between users, and even allows simultaneous users on different platforms to access audio. Learn how your facility can simplify the transition to the digital world, what products are on the leading edge, and what to look for when you are ready to make the leap.

P-6 The Requirement for Standards in Metadata Exchange for Networked Audio Environments
Nicolas Sincaglia, FullAudio, Chicago, IL, USA
In a networked audio environment, metadata not only provides a human interface but is used for the identification, organization, tracking, reporting, and selling of digitized sound recordings. Establishing an open industry standard for this data will enable the entire industry to streamline its ability to make content available. The result will be a more efficient and uniform exchange of data, ultimately enabling a more versatile and profitable music industry.

(C) 2003, Audio Engineering Society, Inc.