AES Conventions and Conferences


Last Updated: 20051013, mei

P1 - Analysis, Synthesis of Sound

Friday, October 7, 9:30 am — 11:30 am

Chair: Durand Begault, Audio Forensic Center, Charles M. Salter Associates - San Francisco, CA, USA

P1-1 Perceptual Modeling of Piano Tones
Brahim Hamadicharef, Emmanuel Ifeachor, University of Plymouth - Plymouth, Devon, UK
A modeling system for piano tones is presented. It fully automates the modeling process and includes three main stages: sound analysis, sound synthesis, and sound quality assessment. High quality piano sounds are analyzed in the time and frequency domains. The analysis results are then used to design filter models matching the string resonances and to create excitation signals, using an inverse filtering technique, for the excitation-filter synthesis model. The impact of each sound model parameter on the perceived sound quality is assessed using the Perceptual Evaluation of Audio Quality (PEAQ) algorithm. This helps optimize the DSP resource requirements for real-time implementation on multimedia PCs and FPGA-based hardware.
Convention Paper 6525 (Purchase now)
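The excitation-filter idea in the abstract can be pictured with a toy one-resonance model: run the recorded tone through the inverse of the resonance filter to obtain an excitation signal, then resynthesize by running that excitation back through the filter. The sketch below is only a minimal illustration with made-up coefficients, not the authors' system (which fits many resonances to real piano recordings and scores them with PEAQ):

```python
import math

def resonator(x, a1, a2):
    """All-pole resonator standing in for one string-resonance filter:
    y[n] = x[n] + a1*y[n-1] + a2*y[n-2]."""
    y, y1, y2 = [], 0.0, 0.0
    for s in x:
        out = s + a1 * y1 + a2 * y2
        y.append(out)
        y1, y2 = out, y1
    return y

def inverse_filter(y, a1, a2):
    """FIR inverse of `resonator`: recovers the excitation signal from
    the recorded tone, as in inverse-filtering analysis."""
    x, y1, y2 = [], 0.0, 0.0
    for out in y:
        x.append(out - a1 * y1 - a2 * y2)
        y1, y2 = out, y1
    return x

# Illustrative coefficients only: a decaying resonance from poles at radius 0.99.
r, theta = 0.99, math.pi / 4
a1, a2 = 2 * r * math.cos(theta), -r * r

tone = resonator([1.0] + [0.0] * 63, a1, a2)  # impulse-excited "string"
excitation = inverse_filter(tone, a1, a2)     # analysis: recover excitation
resynth = resonator(excitation, a1, a2)       # synthesis: excitation through model
```

Because the inverse is exact here, the resynthesized tone matches the original sample for sample; in the paper the match is instead judged perceptually, per parameter, with PEAQ.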

P1-2 Multichannel Audio Processing Using a Unified Domain Representation
Kevin Short, Ricardo Garcia, Michelle Daniels, Chaoticom Technologies - Andover, MA, USA
The Unified Domain representation for synchronized multichannel audio streams is introduced. This lossless and invertible transformation describes multiple streams of audio as a single frequency-domain magnitude component multiplied by a complex matrix encoding the spatial and phase relationships of each channel. Unified-domain analysis and signal processing techniques are presented for applications such as high-resolution frequency analysis, sound source separation, spatial psychoacoustic modeling, and low bit rate audio coding.
Convention Paper 6526 (Purchase now)
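One plausible reading of such a factorization: for each frequency bin, collect the bins of all channels, take the joint magnitude, and keep the unit-normalized complex vector as that bin's row of the spatial/phase matrix. The sketch below is an assumption, not Chaoticom's published algorithm, and the function names are invented; it only demonstrates that the split is lossless:

```python
import cmath, math

def dft(x):
    """Naive DFT, sufficient for a tiny demonstration."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def to_unified(channels):
    """Factor synchronized channel spectra into one joint-magnitude track
    plus a complex matrix of per-channel spatial/phase coefficients."""
    spectra = [dft(ch) for ch in channels]
    mags, coeffs = [], []
    for k in range(len(spectra[0])):
        bins = [s[k] for s in spectra]
        m = math.sqrt(sum(abs(b) ** 2 for b in bins))  # joint magnitude
        mags.append(m)
        coeffs.append([b / m if m > 1e-12 else 0j for b in bins])
    return mags, coeffs

def from_unified(mags, coeffs):
    """Invert the factorization: each channel bin is the joint magnitude
    times that channel's complex coefficient, so nothing is lost."""
    n_ch = len(coeffs[0])
    return [[mags[k] * coeffs[k][c] for k in range(len(mags))]
            for c in range(n_ch)]

# Two tiny synchronized channels; restoring them reproduces dft(ch) exactly.
channels = [[1.0, 0.0, -1.0, 0.0], [0.0, 1.0, 0.0, -1.0]]
mags, coeffs = to_unified(channels)
restored = from_unified(mags, coeffs)
```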

P1-3 Multichannel Audio Time-Scale Modification
David Dorran, Dublin Institute of Technology - Dublin, Ireland; Robert Lawlor, National University of Ireland - Maynooth, Ireland; Eugene Coyle, Dublin Institute of Technology - Dublin, Ireland
Phase vocoder-based approaches to audio time-scale modification introduce a reverberant artifact into the time-scaled output. Recent techniques reduce this artifact but introduce additional issues when applied to multichannel recordings. This paper addresses these issues by collectively analyzing all channels before time-scaling each individual channel.
Convention Paper 6527 (Purchase now)
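One way to picture "collectively analyzing all channels" is a SOLA-style time-scaler whose splice offset is chosen once per frame on a mono downmix and then applied identically to every channel, so inter-channel timing and phase relationships are never pulled apart. This is a hypothetical illustration of that idea (function names and parameters are invented), not the paper's phase-vocoder method:

```python
import math

def best_offset(candidate, tail, search):
    """Offset in [0, search) where `candidate` best matches `tail`
    by raw cross-correlation."""
    best, best_score = 0, float("-inf")
    for off in range(search):
        score = sum(a * b for a, b in zip(candidate[off:], tail))
        if score > best_score:
            best, best_score = off, score
    return best

def sola_multichannel(channels, alpha, frame=64, overlap=16, search=8):
    """Stretch all channels by roughly `alpha`.  The splice offset for each
    frame is found once on a mono downmix (the collective-analysis step) and
    applied identically to every channel, preserving inter-channel timing."""
    hop = frame - overlap
    mix = [sum(vals) for vals in zip(*channels)]
    outs = [list(ch[:frame]) for ch in channels]
    pos = hop
    while pos + frame + search <= len(mix):
        out_mix = [sum(vals) for vals in zip(*outs)]
        off = best_offset(mix[pos:], out_mix[-overlap:], search)
        start = pos + off
        for ch, out in zip(channels, outs):
            seg = ch[start:start + frame]
            for i in range(overlap):            # crossfade the overlap region
                w = i / overlap
                out[-overlap + i] = (1 - w) * out[-overlap + i] + w * seg[i]
            out.extend(seg[overlap:])
        pos += int(hop / alpha)                 # smaller input hop => longer output
    return outs

# Demo: an identical sine in both channels stays identical after stretching.
left = [math.sin(2 * math.pi * n / 32) for n in range(512)]
stretched = sola_multichannel([left, list(left)], alpha=1.5)
```

Running the offset search per channel instead would pick different splice points in each channel and smear the stereo image, which is exactly the class of issue the abstract attributes to naive multichannel application.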

P1-4 Improving MPEG-7 Sound Classification
Holger Crysandt, Aachen University (RWTH) - Aachen, Germany
This paper describes a mechanism to improve the sound classification algorithm included in the MPEG-7 standard without modifying or extending the standard itself. The sequential classification is turned into a hierarchical classification, making it possible to adapt the algorithm more flexibly to the characteristics of the sound classes. The paper also gives a detailed view of how the algorithm is implemented using an XML database to store and retrieve content information for the audio signals and model descriptions of the sound classes in MPEG-7 format.
Convention Paper 6528 (Purchase now)
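The flat-to-hierarchical idea can be sketched with a toy nearest-centroid classifier standing in for MPEG-7's statistical sound-class models: classify into a coarse class first, then refine among only that class's subclasses, so each stage can use models tuned to its own level. All names and centroids below are hypothetical:

```python
def nearest(feature, models):
    """Pick the model whose centroid is closest to the feature vector
    (a stand-in for the MPEG-7 sound-class model comparison)."""
    def dist(c):
        return sum((f - m) ** 2 for f, m in zip(feature, models[c]))
    return min(models, key=dist)

def classify_hierarchical(feature, coarse_models, fine_models):
    """Two-stage classification: choose a coarse class first, then refine
    among only that class's subclasses.  A flat pass over all leaf classes
    cannot specialize its models per level this way."""
    coarse = nearest(feature, coarse_models)
    fine = nearest(feature, fine_models[coarse])
    return coarse, fine

# Toy models (hypothetical 2-D feature centroids for illustration).
coarse_models = {"music": (0.0, 1.0), "speech": (1.0, 0.0)}
fine_models = {
    "music": {"piano": (0.1, 1.2), "violin": (-0.2, 0.8)},
    "speech": {"male": (1.1, 0.1), "female": (0.9, -0.1)},
}
```

For example, a feature near the "music" centroid is first routed to the music branch and only then compared against piano and violin models, never against the speech subclasses.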


(C) 2005, Audio Engineering Society, Inc.