AES Conventions and Conferences


v3.0, 20040325, ME

Session Z4 Sunday, May 9 13:00 h–14:30 h
Posters: Spatial Perception and Processing & Analysis and Synthesis of Sound

Spatial Perception and Processing
Objective Measurements of Sound-Source Localization in a Multichannel Transmission System for VideoconferencingJuan José Gómez-Alfageme, Elena Blanco-Martin, S Torres-Guijarro, F. Javier Casajús-Quirós, Universidad Politécnica de Madrid, Madrid, Spain
In videoconference systems built around microphone and loudspeaker arrays, the sound field reproduced in the receiving room must match, as closely as possible, the field sampled by the microphone array (following the principles of wave field synthesis). Both objective and subjective quality measurements can be made. A measurement method has been developed based on spatial localization in the horizontal plane. To this end, two situations have been compared: first, a real source placed at different azimuth angles in front of the listener; second, the virtual source created by the loudspeaker array. Interpolated HRTFs have been calculated according to several methods, and the interaural cross-correlation function (IACC) and the interaural time difference (ITD) have been evaluated in order to determine the azimuth angle.
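The ITD half of such an evaluation can be sketched as follows. This is a minimal, hypothetical illustration of cross-correlation-based azimuth estimation, not the authors' measurement system; the head radius and the simple sine-law model are assumptions.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (seconds) from the peak
    of the cross-correlation between the two ear signals. A positive
    value means the left signal lags, i.e., the source is to the right."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # lag in samples
    return lag / fs

def itd_to_azimuth(itd, head_radius=0.0875, c=343.0):
    """Convert an ITD to an azimuth angle (degrees) using the simple
    sine law ITD ~= (2r / c) * sin(theta); r and c are assumed values."""
    arg = np.clip(itd * c / (2 * head_radius), -1.0, 1.0)
    return np.degrees(np.arcsin(arg))
```

In practice the comparison between the real and virtual source would repeat this estimate over many azimuths and report the localization error.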
Z4-2 Plane-Wave Decomposition of Volume Element Mesh Data Simulations—Bård Støfringsdal, U. Peter Svensson, Norwegian University of Science and Technology, Trondheim, Norway
Sound-field simulations at low frequencies usually employ finite elements or other mesh-based methods. For auralization, output data from these methods need to be converted to a format compatible with auralization methods such as Wave Field Synthesis (WFS), Higher Order Ambisonics (HOA) or binaural reproduction. A method is proposed for converting the mesh data to plane wave components using a circular array of virtual sources centered around the listening position. The method is based on solving sets of linear propagation equations in the frequency domain. Results are presented for two-dimensional examples and numerical issues are discussed.
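The core fitting step the abstract describes, matching a set of plane-wave components to sampled pressure values by solving linear equations in the frequency domain, might look like this in outline. The array geometry, wave count, and least-squares formulation here are assumptions for illustration, not the paper's method.

```python
import numpy as np

def plane_wave_decompose(points, pressures, freq, n_waves=36, c=343.0):
    """Fit complex plane-wave amplitudes a_m so that
    sum_m a_m * exp(-1j * k_m . x) matches the sampled pressures at the
    2-D mesh points, for a single frequency, in the least-squares sense."""
    k = 2 * np.pi * freq / c
    angles = np.linspace(0, 2 * np.pi, n_waves, endpoint=False)
    directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (M, 2)
    # Steering matrix: one row per mesh point, one column per direction.
    A = np.exp(-1j * k * points @ directions.T)
    coeffs, *_ = np.linalg.lstsq(A, pressures, rcond=None)
    return angles, coeffs
```

The resulting coefficients can then drive a WFS, HOA, or binaural renderer directly as plane-wave inputs.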
Z4-3 Headphone Processor Based on Individualized Head-Related Transfer Functions Measured in a Listening Room—Witold Mickiewicz, Jerzy Sawicki, Technical University of Szczecin, Szczecin, Poland
Listening via headphones rather than loudspeakers changes the perception of acoustic atmosphere and spaciousness (the lateralization effect). This can be improved using head-related transfer function (HRTF) technology. In contrast to previous work, we propose a method based on individualized HRTFs measured by the end user, in the acoustic conditions of a listening room, using his or her own hi-fi equipment. With standard equipment and proper postprocessing (equalization), it yields better subjective results than commercial products based on nonindividualized filters. We present the idea of an individualized head- and room-related transfer function, the algorithm, and technical details of individualized headphone processors. All necessary processing can be done in a DSP or FPGA to create a PC-independent consumer-electronics unit.
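The rendering stage of such a processor reduces to convolving the program material with the measured impulse-response pair. A bare-bones sketch, with hypothetical function and variable names:

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Render a mono signal for headphone playback by convolving it with
    a measured head- (and room-) related impulse-response pair. A
    real-time DSP/FPGA version would use block convolution instead."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])
```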
Z4-4 A Lateral Angle Tool for Spatial Auditory Analysis—Ben Supper, Tim Brookes, Francis Rumsey, University of Surrey, Guildford, Surrey, UK
A new method is presented for examining the spatial attributes of a sound recorded within a room. A binaural recording is converted into a running representation of an instantaneous lateral angle. This conversion is performed in a way that is influenced strongly by the workings of the human auditory system. Auditory onset detection takes place alongside the lateral angle conversion. These routines are combined to form a powerful analytical tool for examining the spatial features of the binaural recording. Exemplary signals are processed and discussed in this paper. Further work will be required to validate the system against existing auditory analysis techniques.

Analysis and Synthesis of Sound
Z4-5 A Layered Data Model for Information Management in Sound Coding Architectures—Enrique Alexandre, Antonio Pena, Universidade de Vigo, Vigo, Spain
This paper presents ideas for the appropriate management of all information sources present in a generic speech or audio coder, a task that becomes more pressing as coding structures grow more complex. Appropriately organizing and processing this information is key to an efficient implementation, in terms of both complexity and quality. First, a data structure will be proposed, inspired by classic comprehension theories, that sorts the information into three hierarchical levels. Based on this structure, a global sound-encoder block diagram will be described, modeled on the blackboard architectures commonly applied in speech recognition. Finally, it will be shown how an MPEG-2/4 AAC-LC coder can be considered a particular case of the proposed model.
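A generic blackboard organization of the kind the abstract refers to can be sketched in a few lines; the level names and the knowledge-source interface here are invented for illustration, not taken from the paper.

```python
class Blackboard:
    """Minimal blackboard: independent knowledge sources read hypotheses
    from one hierarchical level and post new ones to another."""

    def __init__(self, levels):
        self.levels = {name: [] for name in levels}

    def post(self, level, item):
        self.levels[level].append(item)

    def run(self, sources):
        # Each source is a tuple (input_level, output_level, transform).
        for src, dst, fn in sources:
            for item in self.levels[src]:
                self.post(dst, fn(item))
```

In a coder, the three levels might hold raw samples, extracted parameters, and higher-level semantic descriptions, with each coding tool acting as one knowledge source.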
Z4-6 Real-Time Room Equalization Based on Complex Smoothing: Robustness Results—Panagiotis Hatziantoniou, John Mourjopoulos, University of Patras, Patras, Greece
This paper investigates the robustness of room acoustics real-time equalization using inverse filters derived from the complex smoothing of the transfer function using perceptual criteria. The robustness of the method is assessed by real-time tests that compare the performance of complex smoothing-based equalization (for different filter lengths) with the traditional, ideal inverse filtering, over a range of room locations, which differ from the ones where response measurements were taken. Objective measurements and audio examples will show that the complex smoothing-based equalization performance is largely immune to position changes and does not introduce processing artifacts, problems affecting the traditional ideal inversion.
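Complex smoothing itself is straightforward to prototype: average the complex response over a window whose width scales with frequency, then invert the smoothed result with regularization. The fraction and regularization values below are arbitrary illustrations, not the parameters used in the paper.

```python
import numpy as np

def complex_smooth(H, fraction=3):
    """Fractional-octave smoothing of a complex transfer function H
    sampled on a uniform frequency grid (bin 0 = DC). Real and
    imaginary parts are averaged together, so phase is smoothed too."""
    Hs = np.empty_like(H)
    n = len(H)
    for i in range(n):
        # Half-width of a 1/fraction-octave band centered on bin i.
        half = int(i * (2 ** (1 / (2 * fraction)) - 1))
        lo, hi = max(0, i - half), min(n, i + half + 1)
        Hs[i] = H[lo:hi].mean()
    return Hs

def inverse_filter(H, fraction=3, reg=1e-3):
    """Regularized inverse of the smoothed response."""
    Hs = complex_smooth(H, fraction)
    return np.conj(Hs) / (np.abs(Hs) ** 2 + reg)
```

Because the smoothed response retains only the broad, perceptually dominant features, its inverse is less sensitive to the position-dependent fine structure that destabilizes exact inversion.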
Z4-7 Personalized Mobile Ring Tone Generator Using Mandelbrot Music—Suthikshn Kumar, Larsen & Toubro Infotech Ltd., Bangalore, India
Mandelbrot equations are popular for generating images and music; we propose using them to generate mobile ring tones. These Mandelbrot ring tones are both entertaining and melodious, and because generating them requires only simple iterations, the ring tone generator can be integrated into the mobile handset. Fuzzy Mandelbrot sets are proposed to extend the usefulness of the generator. The ring tones are personalized using the listener's audiogram, which will benefit people with hearing impairments. A PC-based mobile phone ring tone generator demonstration, based on the Nokia Series 60 SDK for Symbian OS handsets, is being developed and will be used to demonstrate the concepts proposed in this paper.
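The escape-time iteration behind such a generator is indeed trivial to embed. A toy sketch, in which the mapping from iteration count to pitch is an arbitrary choice for illustration:

```python
def mandelbrot_tone(c, max_iter=32, base=440.0):
    """Map one complex parameter c to a note frequency via the Mandelbrot
    escape-time count: iterate z -> z*z + c and count the steps until
    |z| exceeds 2 (or max_iter is reached)."""
    z = 0j
    n = 0
    while abs(z) <= 2 and n < max_iter:
        z = z * z + c
        n += 1
    # One semitone per iteration above the base pitch.
    return base * 2 ** (n / 12)
```

A ring tone is then a sequence of such frequencies obtained by sampling c along a path near the boundary of the set, where the escape counts (and hence the melody) vary richly.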


(C) 2004, Audio Engineering Society, Inc.