AES Conventions and Conferences


Last Updated: 20050913, wtm

P5 - Multichannel Sound, Part 1

Saturday, October 8, 9:00 am — 12:00 pm

Chair: Thomas Sporer, Fraunhofer IDMT - Ilmenau, Germany

P5-1 Perceptual Evaluation of 5.1 Downmix Algorithms
Thomas Sporer, Beate Klehs, Fraunhofer IDMT - Ilmenau, Germany; Judith Liebetrau, Felix Richter, Alexander Krake, Gabi Muckenschnabl, Mandy Weitzel, Technical University of Ilmenau - Ilmenau, Germany
In a workshop at the 118th AES Convention, problems and solutions for automatic down-mixing were summarized. The key question is for which items automatic algorithms provide acceptable quality. Closely connected to this issue is the question of how to evaluate the quality of the result. Standardized listening test procedures such as ITU-R BS.1116 and ITU-R BS.1534 are designed to evaluate the difference between an unimpaired reference and the modified signal under test; they were never intended for comparing 5-channel signals to 2-channel signals. This paper describes a new listening test procedure designed to judge the quality of down-mixing algorithms, together with the first results of listening tests performed using this procedure.
Convention Paper 6543 (Purchase now)

P5-2 Discrimination of Auditory Source Focus for Musical Instrument Sounds with Varying Low-Frequency Cross Correlation in Multichannel Loudspeaker Reproduction
Sungyoung Kim, William Martens, Atsushi Marui, McGill University - Montreal, Quebec, Canada
This paper examines the changes in auditory spatial impression associated with changes in signal incoherence within the low-frequency portion of a multichannel loudspeaker reproduction. Multichannel recordings were made in reverberant concert settings of single notes played on musical instruments with significant low-frequency energy. A signal processing method was then developed to manipulate low-frequency correlation in the prerecorded material while maintaining high sound quality; subsequent listening tests measured the perceptual effects of varying low-frequency correlation on otherwise identical recordings of low-pitch, single-note performances on musical instruments such as the bass violin. For cutoff frequencies ranging from 200 Hz down to 63 Hz, the effects of cutoff frequency on discrimination thresholds were measured for changes in low-frequency correlation using a two-alternative forced-choice task. Listeners also made forced-choice identifications regarding auditory source focus. Results indicated that both discrimination and identification performance were degraded in the presence of the higher-frequency portion of the musical stimuli.
Convention Paper 6544 (Purchase now)
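The interchannel-correlation manipulation described in the abstract can be illustrated with a toy sketch (an assumption for illustration, not the authors' actual method): mixing one standardized signal into another sets the correlation coefficient between the pair to a chosen target. The band-splitting step implied by the paper (applying this only below a crossover frequency) is omitted here for brevity.

```python
import numpy as np

def set_correlation(a, b, rho):
    """Mix two uncorrelated signals so their correlation is approximately rho.

    Both inputs are standardized (zero mean, unit variance); the second
    output is rho*a + sqrt(1 - rho^2)*b, which has correlation ~rho with
    the first while keeping unit variance.
    """
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return a, rho * a + np.sqrt(1.0 - rho**2) * b

# Example: drive two decorrelated noise channels to a target correlation of 0.3.
rng = np.random.default_rng(1)
left, right = set_correlation(rng.standard_normal(200_000),
                              rng.standard_normal(200_000), 0.3)
measured = np.corrcoef(left, right)[0, 1]
```

For long noise signals the measured coefficient lands within a few thousandths of the target; in practice one would apply this per band after a crossover filter.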

P5-3 Optimizing Placement and Equalization of Multiple Low Frequency Loudspeakers in Rooms
Adrian Celestinos, Sofus Birkedal Nielsen, Aalborg University - Aalborg, Denmark
Every room has a strong influence on the low-frequency performance of a loudspeaker, and this influence is often difficult to control and to predict: modal resonances modify the response of the loudspeaker depending on placement and listening position. To anticipate the behavior of low-frequency loudspeakers in rooms, a simulation tool based on finite-difference time-domain (FDTD) approximations has been created. Simulations have shown that increasing the number of loudspeakers and optimizing their placement yields a significant improvement, namely a more even sound pressure level distribution across the listening area. Furthermore, an equalization strategy can be implemented for optimization purposes. This solution can be combined with multichannel sound systems.
Convention Paper 6545 (Purchase now)
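To give a flavor of the kind of FDTD room simulation the abstract refers to, here is a minimal one-dimensional sketch (a toy model, not the authors' tool): a leapfrog pressure/velocity update on a staggered grid with rigid ends reproduces the axial modes of a duct of length L at n*c/(2L) Hz. All physical constants are folded into normalized update coefficients, an illustrative simplification.

```python
import numpy as np

def simulate_room_1d(length_m=5.0, fs=8000, duration_s=0.5,
                     src_pos=0.5, mic_pos=2.0, c=343.0):
    """1-D FDTD toy model of a rigid-walled room.

    Pressure p lives on n grid nodes, particle velocity u on the n+1
    points between and at the walls; rigid ends (u = 0) produce standing
    waves. The grid spacing is chosen at the Courant limit, where this
    scheme is exact and stable.
    """
    dx = c / fs                      # grid spacing at Courant number 1
    n = int(length_m / dx)
    p = np.zeros(n)
    u = np.zeros(n + 1)              # u[0] and u[n] stay 0: rigid walls
    src, mic = int(src_pos / dx), int(mic_pos / dx)
    out = np.zeros(int(duration_s * fs))
    for t in range(len(out)):
        if t == 0:
            p[src] += 1.0            # impulse excitation at the source
        u[1:-1] -= p[1:] - p[:-1]    # interior velocity update (normalized)
        p -= u[1:] - u[:-1]          # pressure update (normalized)
        out[t] = p[mic]
    return out, fs

def spectrum_at(signal, fs, freq):
    """Hann-windowed spectrum magnitude at the bin nearest freq (Hz)."""
    sig = signal - signal.mean()     # drop the DC (n = 0) room mode
    spec = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

out, fs = simulate_room_1d()
```

With L = 5 m the first axial mode sits near c/(2L) = 34.3 Hz, so the microphone spectrum shows a strong peak there and little energy at off-modal frequencies such as 50 Hz.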

P5-4 An Immersive Audio Environment with Source Positioning Based on Virtual Microphone Control
Jonas Braasch, Wieslaw Woszczyk, Timothy Ryan, McGill University - Montreal, Quebec, Canada
In this paper an auditory virtual environment (AVE) is described that uses virtual microphone control (ViMiC) to address a 24-channel loudspeaker system based on ribbon speakers. In the newly designed environment, the microphones, with adjustable directivity patterns and axes of orientation, can be spatially placed as desired. The system architecture was designed to comply with the augmented ITU surround-sound loudspeaker placement and to create sound imagery similar to that associated with standard sound recording practice. The AVE will be used with close-spot microphone techniques in two-way internet audio transmissions to avoid feedback loops and provide dynamic placement for a number of sources.
Convention Paper 6546 (Purchase now)
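A minimal sketch of the virtual-microphone idea (a hypothetical illustration, not the ViMiC implementation itself): each loudspeaker feed is weighted by a virtual microphone's first-order directivity toward the source, combined with distance attenuation. The `pattern` parameter, the 1/r law, and the distance clamp are illustrative assumptions.

```python
import numpy as np

def virtual_mic_gain(src_xy, mic_xy, mic_axis_deg, pattern=0.5):
    """Gain of one virtual microphone for a source at src_xy.

    pattern blends omni (0.0) and figure-eight (1.0); 0.5 gives a
    cardioid. The gain combines first-order directivity with 1/r
    distance attenuation, clamped near the microphone.
    """
    dx, dy = src_xy[0] - mic_xy[0], src_xy[1] - mic_xy[1]
    r = np.hypot(dx, dy)
    angle = np.arctan2(dy, dx) - np.deg2rad(mic_axis_deg)
    directivity = (1.0 - pattern) + pattern * np.cos(angle)
    return directivity / max(r, 0.1)

# A cardioid at the origin aimed along +x: full gain on axis, a null behind.
g_front = virtual_mic_gain((1.0, 0.0), (0.0, 0.0), 0.0)
g_back = virtual_mic_gain((-1.0, 0.0), (0.0, 0.0), 0.0)
```

Evaluating one such gain per loudspeaker channel, with each virtual microphone placed and oriented freely, is the essence of rendering a movable source over a multichannel array.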

P5-5 Simulation and Visualization of Room Compensation for Wave Field Synthesis with the Functional Transformation Method
Stefan Petrausch, Sascha Spors, Rudolf Rabenstein, University of Erlangen-Nuremberg - Erlangen, Germany
Active room compensation based on wave field synthesis (WFS) has recently been introduced. So far, verification of the compensation algorithm has only been possible through elaborate acoustical measurements. Therefore, a new simulation method based on the functional transformation method (FTM) is applied. Compared with other simulation techniques, the FTM provides several advantages that facilitate correct simulation of the complete wave field, particularly in the frequency ranges of interest for WFS. This paper presents the complete procedure, from the virtual "measurements" of the acoustical properties of the simulated room, through the correct excitation of the simulated wave field, to the resulting animations and sounds.
Convention Paper 6547 (Purchase now)

P5-6 Acoustic Intensity in Multichannel Rendering Systems
Antoine Hurtado-Huyssen, Jean-Dominique Polack, Université de Paris - Paris, France
Acoustic intensity is the flow of mechanical energy through a point in the sound field. In the far field it also identifies the direction of the main source (when there is one), yet it remains neglected in multichannel recording and reproduction systems. This paper describes the information contained in sound intensity, how it can be used, and what can be expected from it. It also underlines the fact that this data is accessible in the frequency domain through any existing cardioid-based recording system, such as Ambisonics or double-MS.
Convention Paper 6548 (Purchase now)
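The frequency-domain access to intensity mentioned above can be sketched as follows (a simplified illustration, assuming an idealized first-order B-format encoding and omitting the conventional sqrt(2) gain on W): the active intensity components Re{W*(f)X(f)} and Re{W*(f)Y(f)}, summed over frequency, point toward the dominant source.

```python
import numpy as np

def bformat_plane_wave(signal, azimuth_deg):
    """Encode a mono signal as simplified first-order B-format (W, X, Y).

    W carries pressure; X and Y carry the particle-velocity components
    (the usual sqrt(2) gain on W is omitted for clarity).
    """
    az = np.deg2rad(azimuth_deg)
    return signal, signal * np.cos(az), signal * np.sin(az)

def active_intensity_azimuth(w, x, y):
    """Estimate arrival azimuth (degrees) from the active intensity.

    In the frequency domain the active intensity components are
    I_x(f) = Re{W*(f) X(f)} and I_y(f) = Re{W*(f) Y(f)}; summing over
    frequency and taking atan2 yields the dominant source direction.
    """
    W, X, Y = np.fft.rfft(w), np.fft.rfft(x), np.fft.rfft(y)
    ix = np.sum(np.real(np.conj(W) * X))
    iy = np.sum(np.real(np.conj(W) * Y))
    return np.rad2deg(np.arctan2(iy, ix))

# Example: a noise plane wave encoded from 60 degrees is localized at 60 degrees.
rng = np.random.default_rng(0)
w, x, y = bformat_plane_wave(rng.standard_normal(4096), 60.0)
estimate = active_intensity_azimuth(w, x, y)
```

With real recorded cardioid or double-MS signals the same cross-spectral quantities are formed per frequency band, giving a direction estimate for each band rather than a single broadband angle.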


(C) 2005, Audio Engineering Society, Inc.