AES New York 2011
Paper Session P15
P15 - Sound Field Analysis and Reproduction—Part 2
Friday, October 21, 4:00 pm — 6:30 pm (Room: 1E09)
P15-1 Broadband Analysis and Synthesis for Directional Audio Coding Using A-Format Input Signals—Archontis Politis, Ville Pulkki, Aalto University - Espoo, Finland
Directional Audio Coding (DirAC) is a parametric, non-linear technique for spatial sound recording and reproduction that is flexible with respect to the loudspeaker reproduction setup. In the general three-dimensional case, DirAC takes as input B-format signals, traditionally derived from the signals of a regular tetrahedral first-order microphone array, termed A-format. For high-quality rendering, the B-format signals are also exploited in the synthesis stage. In this paper we propose an alternative formulation of the analysis and synthesis that avoids the effect of non-ideal B-format signals on both stages and achieves improved broadband estimation of the DirAC parameters. Furthermore, a scheme is presented for the synthesis stage that uses the A-format signals directly, without conversion to B-format.
Convention Paper 8525 (Purchase now)
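The DirAC parameters the abstract refers to are, in the classic B-format formulation, a direction of arrival and a diffuseness derived from the active sound intensity. The following is a minimal broadband sketch of that standard analysis, not the authors' A-format variant; channel names and scaling follow common B-format conventions and are assumed matched:

```python
import numpy as np

def dirac_parameters(w, x, y, z):
    """Classic B-format DirAC analysis (broadband, for illustration):
    returns a unit direction-of-arrival vector and a diffuseness in [0, 1]."""
    u = np.stack([x, y, z])                 # dipole (figure-of-eight) channels
    # With B-format sign conventions (dipoles positive toward the source),
    # the time-averaged product of W with the dipoles points at the source;
    # the physical active intensity is proportional to its negative.
    g = np.mean(w * u, axis=1)
    # Energy density up to a constant, assuming matched channel scaling
    e = 0.5 * (np.mean(w ** 2) + np.mean(np.sum(u ** 2, axis=0)))
    doa = g / (np.linalg.norm(g) + 1e-12)
    psi = 1.0 - np.linalg.norm(g) / (e + 1e-12)  # 0: plane wave, ~1: diffuse
    return doa, psi
```

A single plane wave yields diffuseness near 0 with the DOA pointing at the source, while four uncorrelated noise channels (an idealized diffuse field) yield diffuseness near 1.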
P15-2 Beamforming Regularization, Scaling Matrices, and Inverse Problems for Sound Field Extrapolation and Characterization: Part I—Theory—Philippe-Aubert Gauthier, Éric Chambatte, Cédric Camier, Yann Pasco, Alain Berry, Université de Sherbrooke - Sherbrooke, Québec, Canada, and McGill University, Montreal, Québec, Canada
Sound field extrapolation (SFE) aims to predict a sound field in an extrapolation region from measurements made with a microphone array in a measurement region. For sound environment reproduction purposes, sound field characterization (SFC) aims at a more generic or parametric description of a measured or extrapolated sound field using different physical or subjective metrics. In this paper a recently introduced SFE method is presented and further developed. The method is based on an inverse problem formulation combined with a beamforming matrix in the discrete smoothing norm of the cost function. The results obtained from the SFE method are applied to SFC for subsequent sound environment reproduction. A set of classification criteria is proposed to distinguish simple types of sound fields on the basis of two simple scalar metrics. A companion paper presents the experimental verification of the theory presented here.
Convention Paper 8526 (Purchase now)
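The cost function described in the abstract has the generic Tikhonov form min_q ||p − Gq||² + λ²||Lq||², where the paper's contribution lies in building the matrix in the discrete smoothing norm from a beamforming matrix. A closed-form sketch of that generic regularized inverse problem (the symbols G, L, and λ are illustrative, not the paper's notation):

```python
import numpy as np

def regularized_inverse(G, p, L, lam):
    """Solve min_q ||p - G q||^2 + lam^2 ||L q||^2 in closed form.
    G: (m, n) propagation matrix from candidate sources to microphones,
    p: (m,) measured microphone pressures,
    L: (k, n) matrix in the discrete smoothing norm (identity gives plain
       Tikhonov regularization; the paper derives it from a beamformer),
    lam: regularization parameter trading data fit against the norm of Lq."""
    A = G.conj().T @ G + lam ** 2 * (L.conj().T @ L)
    return np.linalg.solve(A, G.conj().T @ p)
```

With L equal to the identity and a small λ, the solution reduces to an ordinary least-squares fit; larger λ and a structured L steer the solution toward source distributions favored by the smoothing norm.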
P15-3 Beamforming Regularization, Scaling Matrices and Inverse Problems for Sound Field Extrapolation and Characterization: Part II—Experiments—Philippe-Aubert Gauthier, Éric Chambatte, Cédric Camier, Yann Pasco, Alain Berry, Université de Sherbrooke - Sherbrooke, Québec, Canada, and McGill University, Montreal, Québec, Canada
Sound field extrapolation (SFE) aims to predict a sound field in an extrapolation region from measurements made with a microphone array. For sound environment reproduction purposes, sound field characterization (SFC) aims at a more generic or parametric description of a measured or extrapolated sound field using different physical or subjective metrics. This paper first reports experiments with the recently developed SFE method of Part I—Theory. The method is based on an inverse problem formulation combined with a recently proposed regularization approach: a beamforming matrix in the discrete smoothing norm of the cost function. Second, the results obtained from the SFE method are applied to SFC as presented in Part I. The SFC classification method is verified in two environments that recreate ideal or complex sound fields. In light of the presented results and discussion, it is argued that the proposed SFE and SFC methods are effective.
Convention Paper 8527 (Purchase now)
P15-4 Mixed-Order Ambisonics Recording and Playback for Improving Horizontal Directionality—Sylvain Favrot, Marton Marschall, Johannes Käsbach, Technical University of Denmark - Lyngby, Denmark; Jörg Buchholz, Macquarie University - Sydney, NSW, Australia; Tobias Weller, Technical University of Denmark - Lyngby, Denmark
Planar (2-D) and periphonic (3-D) higher-order Ambisonics (HOA) systems are widely used to reproduce the spatial properties of acoustic scenarios. Mixed-order Ambisonics (MOA) systems combine the benefit of higher-order 2-D systems, i.e., high spatial resolution over a larger usable frequency bandwidth, with a lower-order 3-D system to reproduce elevated sound sources. In order to record MOA signals, the locations and weightings of the microphones on a hard sphere were optimized to provide a robust MOA encoding. A detailed analysis of the encoding and decoding process showed that MOA can improve both the spatial resolution in the horizontal plane and the usable frequency bandwidth, for playback as well as recording. Hence the described MOA scheme provides a promising method for improving the performance of current 3-D sound reproduction systems.
Convention Paper 8528 (Purchase now)
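A common way to form a mixed-order scheme of the kind described is to keep all spherical harmonics up to the lower 3-D order V and add only the horizontal-only (sectoral, |m| = n) harmonics up to the higher 2-D order H, for (V+1)² + 2(H−V) channels in total. A small sketch under that assumption (the paper's exact channel selection may differ):

```python
def moa_channels(order_h, order_v):
    """Spherical-harmonic indices (n, m) for one common mixed-order
    Ambisonics scheme: every harmonic up to the 3-D order order_v, plus
    the sectoral (|m| = n) harmonics up to the 2-D order order_h."""
    assert order_h >= order_v
    # Full 3-D set up to order_v: (order_v + 1)^2 harmonics
    idx = [(n, m) for n in range(order_v + 1) for m in range(-n, n + 1)]
    # Horizontal-only additions: 2 harmonics per extra order
    idx += [(n, m) for n in range(order_v + 1, order_h + 1) for m in (-n, n)]
    return idx
```

For example, a scheme with horizontal order 4 and 3-D order 2 needs 13 channels, versus 25 for a full third... fourth-order 3-D set, which is where the bandwidth-per-microphone advantage comes from.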
P15-5 Local Sound Field Synthesis by Virtual Acoustic Scattering and Time-Reversal—Sascha Spors, Karim Helwani, Jens Ahrens, Deutsche Telekom Laboratories, Technische Universität Berlin - Berlin, Germany
Sound field synthesis techniques like Wave Field Synthesis and near-field compensated higher order Ambisonics aim at synthesizing a desired sound field within an extended area using an ensemble of individually driven loudspeakers. Local sound field synthesis techniques achieve an increased accuracy within a restricted local listening area at the cost of stronger artifacts outside. This paper proposes a novel approach to local sound field synthesis that is based upon the scattering from a virtual object bounding the local listening area and the time-reversal principle of acoustics. The physical foundations of the approach are introduced and discussed. Numerical simulations of synthesized sound fields are presented as well as a comparison to other published methods.
Convention Paper 8529 (Purchase now)
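The time-reversal principle invoked here can be illustrated in its simplest free-field form: the impulse responses from a focus point to the loudspeakers are reversed in time and re-emitted, so the contributions add up coherently only at the focus. The following is a delay-only sketch of that bare principle (pure delays, amplitude decay ignored), not the paper's virtual-scattering formulation:

```python
import numpy as np

def time_reversed_drives(focus_dists, fs, c=343.0, length=256):
    """Time-reversed driving signals that refocus at the point whose
    distances to the loudspeakers are given (free field, pure delays)."""
    drives = np.zeros((len(focus_dists), length))
    for i, d in enumerate(focus_dists):
        k = int(round(d / c * fs))        # propagation delay in samples
        drives[i, length - 1 - k] = 1.0   # impulse response, time-reversed
    return drives

def field_at(point_dists, drives, fs, c=343.0):
    """Superpose the delayed driving signals at a point with the given
    loudspeaker distances (free field, amplitudes ignored)."""
    length = drives.shape[1]
    out = np.zeros(2 * length)
    for d, s in zip(point_dists, drives):
        k = int(round(d / c * fs))
        out[k:k + length] += s
    return out
```

Evaluating the field at the focus point itself, every delayed impulse lands on the same sample, giving a peak equal to the number of loudspeakers, while at other points the impulses stay scattered.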
Information Last Updated: 20111005, mei