
AES Journal Forum

Reproducing Sound Fields Using MIMO Acoustic Channel Inversion



There are many ways to reproduce a sound field over a wide spatial area using an array of loudspeakers, such as wave field synthesis (WFS), Ambisonics, and spectral division methods. A new approach, called sound field reconstruction (SFR), optimally reproduces a desired sound field in a given listening area while keeping the loudspeaker driver signals within physical constraints. The reproduction of a continuous sound field is formulated as an inversion of the discrete acoustic channel from a loudspeaker array to a grid of control points. Extensive simulations comparing SFR with WFS, the current state of the art, show that on average SFR provides better reproduction accuracy.
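To make the core idea concrete, here is a minimal numerical sketch, not the authors' exact formulation: model the acoustic channel from each loudspeaker to each control point with a free-field point-source Green's function, stack the transfer functions into a matrix H, and invert that channel in a least-squares sense to obtain the loudspeaker driving weights at a single frequency. The helper names, the geometry, and the Tikhonov regularization used here in place of the paper's explicit driver-signal constraints are all illustrative assumptions.

import numpy as np

def greens_matrix(src_pos, ctrl_pos, k):
    """Free-field monopole transfer functions H[m, n] from source n to control point m."""
    d = np.linalg.norm(ctrl_pos[:, None, :] - src_pos[None, :, :], axis=-1)
    return np.exp(-1j * k * d) / (4 * np.pi * d)

def sfr_weights(H, p_des, reg=1e-3):
    """Regularized least squares: minimize ||H w - p_des||^2 + reg * ||w||^2."""
    M, N = H.shape
    return np.linalg.solve(H.conj().T @ H + reg * np.eye(N), H.conj().T @ p_des)

# Illustrative setup: linear array of 16 loudspeakers, grid of control points, 1 kHz.
c, f = 343.0, 1000.0
k = 2 * np.pi * f / c
spk = np.stack([np.linspace(-1.5, 1.5, 16), np.zeros(16), np.zeros(16)], axis=1)
gx, gy = np.meshgrid(np.linspace(-1.0, 1.0, 21), np.linspace(1.0, 3.0, 21))
ctrl = np.stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)], axis=1)

# Desired field: a virtual point source located behind the array.
virt = np.array([[0.0, -2.0, 0.0]])
p_des = greens_matrix(virt, ctrl, k)[:, 0]

# Invert the discrete loudspeaker-to-control-point channel and check the fit.
H = greens_matrix(spk, ctrl, k)
w = sfr_weights(H, p_des)
err = np.linalg.norm(H @ w - p_des) / np.linalg.norm(p_des)
print(f"relative reproduction error: {err:.3f}")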

JAES Volume 59 Issue 10 pp. 721-734; October 2011



Comments on this paper

Gary Eickmeier


Comment posted November 27, 2011 @ 20:40:50 UTC

My objection, or question, about this article has to do with the definition of "sound field." If they are going to claim Sound Field Reconstruction, then they need to be referring to the commonly understood definition, which I understand as all fields within an enclosed space. These are the direct, early reflected, and reverberant fields. If we can reconstruct all of these fields, we will have reached the goal of maximum accuracy and realism of reproduction. Ambisonics is an attempt at sound field reconstruction. Dolby Digital 5.1 and 7.1 are attempts. These systems employ full surround speakers in order to reproduce the reflected fields found in the original. As far as I can see, both WFS and SFR make no such attempt, and have only a line of speakers at the front of the room. This approach can only reproduce the direct field, which two- and three-channel stereophonic systems can already do just fine.

I would appreciate a more practical explanation of what the authors think they have achieved with this system, and what they mean by a "sound field."

Gary Eickmeier


Author Response
Mihailo Kolundzija


Comment posted December 13, 2011 @ 18:08:17 UTC

By the term sound field we mean any physical sound field described by the wave equation. The direct sound field that you mention is the field that forms in free space (no boundary conditions), whereas the reflected/reverberant sound fields are "residuals" that arise as a consequence of boundary conditions (reflection and diffraction off walls, objects, etc.).
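For reference, in standard notation (not taken from the paper, and assuming the $e^{j\omega t}$ time convention): a source-free sound field $p(\mathbf{r}, t)$ satisfies the homogeneous wave equation, and the direct field of a point source at $\mathbf{r}_s$ in free space is the outgoing Green's function of the associated Helmholtz equation,

\[
\nabla^2 p - \frac{1}{c^2}\,\frac{\partial^2 p}{\partial t^2} = 0,
\qquad
G(\mathbf{r}\mid\mathbf{r}_s) = \frac{e^{-jk\|\mathbf{r}-\mathbf{r}_s\|}}{4\pi\,\|\mathbf{r}-\mathbf{r}_s\|},
\qquad k = \frac{\omega}{c}.
\]

Reflected and reverberant components then arise from the boundary conditions that walls and objects impose on these equations.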

Our approach is not limited to a line of loudspeakers and can use any loudspeaker arrangement. If one wants to reproduce a sound field that emanates from sources in arbitrary directions (including, for instance, virtual sources that model reflections from walls), an enclosing loudspeaker configuration needs to be used. This, however, changes nothing in the procedure described in the paper.

A line of loudspeakers was used for a fair comparison with WFS.
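Continuing the illustrative sketch given under the abstract above (and reusing its hypothetical greens_matrix and sfr_weights helpers), the same inversion applies unchanged when the line array is replaced by an enclosing circular array and the desired field includes a virtual image source standing in for a wall reflection; all positions and values below are assumptions, not taken from the paper.

import numpy as np

# Enclosing circular array of 32 loudspeakers (radius 2 m) around a central control grid.
theta = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
spk_circ = np.stack([2.0 * np.cos(theta), 2.0 * np.sin(theta), np.zeros_like(theta)], axis=1)

gx, gy = np.meshgrid(np.linspace(-0.8, 0.8, 17), np.linspace(-0.8, 0.8, 17))
ctrl = np.stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)], axis=1)

c, f = 343.0, 1000.0
k = 2.0 * np.pi * f / c

# Desired field: a primary point source outside the array plus one image source
# (the primary mirrored across a hypothetical wall at x = 4 m) modelling a single reflection.
primary = np.array([[3.0, 0.5, 0.0]])
image = np.array([[5.0, 0.5, 0.0]])
p_des = greens_matrix(primary, ctrl, k)[:, 0] + 0.7 * greens_matrix(image, ctrl, k)[:, 0]

# Same channel inversion as before, just with a different loudspeaker geometry.
H = greens_matrix(spk_circ, ctrl, k)
w = sfr_weights(H, p_des)
print("relative reproduction error:", np.linalg.norm(H @ w - p_des) / np.linalg.norm(p_des))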

