Live sounds at a concert have spatial relationships to each other and to their environment. The microphone technique used to record the sounds, the placement and directional properties of the playback loudspeakers, and the room's response determine the signals at the listener's ears and thus the rendering of the concert recording. For the frequency range in which interaural time differences (ITDs) dominate directional hearing, a free-field transmission-line model is used to predict the placement of phantom sources between two loudspeakers. Level panning and time panning of monaural sources are investigated, and the effectiveness and limitations of different microphone pairs are shown. Recording techniques can be improved by recognizing fundamental requirements for spatial rendering. Observations from a novel four-loudspeaker setup that provides enhanced spatial rendering of two-channel sound are presented.
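The level panning mentioned above is commonly approximated by the stereophonic tangent law, which maps the gain difference between two loudspeakers to a phantom-source azimuth. The sketch below is an illustrative implementation of that standard law under free-field assumptions; it is not taken from this paper's transmission-line model, and the 30-degree half-angle of the loudspeaker base is an assumed typical setup value.

```python
import math

def tangent_law_angle(g_left, g_right, speaker_half_angle_deg=30.0):
    """Estimate phantom-source azimuth (degrees, positive toward the
    left loudspeaker) from the gains applied to a monaural source.

    Uses the stereophonic tangent law:
        tan(theta) / tan(theta_0) = (gL - gR) / (gL + gR)
    where theta_0 is the half-angle subtended by the loudspeaker base.
    """
    theta_0 = math.radians(speaker_half_angle_deg)
    ratio = (g_left - g_right) / (g_left + g_right)
    return math.degrees(math.atan(ratio * math.tan(theta_0)))

# Equal gains place the phantom source at the centre of the base.
print(tangent_law_angle(1.0, 1.0))  # 0.0
# Signal only in the left speaker places it at the speaker itself.
print(tangent_law_angle(1.0, 0.0))  # ~30.0
```

Equal gains yield a centred phantom image, and the image moves toward the louder loudspeaker as the gain difference grows, reaching the loudspeaker itself when the other channel is silent.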