A Model-Based Technique for the Perceptual Optimization of Multimodal Musical Performances
As multi-channel audio and visual processing becomes more accessible to the general public, musicians are beginning to experiment with performances in which players are in two or more remote locations. These co-located, or telepresence, performances challenge the conventions and basic rules of the traditional musical experience. While they allow for collaboration with musicians and audiences in remote locations, the current limitations of the technology restrict communication between musicians. In addition, a telepresence performance introduces optical distortion that can impair auditory communication, creating a need to study certain auditory-visual interactions. One such interaction is the relationship between a musician and a virtual visual environment: how does the attendant visual environment affect the perceived presence of a musician? An experiment was conducted to determine the magnitude of this effect. Two pre-recorded musical performances were presented through a virtual display in a number of acoustically diverse environments under different relative background lighting conditions. Participants were asked to adjust the direct-to-reverberant ratio and the reverberant level until the virtual musician's acoustic environment was congruent with that of the visual representation. Auditory-visual interactions are expected in the perception of a musician in varying virtual environments. Through multivariate parameter optimization, the results of this study will be used to develop a parametric model that controls the current auditory rendering system, Virtual Microphone Control (ViMiC), in order to create a perceptually more accurate auditory-visual environment for performance.
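For readers unfamiliar with the quantity participants adjusted, the direct-to-reverberant ratio (DRR) compares the energy of the direct sound to the energy of the reverberant tail of a room impulse response. The sketch below is a minimal illustration of that definition, not code from the paper: the 2.5 ms direct-sound window, the function name, and the synthetic impulse response are all assumptions made for the example.

```python
import numpy as np

def direct_to_reverberant_ratio(ir, fs, window_ms=2.5):
    """Estimate the direct-to-reverberant ratio (DRR) of a room
    impulse response, in dB.

    Direct-sound energy is taken in a short window around the peak
    of the impulse response; everything after the window counts as
    reverberant energy. The 2.5 ms window is a common convention,
    not a value taken from the paper.
    """
    ir = np.asarray(ir, dtype=float)
    peak = int(np.argmax(np.abs(ir)))          # direct-sound arrival
    n = int(round(window_ms * 1e-3 * fs))      # window length in samples
    direct = np.sum(ir[max(0, peak - n):peak + n] ** 2)
    reverberant = np.sum(ir[peak + n:] ** 2)
    return 10.0 * np.log10(direct / reverberant)

# Synthetic example: a unit direct impulse followed by an
# exponentially decaying noise tail standing in for reverberation.
fs = 48000
rng = np.random.default_rng(0)
decay = np.exp(-np.arange(fs) / (0.3 * fs))    # ~0.3 s decay constant
tail = 0.01 * rng.standard_normal(fs) * decay
ir = np.concatenate(([1.0], tail))
print(direct_to_reverberant_ratio(ir, fs))
```

Raising the tail gain lowers the DRR, which is the direction participants would move the control when matching a visually more reverberant (e.g., larger or more distant) environment.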