Using conventional sound design, the audio signal in virtual reality applications is often rendered as a static stereophonic signal. It is accompanied by a visual signal that allows for interactive behavior such as looking around. In the current test, the influence of spatial offset between the audio and visual signals is investigated using reaction-time measurements in a word recognition task. The audio-visual offset is introduced by presenting a video at horizontal offset angles of up to ±21°, accompanied by a static, centrally presented audio signal. Measurements are compared to reaction times from a test in which both the audio and the visual signal are presented at the same angle. Results show that audio-visual offsets between 10° and 20° cause significant differences in reaction time compared to spatially matched presentation.