Program: Poster Session 1


2019 AES International Conference on Immersive and Interactive Audio. March 27-29, 2019. York, UK.


Poster Session 1: Perception


PS1-1: "Increasing the Vertical Image Spread of Natural Sound Sources using Band-Limited Interchannel Decorrelation"

Christopher Gribben and Hyunkook Lee

An experiment has been conducted to assess the perceptual effect of vertical interchannel decorrelation between pairs of vertically-spaced loudspeakers. The study focuses on band-limiting the vertical decorrelation of natural sound sources in groups of octave-bands, while reproducing the unprocessed octave-bands monophonically through the main-layer loudspeaker. The upper limit of the vertical decorrelation is fixed at the 16 kHz octave-band, with the lower limit varied across eight octave-bands (centre frequencies 63 Hz to 8 kHz). A monophonic unprocessed condition was also included in a multiple comparison test alongside the eight decorrelated conditions. The results demonstrate that vertical decorrelation of the 500 Hz octave-band and above can significantly increase the vertical spread of an auditory image, similar to that of broadband decorrelation.
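The processing described above can be illustrated with a minimal Python sketch. This is not the authors' implementation; it assumes a simple crossover split at a chosen lower limit (500 Hz here) and uses one common decorrelation method, convolution with a short exponentially decaying noise burst, for the upper-layer feed:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_limited_decorrelate(x, fs, crossover_hz=500.0, seed=0):
    """Below crossover_hz both loudspeaker feeds carry the same
    (monophonic) signal; above it, the upper-layer feed is decorrelated
    by convolution with a short decaying noise burst (illustrative
    method, not the paper's)."""
    sos_lo = butter(4, crossover_hz, btype='low', fs=fs, output='sos')
    sos_hi = butter(4, crossover_hz, btype='high', fs=fs, output='sos')
    lo, hi = sosfilt(sos_lo, x), sosfilt(sos_hi, x)

    rng = np.random.default_rng(seed)
    n = int(0.02 * fs)                       # 20 ms decorrelation filter
    h = rng.standard_normal(n) * np.exp(-np.arange(n) / (0.005 * fs))
    h /= np.sqrt(np.sum(h ** 2))             # unit energy, preserves level
    hi_dec = np.convolve(hi, h)[:len(x)]

    main = lo + hi                           # main-layer feed (unprocessed)
    upper = lo + hi_dec                      # upper-layer feed (decorrelated highs)
    return main, upper
```

Varying `crossover_hz` across octave-band lower limits mirrors the experimental conditions of the study.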


PS1-2: "The Effect of Transitioning Between Individualized and Generic HRTFs on Localization Performance in a Virtual Environment"

Yun-Han Wu, Scott Murakami and Agnieszka Roginska

The main purpose of this paper is to observe and analyze how people's localization performance changes as the HRTF set used transitions from an individualized set to a non-individualized (generic) set. Two common HRTF interpolation techniques, one in the time domain and the other in the frequency domain, are used to create averaged sets of HRTFs which fit between the two extremes and help to examine the trend of localization behavior change through this continuum.
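The two interpolation approaches can be sketched as follows. This is a hypothetical illustration, not the authors' code: the time-domain version blends the impulse responses sample-wise, while the frequency-domain version blends magnitude and unwrapped phase spectra separately. A blend factor of 0 yields the individualized set and 1 the generic set:

```python
import numpy as np

def interp_hrir_time(h_indiv, h_generic, alpha):
    """Time-domain interpolation: sample-wise linear blend of two
    head-related impulse responses (assumes they are time-aligned)."""
    return (1.0 - alpha) * h_indiv + alpha * h_generic

def interp_hrtf_freq(h_indiv, h_generic, alpha):
    """Frequency-domain interpolation: linearly blend magnitude and
    unwrapped phase spectra, then return to the time domain."""
    H1, H2 = np.fft.rfft(h_indiv), np.fft.rfft(h_generic)
    mag = (1 - alpha) * np.abs(H1) + alpha * np.abs(H2)
    ph = (1 - alpha) * np.unwrap(np.angle(H1)) + alpha * np.unwrap(np.angle(H2))
    return np.fft.irfft(mag * np.exp(1j * ph), n=len(h_indiv))
```

Sweeping `alpha` through intermediate values produces the continuum of averaged HRTF sets used to probe localization behavior.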


PS1-3: "Quantifying Factors of Auditory Immersion in Virtual Reality"

Callum Eaton and Hyunkook Lee

In this paper, the topic of auditory immersion and the issues with contradictory definitions of the terms immersion and presence are discussed in relation to content in Virtual Reality (VR). The issues of emotional variance in experimentation on immersion are also reviewed, and potential solutions for experimental methodology are suggested, such as the use of an electroencephalogram (EEG). A survey designed to gather the opinions of audio professionals and consumers on how important perceptual and technical auditory factors are for providing immersion is presented, and the results show clearly that, on average, all factors questioned are perceived to be important for immersion. However, vertical perception of sound was not perceived to be as important as horizontal sound perception.


PS1-4: "Statistical Analysis of Subjective Assessment Tests of 3D Audio on Mobile Phones"

Fesal Toosy and Muhammad Sarwar Ehsan

With the increasing use of mobile phones and other handheld electronic devices for surfing the internet and streaming audio and video clips, it was inevitable that technologies like 3D Audio would eventually be implemented on such devices. It is important to know if 3D Audio offers any improvement in perceived audio quality over existing stereo and mono formats. It is also important to know what kinds of factors have a larger effect on the rated perceived audio quality of such formats. This paper explores the results of an ANOVA analysis of two ITU standardized tests for the subjective assessment of 3D Audio. The results show that 3D Audio performs better than stereo and mono formats in terms of basic audio quality.
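An ANOVA of this kind can be sketched in a few lines of Python. The ratings below are synthetic and purely illustrative (not the paper's data); a one-way ANOVA tests whether mean basic audio quality differs across the three formats:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Synthetic basic-audio-quality ratings on a 0-100 scale for three
# formats; the means are illustrative assumptions, not measured results.
ratings_3d = np.clip(rng.normal(80, 8, 30), 0, 100)
ratings_stereo = np.clip(rng.normal(70, 8, 30), 0, 100)
ratings_mono = np.clip(rng.normal(55, 8, 30), 0, 100)

f_stat, p_value = f_oneway(ratings_3d, ratings_stereo, ratings_mono)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```

A small p-value indicates that at least one format's mean rating differs; post-hoc pairwise comparisons would then locate which formats differ.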


PS1-6: "A Test Database for the Assessment of Immersive Audio Systems"

Harry Ogden, Jess Stubbs and Gavin Kearney

This paper presents a new test library for use in the spatial and timbral evaluation of immersive audio systems. Presented are synthesised test signals; anechoic speech and music recordings; and Ambisonic conversational speech recordings in both anechoic and reverberant environments. The rationale for the included stimuli is described, as well as the synthesis and recording processes involved.

