AES E-Library Search Results

Search Results (Displaying 1-10 of 36 matches)

Multi-User Shared Augmented Audio Spaces Using Motion Capture Systems

This paper describes a method for creating multi-user shared augmented reality audio spaces. Using a system of infrared cameras and motion capture software, it is possible to provide accurate, low-latency head tracking for many users simultaneously and to stream binaural audio representing a realistic, shared virtual environment to each user. Participants can thus occupy and navigate a shared virtual aural space without head-mounted displays, needing only headphones (with passive markers affixed) connected to lightweight in-ear monitor beltpacks. Potential applications include installation work, classroom use, and museum audio tours.
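
The abstract does not include an implementation, but the rendering chain it describes can be sketched: each listener's tracked head yaw is used to recompute a world-fixed source's direction relative to the head before HRTF convolution, so sources stay put in the shared room as listeners turn. A minimal illustrative sketch in Python follows; `hrtf_for_azimuth` is a hypothetical lookup, not anything from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def world_to_head_azimuth(source_az, head_yaw):
    """Azimuth of a world-fixed source relative to the listener's head,
    wrapped to [-pi, pi): turning the head toward the source shrinks the
    relative angle, so the source stays fixed in the shared virtual room."""
    return (source_az - head_yaw + np.pi) % (2 * np.pi) - np.pi

def render_block(mono_block, source_az, head_yaw, hrtf_for_azimuth):
    """Render one audio block binaurally for one tracked listener."""
    rel_az = world_to_head_azimuth(source_az, head_yaw)
    h_left, h_right = hrtf_for_azimuth(rel_az)   # hypothetical HRTF lookup
    left = fftconvolve(mono_block, h_left, mode="same")
    right = fftconvolve(mono_block, h_right, mode="same")
    return np.stack([left, right])               # stream to this user's beltpack
```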

"Augmented Audio": An Overview of the Unique Tools and Features Required for Creating AR Audio Experiences

What a user sees in augmented reality is only part of the experience. To create a truly compelling journey, we must augment what a user hears in reality as well. In this paper we consider “Augmented Audio” to be the sound of AR, and discuss a technique by which the binaural rendering of virtual sounds (“Augmented”) is combined with the manipulation of the real-world sound surrounding a listener (“Reality”). We outline the unique challenges that arise when designing audio experiences for AR, and document the current state-of-the-art for Augmented Audio solutions. Using the Sennheiser AMBEO Smart Headset as a case study, we describe the essential features of an Augmented Audio device and its integration with an AR application.
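
The core signal path described here, binaurally rendered virtual sounds summed with a processed hear-through feed of the real-world sound, can be sketched in a few lines. This is an illustrative mix stage only; the gain terms and block shapes are assumptions, not the AMBEO Smart Headset's actual signal path.

```python
import numpy as np

def augmented_audio_mix(mic_block, virtual_binaural_block,
                        reality_gain=1.0, virtual_gain=1.0):
    """mic_block and virtual_binaural_block are (2, n) stereo float arrays.
    reality_gain = 1 aims at acoustic transparency (hear-through);
    lower values attenuate the real world relative to the virtual layer."""
    out = reality_gain * mic_block + virtual_gain * virtual_binaural_block
    return np.clip(out, -1.0, 1.0)   # keep the headphone feed in range
```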

Spatial Audio Production for 360-Degree Live Music Videos: Multi-Camera Case Studies

This paper discusses the different aspects of mixing for 360-degree multi-camera live music videos. We describe our two spatial audio production workflows, which were developed and fine-tuned through a series of case studies including rock, pop, and orchestral music. The different genres were chosen to test whether the production tools and techniques were equally efficient for mixing different types of music. In our workflows, one of the most important parts of the mixing process is matching the Ambisonics mix with a stereo reference. Among other things, the process includes automation, proximity effects, creating a sense of space, and managing transitions between cameras.
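
One concrete piece of such a workflow is keeping the Ambisonics bed visually anchored when the video cuts between cameras, which amounts to a scene rotation. A minimal sketch for first-order Ambisonics in ACN channel order (W, Y, Z, X), assuming the yaw offset between the two camera orientations is known; this illustrates the operation, not the authors' actual tooling, and real transitions would typically crossfade as well.

```python
import numpy as np

def rotate_foa_yaw(foa_block, yaw):
    """Rotate a first-order Ambisonics scene about the vertical axis.
    foa_block: (4, n) array of W, Y, Z, X signals; yaw in radians."""
    w, y, z, x = foa_block
    c, s = np.cos(yaw), np.sin(yaw)
    x_rot = c * x - s * y       # horizontal components rotate as a 2-D vector
    y_rot = s * x + c * y
    return np.stack([w, y_rot, z, x_rot])   # W and Z are yaw-invariant
```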

Perception of Mismatched Auditory Distance—Cinematic VR

This study examines auditory distance discrimination in cinematic virtual reality, using controlled stimuli with audio-visual distance variations to determine whether mismatched stimuli are detected. It asks whether visual conditions (objects equally or unequally distanced from the user) and environmental conditions (a reverberant space as opposed to a freer field) affect accuracy in discriminating between congruent and incongruent aural and visual cues. A design derived from the Repertory Grid Technique is used, whereby participant-specific constructs are translated into numerical ratings. Discrimination of auditory-event mismatch improved for stimuli with varied visual-event distances, though not for equidistant visual events, which may indicate that visual cues alert users to matches and mismatches.
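
Discrimination data of this kind (mismatch detected or not, across match and mismatch trials) is often summarized as a sensitivity index d'. The sketch below is one common way to compute it and is offered purely as illustration; it is not the paper's Repertory-Grid-based analysis, and the trial counts are invented.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity for detecting an audio-visual mismatch, with a
    standard correction so hit/false-alarm rates of 0 or 1 stay finite."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1.0)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical example: 20 mismatch trials and 20 match trials
print(d_prime(hits=15, misses=5, false_alarms=4, correct_rejections=16))
```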

Reaction Times of Spatially Coherent and Incoherent Signals in a Word Recognition Task

Using conventional sound design, the audio signal in virtual reality applications is often rendered as a static stereophonic signal, accompanied by a visual signal that allows for interactive behavior such as looking around. In the present test, the influence of spatial offset between the audio and visual signals is investigated using reaction time measurements in a word recognition task. The audio-visual offset is introduced by presenting the video at horizontal offset angles within ±21°, accompanied by static central audio. Measurements are compared to reaction times from a test in which audio and visual signals are presented at the same angle. Results show that audio-visual offsets between 10° and 20° cause significant differences in reaction time compared to spatially matched presentation.
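
The comparison implied here, reaction times at a given offset angle versus the spatially matched condition, can be sketched as a simple two-sample test. The arrays below are placeholder values and Welch's t-test is just one reasonable choice; the paper's exact statistical procedure is not reproduced.

```python
import numpy as np
from scipy.stats import ttest_ind

rt_matched = np.array([412, 398, 441, 405, 420], dtype=float)       # ms, placeholder
rt_offset_15deg = np.array([455, 462, 448, 470, 459], dtype=float)  # ms, placeholder

# Welch's t-test: does a 15-degree offset slow word recognition?
t_stat, p_value = ttest_ind(rt_offset_15deg, rt_matched, equal_var=False)
print(f"mean RT difference: {rt_offset_15deg.mean() - rt_matched.mean():.1f} ms, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```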

Toward Objective Measures of Auditory Co-Immersion in Virtual and Augmented Reality

“Co-immersion” refers to the perception of real or virtual objects as contained within or belonging to a shared multisensory scene. Environmental features such as lighting and reverberation contribute to the experience of co-immersion even when awareness of those features is not explicit. Objective measures of co-immersion are needed to validate user experience and accessibility in augmented-reality applications, particularly those that aim for “face-to-face” quality. Here, we describe an approach that combines psychophysical measurement with virtual-reality games to assess users’ sensitivity to room-acoustic differences across concurrent talkers in a simulated complex scene. By eliminating the need for explicit judgments, odd-one-out tasks allow psychophysical thresholds to be measured and compared directly across devices, algorithms, and user populations. Supported by NIH-R41-DC16578.
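
Odd-one-out thresholds like these are commonly estimated with an adaptive staircase. Below is a generic 2-down/1-up sketch (converging near 70.7% correct), not the paper's exact procedure; `present_trial` is a hypothetical callback that runs one odd-one-out trial in the VR game and returns True when the listener picks the odd interval.

```python
def staircase(present_trial, start_level=10.0, step=1.0, n_reversals=8):
    """Estimate a discrimination threshold as the mean level at reversals."""
    level, streak, direction = start_level, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if present_trial(level):              # correct response
            streak += 1
            if streak == 2:                   # 2-down: make the task harder
                streak = 0
                if direction == +1:
                    reversals.append(level)   # direction changed: reversal
                direction = -1
                level = max(level - step, 0.0)
        else:                                 # 1-up: make the task easier
            streak = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals) / len(reversals)
```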

Audio Quality Evaluation in Virtual Reality: Multiple Stimulus Ranking with Behavior Tracking

Virtual reality systems with multimodal stimulation and up to six degrees-of-freedom movement pose novel challenges for audio quality evaluation. This paper adapts classic multiple-stimulus test methodology to virtual reality and adds behavioral tracking functionality. The method is based on ranking by elimination while exploring an audiovisual virtual reality. The proposed evaluation method allows immersion in multimodal virtual scenes while enabling comparative evaluation of multiple binaural renderers. A pilot study is conducted to evaluate the feasibility of the proposed method and to identify challenges in virtual reality audio quality evaluation. Finally, the results are compared with those of a non-immersive off-line evaluation method.
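
Ranking by elimination works as named: the listener repeatedly removes the worst remaining stimulus until only one is left, and the elimination order yields a full ranking. A minimal sketch of that loop, with `pick_worst` standing in for the listener's choice inside the VR scene (a hypothetical callback, not the paper's software):

```python
def rank_by_elimination(stimuli, pick_worst):
    """Return stimuli ranked best-first from successive eliminations."""
    remaining = list(stimuli)
    eliminated = []                    # worst-to-best elimination order
    while len(remaining) > 1:
        worst = pick_worst(remaining)  # listener removes the worst renderer
        remaining.remove(worst)
        eliminated.append(worst)
    eliminated.append(remaining[0])    # the last survivor ranks best
    return eliminated[::-1]            # best first
```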

Evaluation of Binaural Renderers: Multidimensional Sound Quality Assessment

A multi-phase subjective experiment evaluating six commercially available binaural audio renderers was carried out. This paper presents the methodology, evaluation criteria, and main findings of the tests that assessed the perceived sound quality of the renderers. Subjects appraised a number of specific sound quality attributes (timbral balance, clarity, naturalness, spaciousness, and dialogue intelligibility) and ranked the renderers in terms of preference for a set of music and movie stimuli presented over headphones. Results indicated that differences in perceived quality and preference between renderers are discernible. Binaural renderer performance was also found to be highly content-dependent, with significant interactions between renderers and individual stimuli, making it difficult to determine an “optimal” renderer for all settings.
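
One common way to summarize preference rankings collected across subjects and stimuli is the mean rank per renderer. The sketch below shows that aggregation only; it is an assumption for illustration, not necessarily the analysis used in the paper, and the trial data is invented.

```python
from collections import defaultdict

def mean_ranks(rankings):
    """rankings: iterable of best-to-worst renderer lists, one per trial.
    Returns mean rank per renderer (lower = more preferred)."""
    totals, counts = defaultdict(float), defaultdict(int)
    for ordered in rankings:
        for rank, renderer in enumerate(ordered, start=1):
            totals[renderer] += rank
            counts[renderer] += 1
    return {r: totals[r] / counts[r] for r in totals}

trials = [["A", "C", "B"], ["C", "A", "B"], ["A", "B", "C"]]  # invented
print(mean_ranks(trials))
```

Because the paper reports strong renderer-by-stimulus interactions, a single pooled mean rank would hide exactly the content-dependence it found; per-stimulus summaries would be the more faithful view.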

ICHO: Immersive Concert for Homes

The concert hall experience at home has been limited to stereo and 5.1 surround sound reproduction. However, these reproduction systems do not convey the spatial properties of concert hall acoustics in detail, and specifically over headphones the sound tends to be perceived as playing inside the head. The ICHO project introduced in this paper aims to bring an immersive concert hall experience to home listeners. This is realized by using close pick-up of sound sources, spatial room impulse responses, and individualized head-related transfer functions, all combined for spatial sound reproduction over head-tracked headphones. This paper outlines how this goal is to be achieved and how the quality of the reproduction might be evaluated.
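
The chain described above can be sketched as two convolution stages: the closely miked (nearly dry) source is convolved with a spatial room impulse response, and each spatial direction of the result is convolved with the listener's individualized HRTF for the current head orientation. In this minimal sketch, `srir_for_direction` and `hrtf` are hypothetical lookups standing in for the project's actual data.

```python
import numpy as np
from scipy.signal import fftconvolve

def icho_render(dry_source, directions, srir_for_direction, hrtf, head_yaw):
    """Binaural render: close pick-up * spatial RIR * individualized HRTF."""
    out = np.zeros((2, len(dry_source)))
    for az in directions:                        # discrete SRIR directions
        room_part = fftconvolve(dry_source, srir_for_direction(az))
        h_l, h_r = hrtf(az - head_yaw)           # head-tracking compensation
        out[0] += fftconvolve(room_part, h_l)[: out.shape[1]]
        out[1] += fftconvolve(room_part, h_r)[: out.shape[1]]
    return out
```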

Modular Design for Spherical Microphone Arrays

Spherical microphone arrays are commonly utilized for recording, analyzing, and reproducing sound fields. In the context of higher-order Ambisonics, the spatial resolution depends on the number and distribution of sensors over the surface of a sphere. Commercially available arrays have fixed configurations that cannot be changed, which limits their usability for experimental and educational spatial audio applications. Therefore, an open-source modular design using MEMS microphones and 3D printing is proposed for selectively capturing frequency-dependent spatial components of sound fields. Following a modular paradigm, the presented device is low-cost and decomposes the array into smaller units (a matrix, connectors, and microphones), which can be easily rearranged to capture up to third-order spherical harmonic signals with various physical configurations.
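
Capturing spherical harmonic signals from such an array is, at its simplest, a least-squares encoding: sample the real spherical harmonics at the microphone directions and apply the pseudo-inverse to the microphone signals. The sketch below shows that step only, under assumed known microphone angles; real arrays additionally need frequency-dependent radial (e.g., rigid-sphere) equalization per order, which is omitted here.

```python
import numpy as np
from scipy.special import sph_harm

def real_sh(n, m, azimuth, colatitude):
    """Real-valued spherical harmonic built from scipy's complex
    sph_harm (scipy argument order: sph_harm(m, n, azimuth, colatitude))."""
    if m > 0:
        return np.sqrt(2) * (-1) ** m * sph_harm(m, n, azimuth, colatitude).real
    if m < 0:
        return np.sqrt(2) * (-1) ** m * sph_harm(-m, n, azimuth, colatitude).imag
    return sph_harm(0, n, azimuth, colatitude).real

def encoding_matrix(mic_az, mic_col, order=3):
    """Least-squares Ambisonics encoder: ((order+1)^2, n_mics) matrix.
    Third order yields 16 channels, so at least 16 microphones are needed."""
    y = np.array([[real_sh(n, m, az, col)
                   for n in range(order + 1) for m in range(-n, n + 1)]
                  for az, col in zip(mic_az, mic_col)])
    return np.linalg.pinv(y)   # multiply with a vector of mic samples
```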
