
Program: Poster Session 3


2019 AES International Conference on Immersive and Interactive Audio. March 27-29, 2019. York, UK.

  

Poster Session 3 - Creative workflows and sound design

 

PS3-1: "Web-based binaural audio and sonic narratives for cultural heritage"

Marco Comunità, Andrea Gerino, Veranika Lim and Lorenzo Picinali

This paper introduces Plugsonic Soundscape and Plugsonic Sample, two web-based applications for the creation and experience of binaural interactive audio narratives and soundscapes. The apps are being developed as part of the PLUGGY EU project (Pluggable Social Platform for Heritage Awareness and Participation). The apps' audio processing is based on the Web Audio API and the 3D Tune-In Toolkit. Within the paper, we report on the implementation, evaluation, and future developments. We believe that the idea of a web-based application for 3D sonic narratives represents a novel contribution to the cultural heritage, digital storytelling, and 3D audio technology domains.

www.aes.org/e-lib/browse.cfm?elib=20435
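
The paper's audio engine is built on the Web Audio API, so a minimal browser-side sketch of binaural source positioning may help illustrate the approach. Note that the Plugsonic apps use the 3D Tune-In Toolkit for binaural rendering; the stock PannerNode HRTF model below is only a stand-in, and the asset URL is hypothetical.

```typescript
// Minimal sketch of browser-based binaural panning with the stock
// Web Audio API. This only illustrates positioning a source around
// a listener; it is not the Plugsonic implementation.

const ctx = new AudioContext();

async function playBinauralSource(url: string, x: number, y: number, z: number) {
  const response = await fetch(url); // hypothetical asset URL
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;

  // A PannerNode with the 'HRTF' model applies head-related filtering
  // so the source is perceived at (x, y, z) relative to the listener.
  const panner = new PannerNode(ctx, {
    panningModel: "HRTF",
    distanceModel: "inverse",
    positionX: x,
    positionY: y,
    positionZ: z,
  });

  source.connect(panner).connect(ctx.destination);
  source.start();
}

// Example: a narration clip two metres to the listener's left.
playBinauralSource("narration.ogg", -2, 0, 0);
```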

 

PS3-2: "Multichannel Audio Implementation for Virtual Reality"

Sungsoo Kim and Sripathi Sridhar

This paper describes a system that implements audio over a multichannel loudspeaker system for virtual reality (VR) applications. Real-time tracking data, such as the distances between the user and the loudspeakers and the user's head rotation angle, are used to modify the output of a multichannel loudspeaker configuration in terms of panning, delay, and energy compensation, achieving stationary music and a dynamic sweet spot. The system was adapted for a simple first-person shooter VR game, and pilot tests were conducted to assess its impact on the user experience.

www.aes.org/e-lib/browse.cfm?elib=20436
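
As a rough illustration of the delay and level compensation the abstract describes, the sketch below aligns arrival times and levels at a tracked listener position. The inverse-distance gain law and the normalisation to the farthest loudspeaker are assumptions; the paper may weight channels differently.

```typescript
// Per-loudspeaker compensation keeping the sweet spot at a tracked
// listener. Variable names and conventions are illustrative only.

const SPEED_OF_SOUND = 343; // m/s

interface SpeakerFeed {
  delaySeconds: number; // extra delay so all wavefronts arrive together
  gain: number;         // compensates inverse-distance amplitude loss
}

function compensate(distances: number[]): SpeakerFeed[] {
  const dMax = Math.max(...distances);
  return distances.map((d) => ({
    // Nearer speakers are delayed so they align with the farthest one.
    delaySeconds: (dMax - d) / SPEED_OF_SOUND,
    // Nearer speakers are attenuated so all arrive at equal level,
    // normalised to the farthest speaker.
    gain: d / dMax,
  }));
}

// Listener tracked at unequal distances from a quad setup:
console.log(compensate([2.1, 2.4, 3.0, 2.7]));
```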

 

PS3-3: "Evaluation of car cabin acoustics using auralisation over headphones"

Jessica Camilleri, Neofytos Kaplanis and Enzo De Sena

Auralisation schemes in the domain of automotive audio have in the past relied primarily on dummy-head recordings. More recently, spatial reproduction has allowed cabin acoustics to be auralised over large loudspeaker arrays, yet no direct comparisons between these methods exist. This study explores the efficacy of headphone presentation in this context. Six acoustical conditions were presented over headphones to experienced assessors (n = 23), who were asked to compare them on six elicited perceptual attributes. In 24 out of 36 cases, the results indicate agreement between headphone- and loudspeaker-based auralisation of an identical stimulus set. It is concluded that, compared with loudspeaker-based rendering, headphone-based rendering yields similar judgments of timbral attributes, while certain spatial attributes should be assessed with caution.

www.aes.org/e-lib/browse.cfm?elib=20437

 

PS3-4: "Towards a Virtual Audiovisual Environment for Interactive 3D Audio Productions"

Robert Hupke, James Ordner, Jakob Bergner, Marcel Nophut, Stephan Preihs and Juergen Peissig

The emerging production of 3D audio content brings new challenges in optimising the production process for VR/AR applications as well as for 3D music and film productions. This contribution presents a first approach to a virtual audiovisual environment for 3D audio production, based on a real listening room with different loudspeaker arrangements. Wearing a head-mounted display, the producer can move sound sources around the listening room with hand gestures to create an immersive audio experience. To verify future listening experiments conducted in the implemented virtual environment, a semi-automatic measurement setup is presented that guarantees a controllable listening environment in terms of reverberation time, background noise, delay compensation, and room response equalisation.

www.aes.org/e-lib/browse.cfm?elib=20438
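
The abstract lists reverberation time among the quantities the measurement setup must control but gives no implementation detail. One standard estimate from a measured room impulse response is Schroeder backward integration; the sketch below assumes a noise-truncated impulse response and fits the -5 dB to -25 dB decay (a T20 estimate extrapolated to 60 dB).

```typescript
// Schroeder backward integration: a common (assumed, not the paper's)
// way to estimate reverberation time from a room impulse response.

function estimateRT60(ir: number[], sampleRate: number): number {
  // Backward-integrated squared IR (Schroeder decay curve), in dB.
  const energy: number[] = new Array(ir.length);
  let acc = 0;
  for (let i = ir.length - 1; i >= 0; i--) {
    acc += ir[i] * ir[i];
    energy[i] = acc;
  }
  const db = energy.map((e) => 10 * Math.log10(e / energy[0]));

  // Least-squares line fit over the -5 dB..-25 dB part of the decay.
  const idx = db.map((_, i) => i).filter((i) => db[i] <= -5 && db[i] >= -25);
  const n = idx.length;
  const mx = idx.reduce((s, i) => s + i, 0) / n;
  const my = idx.reduce((s, i) => s + db[i], 0) / n;
  const slope =
    idx.reduce((s, i) => s + (i - mx) * (db[i] - my), 0) /
    idx.reduce((s, i) => s + (i - mx) ** 2, 0);

  // Slope is dB per sample; extrapolate to a 60 dB decay, in seconds.
  return (-60 / slope) / sampleRate;
}
```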

 

PS3-6: "AI and Automatic Music Generation for Mindfulness"

Duncan Williams, Victoria Hodge, Lina Gega, Damian Murphy, Peter Cowling and Anders Drachen

This paper presents an architecture for the creation of emotionally congruent music using machine-learning-aided sound synthesis. We analyse participants' galvanic skin responses (GSR) while they listen to AI-generated music pieces and evaluate the emotions they describe in a questionnaire conducted after listening. These analyses reveal a direct correlation between the calmness/scariness of a musical piece, the users' GSR readings, and the emotions they describe feeling. From these results, we will be able to estimate an emotional state using biofeedback as a control signal for a machine-learning algorithm, which generates new musical structures according to a perceptually informed musical-feature similarity model. Our case study suggests various applications, including gaming, automated soundtrack generation, and mindfulness.

www.aes.org/e-lib/browse.cfm?elib=20439 
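
The reported link between GSR and described emotion is a correlation; as a minimal illustration of that style of analysis, the sketch below computes a Pearson coefficient between hypothetical per-piece GSR means and calmness ratings. The data and layout are invented for illustration; the paper's actual analysis pipeline is not described here.

```typescript
// Pearson correlation between two equal-length series; the inputs
// below are hypothetical, not data from the paper.

function pearson(x: number[], y: number[]): number {
  const n = x.length;
  const mx = x.reduce((a, b) => a + b, 0) / n;
  const my = y.reduce((a, b) => a + b, 0) / n;
  let sxy = 0, sxx = 0, syy = 0;
  for (let i = 0; i < n; i++) {
    sxy += (x[i] - mx) * (y[i] - my);
    sxx += (x[i] - mx) ** 2;
    syy += (y[i] - my) ** 2;
  }
  return sxy / Math.sqrt(sxx * syy);
}

// Hypothetical per-piece means: lower GSR alongside higher calmness.
const meanGSR = [0.82, 0.61, 0.45, 0.91, 0.38];   // microsiemens
const calmnessRating = [2.1, 3.4, 4.2, 1.8, 4.6]; // 1-5 scale
console.log(pearson(meanGSR, calmnessRating));    // strongly negative
```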

 

PS3-7: "Multimodality and Audiovisual perception: a case study involving Spatial Audio, Wave Field Synthesis and Dance Choreography"

Tommaso Perego

Spatial audio perception and envelopment are key areas of investigation for understanding audience attention and engagement when works are experienced through loudspeaker sound-diffusion systems. This paper reports the findings of a practice-based interdisciplinary research study on the perception of movement through sound: a joint choreography of sound and body movement, designed and performed on the 192-loudspeaker Wave Field Synthesis system of The Game of Life Foundation (NL). The ideas and examples discussed focus on multimodality and audiovisual perception, on the modalities involved in movement perception, and on how these could be integrated into a spatial audio composition for dance, informing the study of auditory engagement and attention from the perspectives of a composer and choreographer.

www.aes.org/e-lib/browse.cfm?elib=20440
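
For readers unfamiliar with how a Wave Field Synthesis array renders a moving virtual source, the sketch below shows the simplest delay-and-weight view of the idea. It omits the spectral pre-filter and tapering of a full WFS driving function, and the geometry is an assumption rather than a description of the Game of Life system.

```typescript
// Much-simplified WFS rendering of a virtual point source: each
// loudspeaker plays an individually delayed and weighted copy of
// the signal. Illustrative only; not the system's driving function.

const C = 343; // speed of sound, m/s

interface Drive { delaySeconds: number; gain: number }

function wfsDrive(
  source: [number, number],    // virtual source position (m)
  speakers: [number, number][] // loudspeaker positions (m)
): Drive[] {
  return speakers.map(([sx, sy]) => {
    const r = Math.hypot(sx - source[0], sy - source[1]);
    return {
      delaySeconds: r / C,    // propagation delay from virtual source
      gain: 1 / Math.sqrt(r), // amplitude decay toward a line array
    };
  });
}

// Re-computing delays and gains as the source position is animated
// is what lets a choreography sweep a sound through the room.
```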

 

PS3-8: "Applied Multichannel Recording of a Contemporary Symphony Orchestra for Virtual Reality"

Luke Reed, Alexandre Hurr and Mathew Knight

Capturing musical performances for virtual reality (VR) is of growing interest to engineers, cultural organisations, and the public. Ambisonic workflows, combined with binauralisation through head-related transfer functions, enable the perception and localisation of sound sources in three-dimensional space, crucially including height. While there are many excellent examples of orchestral recordings in VR, few make use of height perception, favouring 'on stage' horizontal positioning instead. This engineering brief presents a contemporary symphony orchestra performance captured and produced in second-order ambisonics, in which 51 performers were individually split and positioned across five levels of the performance space. The case study critically discusses the methods employed, addressing the workflow through pre-production, capture, and post-production.

www.aes.org/e-lib/browse.cfm?elib=20441
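
Second-order ambisonics carries a source's direction, including elevation, in nine channels. As a sketch of how one individually positioned performer could be encoded, the function below computes the standard AmbiX (ACN ordering, SN3D normalisation) encoding gains; the production toolchain actually used in the brief is not specified.

```typescript
// Second-order AmbiX encoding gains for a source at a given
// azimuth/elevation (radians, azimuth positive to the left).

function encodeSecondOrder(azimuthRad: number, elevationRad: number): number[] {
  const ca = Math.cos(azimuthRad), sa = Math.sin(azimuthRad);
  const se = Math.sin(elevationRad), ce = Math.cos(elevationRad);
  const s3 = Math.sqrt(3) / 2;
  return [
    1,                                       // ACN 0: W
    sa * ce,                                 // ACN 1: Y
    se,                                      // ACN 2: Z (carries height)
    ca * ce,                                 // ACN 3: X
    s3 * Math.sin(2 * azimuthRad) * ce * ce, // ACN 4: V
    s3 * sa * Math.sin(2 * elevationRad),    // ACN 5: T
    (3 * se * se - 1) / 2,                   // ACN 6: R
    s3 * ca * Math.sin(2 * elevationRad),    // ACN 7: S
    s3 * Math.cos(2 * azimuthRad) * ce * ce, // ACN 8: U
  ];
}

// A performer on an upper level, 45 degrees left and 30 degrees up:
console.log(encodeSecondOrder(Math.PI / 4, Math.PI / 6));
```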

 

PS3-9: "The physical evaluation of the efficiency of an enhanced pressure-matching beamforming method using eigen decomposition pseudoinverse mathematical approach"

Tahereh Afghah, Elliot Patros and Miller Puckette

Ever-improving immersive sound reproduction techniques have created an entire branch of novel signal-processing techniques and 3D audio playback systems. The Pressure Matching Method (PMM) was developed to efficiently recreate 3D sound with a loudspeaker array, and a new mathematical approach that enhances its effectiveness was presented in [1]. In this paper, the results of physical measurements conducted to assess that method are presented. The outcome shows that, compared with traditional PMM with Tikhonov regularisation, the eigendecomposition pseudoinverse technique improves PMM efficiency by producing greater segregation of the sound received at the listener's ears and a more immersive sound field around the listener's head.

www.aes.org/e-lib/browse.cfm?elib=20442
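
The contrast the paper measures can be seen in how each method inverts the eigen-modes of the plant matrix: classical PMM computes w = (G^H G + beta I)^-1 G^H p, biasing every eigenvalue by the Tikhonov term beta, while an eigendecomposition pseudoinverse inverts only the modes whose eigenvalues exceed a tolerance. The sketch below contrasts the two inverse gains on a hypothetical eigenvalue spectrum; the details of the method in [1] may differ, and computing the eigendecomposition itself is left to a linear-algebra library.

```typescript
// Per-mode inverse gains applied to the eigenvalues of G^H G under
// the two regularisation strategies the paper compares. The spectrum
// below is hypothetical, not measured data from the paper.

function tikhonovGains(eigenvalues: number[], beta: number): number[] {
  // Every mode is inverted but biased by beta: strong modes are barely
  // affected, weak (ill-conditioned) modes are smoothly attenuated.
  return eigenvalues.map((l) => 1 / (l + beta));
}

function truncatedPinvGains(eigenvalues: number[], tol: number): number[] {
  // Modes above tolerance are inverted exactly; the rest are dropped,
  // so well-conditioned modes carry no regularisation bias.
  return eigenvalues.map((l) => (l > tol ? 1 / l : 0));
}

// Hypothetical eigenvalue spectrum of G^H G for a small speaker array:
const spectrum = [4.1, 1.3, 0.2, 1e-4];
console.log(tikhonovGains(spectrum, 1e-2));
console.log(truncatedPinvGains(spectrum, 1e-3));
```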

 
