144th AES CONVENTION Paper Session P25: Audio Applications

AES Milan 2018


Saturday, May 26, 13:00 — 15:00 (Scala 4)

Chair:
Annika Neidhardt, Technische Universität Ilmenau - Ilmenau, Germany

P25-1 Films Unseen: Approaching Audio Description Alternatives to Enhance Perception, Immersion, and Imagery of Audio-Visual Mediums for Blind and Partially Sighted Audiences: Science Fiction
Cesar Portillo, SAE Institute London - London, UK
“Films Unseen” is a research project analyzing the nature of audio description and the soundtrack features of science fiction films and related content. The paper explores the distinctive immersive, sound-spatialization, and sound-design features that could allow blind and partially sighted audiences to perceive and accurately interpret the visual elements of visually complex audio-visual media, using the film How to Be Human (Centofanti, 2017) as a case study. Results collected from 15 experienced audio description users demonstrated the effectiveness of sound effects, immersive audio, and binaural recording techniques in stimulating the perception of visual performances through the auditory senses, evoking a more meaningful and understandable experience for visually impaired audiences when combined with experimental approaches to sound design and audio description.
Convention Paper 10024

P25-2 "It's about Time!" A Study on the Perceived Effects of Manipulating Time in Overdub Recordings
Tore Teigland, Westerdals University College - Oslo, Norway; Pål Erik Jensen, Westerdals Oslo ACT, University College - Oslo, Norway; Claus Sohn Andersen, Westerdals Oslo School of Arts - Oslo, Norway; Norwegian University of Science and Technology
In this study we made three separate recordings using close, near, and room microphones. These recordings then served as the material for a listening test designed to study a range of perceived effects of manipulating time in overdub recordings. While the use of time alignment to reduce comb filtering has been widely studied, little work has investigated its other perceived effects. Time alignment has become increasingly common, but as this paper concludes, it should not be applied without consideration. The findings shed light on a range of important factors affected by manipulating time between microphones in overdub recordings, and indicate which of the investigated techniques are normally preferred, and under what circumstances.
Convention Paper 10025
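The abstract above refers to time alignment between close and room microphones in overdub recordings. As a generic illustration (not the authors' test setup), a common way to estimate and correct the inter-microphone delay is cross-correlation; the sketch below assumes NumPy, and the signal names and sample rate are hypothetical:

```python
import numpy as np

def estimate_lag(reference, signal):
    """Estimate how many samples `signal` lags `reference` via cross-correlation."""
    corr = np.correlate(signal, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

def time_align(reference, signal):
    """Shift `signal` so its arrival lines up with `reference` (zero-padded)."""
    lag = estimate_lag(reference, signal)
    if lag > 0:   # signal arrives late: drop its leading samples
        return np.concatenate((signal[lag:], np.zeros(lag)))
    if lag < 0:   # signal arrives early: pad its start
        return np.concatenate((np.zeros(-lag), signal[:lag]))
    return signal.copy()

# Synthetic example: a "room" mic signal that is a delayed, attenuated
# copy of the "close" mic signal (96 samples ~ 2 ms at 48 kHz).
rng = np.random.default_rng(1)
close = rng.standard_normal(2000)
room = np.concatenate((np.zeros(96), 0.5 * close[:-96]))
lag = estimate_lag(close, room)
aligned = time_align(close, room)
```

Note that removing the delay this way also removes the natural time-of-arrival cue of the room microphone, which is precisely the kind of perceptual trade-off the paper examines.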

P25-3 Musicians’ Binaural Headphone Monitoring for Studio Recording
Valentin Bauer, Paris Conservatoire (CNSMDP) - Paris, France; Hervé Déjardin, Radio France - Paris, France; Amandine Pras, University of Lethbridge - Lethbridge, Alberta, Canada
This study uses binaural technology for headphone monitoring in world music, jazz, and free improvisation recording sessions. We first conducted an online survey with 12 musicians to identify the challenges they face when performing in the studio with wearable monitoring devices. Then, to investigate musicians’ perceived differences between binaural and stereo monitoring, we carried out three comparative tests followed by semi-directed focus groups. The survey analysis highlighted the main challenges of coping with an unusual performance situation and a perceived lack of realism and sound quality in the auditory scene. Tests showed that binaural monitoring improved the perceived sound quality and realism as well as musicians’ comfort and pleasure, and encouraged better musical performances and more creativity in the studio.
Convention Paper 10026

P25-4 Estimation of Object-Based Reverberation Using an Ad-Hoc Microphone Arrangement for Live Performance
Luca Remaggi, University of Surrey - Guildford, Surrey, UK; Philip Jackson, University of Surrey - Guildford, Surrey, UK; Philip Coleman, University of Surrey - Guildford, Surrey, UK; Tom Parnell, BBC Research & Development - Salford, UK
We present a novel pipeline to estimate reverberant spatial audio object (RSAO) parameters given room impulse responses (RIRs) recorded by ad-hoc microphone arrangements. The proposed pipeline performs three tasks: direct-to-reverberant-ratio (DRR) estimation; microphone localization; RSAO parametrization. RIRs recorded at Bridgewater Hall by microphones arranged for a BBC Philharmonic Orchestra performance were parametrized. Objective measures of the rendered RSAO reverberation characteristics were evaluated and compared with reverberation recorded by a Soundfield microphone. Alongside informal listening tests, the results confirmed that the rendered RSAO gave a plausible reproduction of the hall, comparable to the measured response. The objectification of the reverb from in-situ RIR measurements unlocks customization and personalization of the experience for different audio systems, user preferences, and playback environments.
Convention Paper 10028
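The first stage of the pipeline described above is direct-to-reverberant-ratio (DRR) estimation from a room impulse response. The paper does not specify its estimator; as a minimal sketch of the standard definition, the snippet below (assuming NumPy; the window length and synthetic RIR are illustrative assumptions) treats the energy in a short window around the strongest peak as direct sound and everything after it as reverberation:

```python
import numpy as np

def estimate_drr(rir, fs, direct_window_ms=2.5):
    """Estimate the direct-to-reverberant ratio (dB) of a room impulse response.

    Direct energy: samples within +/- direct_window_ms of the strongest peak.
    Reverberant energy: all samples after that window.
    """
    rir = np.asarray(rir, dtype=float)
    peak = int(np.argmax(np.abs(rir)))
    half = int(direct_window_ms * 1e-3 * fs)
    start = max(0, peak - half)
    end = min(len(rir), peak + half + 1)
    direct_energy = np.sum(rir[start:end] ** 2)
    reverb_energy = np.sum(rir[end:] ** 2)
    if reverb_energy == 0:
        return float("inf")
    return 10.0 * np.log10(direct_energy / reverb_energy)

# Synthetic example: a unit direct impulse followed by an exponentially
# decaying noise tail, a crude stand-in for a measured concert-hall RIR.
fs = 48000
rng = np.random.default_rng(0)
tail = rng.standard_normal(fs) * np.exp(-np.arange(fs) / (0.3 * fs)) * 0.05
rir = np.concatenate(([1.0], tail))
drr_db = estimate_drr(rir, fs)
```

A drier response (weaker tail) yields a higher DRR, which is the cue such a pipeline can use to characterize each microphone position in the hall.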

