AES Dublin 2019
Engineering Brief EB02: E-Brief Poster Session 1


Thursday, March 21, 10:45 — 12:45 (The Liffey B)

EB02-1 Automatic Mixing Level Balancing Enhanced through Source Interference Identification
Dave Moffat, Queen Mary University of London - London, UK; Mark Sandler, Queen Mary University of London - London, UK
It has been well established that equal loudness normalization can produce a perceptually appropriate level balance in an automated mix. Previous work assumes that each captured track represents an individual sound source; in the context of a live drum recording this assumption is incorrect. This paper demonstrates an approach that identifies the source interference and adjusts the source gains accordingly, to ensure that all tracks are set to equal perceptual loudness. The impact of this interference on the selected gain parameters and the resultant mixture is highlighted.
Engineering Brief 497
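For orientation, the sketch below shows the equal-loudness baseline that the brief builds on: each track is measured with a BS.1770 integrated-loudness meter and gained to a common target. It assumes the pyloudnorm and soundfile packages and hypothetical stem file names, and it does not reproduce the interference-identification step that is the subject of the brief.

    import soundfile as sf
    import pyloudnorm as pyln

    TRACKS = ["kick.wav", "snare.wav", "overheads.wav"]   # hypothetical multitrack stems
    TARGET_LUFS = -23.0                                   # arbitrary common reference level

    gains_db = {}
    for path in TRACKS:
        data, rate = sf.read(path)
        meter = pyln.Meter(rate)                    # BS.1770-4 K-weighted loudness meter
        loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS
        gains_db[path] = TARGET_LUFS - loudness     # gain that brings the track to the target

    for path, gain in gains_db.items():
        print(f"{path}: apply {gain:+.1f} dB")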

EB02-2 Binaural Rendering of Phantom Image Elevation Using VHAP
Hyunkook Lee, University of Huddersfield - Huddersfield, UK; Maksims Mironovs, University of Huddersfield - Huddersfield, West Yorkshire, UK; Dale Johnson, University of Huddersfield - Huddersfield, UK
VHAP (virtual hemispherical amplitude panning) is a method developed to create an elevated phantom source on a virtual upper hemisphere using only four ear-height loudspeakers. This engineering brief introduces a new VST plug-in for VHAP and evaluates the performance of its binaural rendering with a simple but effective distance control method. Listening test results indicate that the binaural mode achieves externalization of elevated phantom images with varying degrees of perceived distance. VHAP is considered a cost-efficient and effective method for 3D panning in virtual reality applications as well as in horizontal loudspeaker reproduction. The plug-in is available for free download in the Resources section at www.hud.ac.uk/apl.
Engineering Brief 498
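As a rough illustration of binaural rendering of loudspeaker feeds in general (not the VHAP plug-in's implementation), the sketch below convolves each of four assumed ear-height loudspeaker feeds with a per-ear head-related impulse response and sums the results. File names, loudspeaker angles, and HRIR format are placeholders; mono files with a shared sample rate are assumed.

    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    # Hypothetical loudspeaker feeds and per-ear HRIRs (angles assumed for the example).
    SPEAKERS = {
        "feed_L30.wav":  ("hrir_L30_left.wav",  "hrir_L30_right.wav"),
        "feed_R30.wav":  ("hrir_R30_left.wav",  "hrir_R30_right.wav"),
        "feed_L110.wav": ("hrir_L110_left.wav", "hrir_L110_right.wav"),
        "feed_R110.wav": ("hrir_R110_left.wav", "hrir_R110_right.wav"),
    }

    left_parts, right_parts = [], []
    for feed_path, (hl_path, hr_path) in SPEAKERS.items():
        feed, rate = sf.read(feed_path)
        hl, _ = sf.read(hl_path)
        hr, _ = sf.read(hr_path)
        left_parts.append(fftconvolve(feed, hl))    # this loudspeaker as heard by the left ear
        right_parts.append(fftconvolve(feed, hr))   # this loudspeaker as heard by the right ear

    n = min(len(x) for x in left_parts + right_parts)
    left = sum(x[:n] for x in left_parts)
    right = sum(x[:n] for x in right_parts)
    sf.write("binaural_out.wav", np.stack([left, right], axis=1), rate)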

EB02-3 A Web-Based Tool for Microphone Array Design and Phantom Image Prediction Using the Web Audio API
Nikita Goddard, University of Huddersfield - Huddersfield, UK; Hyunkook Lee, University of Huddersfield - Huddersfield, UK
A web-based interactive tool that facilitates microphone array design and phantom image prediction is presented in this brief. Originally a mobile app, this web version of MARRS (Microphone Array Recording and Reproduction Simulator) provides greater accessibility through most web browsers and further functionality for establishing the optimal microphone array for a desired spatial scene. In addition to its novel psychoacoustic algorithm based on interchannel time-level trade-offs for arbitrary loudspeaker angles, a second main feature demonstrates the phantom image scene through virtual loudspeaker rendering and room simulation via the Web Audio API. The current version of the MARRS web app is available through the Resources section of the APL website: http://www.hud.ac.uk/apl
Engineering Brief 499
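The sketch below illustrates only the general idea of an interchannel time-level trade-off prediction, using a simple linear trading model. The trading constants and the mapping onto a loudspeaker base angle are coarse illustrative assumptions, not the psychoacoustic data or algorithm implemented in MARRS.

    # Illustrative constants: assumed level and time differences that would each, on
    # their own, shift the phantom image fully to one loudspeaker.
    ICLD_FULL_DB = 12.0
    ICTD_FULL_MS = 1.0

    def predict_image_shift(icld_db: float, ictd_ms: float) -> float:
        """Predicted image position as a fraction of the way from the centre (0.0)
        to the louder/earlier loudspeaker (1.0), assuming linear additive trading."""
        shift = icld_db / ICLD_FULL_DB + ictd_ms / ICTD_FULL_MS
        return max(-1.0, min(1.0, shift))

    # Example: a near-coincident pair producing 4 dB and 0.3 ms towards the left channel,
    # mapped onto a +/-30 degree loudspeaker base purely for illustration.
    fraction = predict_image_shift(icld_db=4.0, ictd_ms=0.3)
    angle = fraction * 30.0
    print(f"Predicted image at roughly {angle:.0f} degrees towards the left loudspeaker")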

EB02-4 CityTones: A Repository of Crowdsourced Annotated Soundfield Soundscapes
Agnieszka Roginska, New York University - New York, NY, USA; Hyunkook Lee, University of Huddersfield - Huddersfield, UK; Ana Elisa Mendez Mendez, New York University - New York, NY, USA; Scott Murakami, New York University - New York, NY, USA; Andrea Genovese, New York University - New York, NY, USA
Immersive environmental soundscape capture and annotation is a growing area of audio engineering and research, with applications in the reproduction of immersive sound experiences in AR and VR, sound classification, and environmental sound archiving. This engineering brief introduces CityTones, a crowdsourced repository of soundscapes captured with immersive recording methods, to which the audio community can contribute. The database will include descriptors covering the technical details of the recording, physical information, subjective quality attributes, and sound content information.
Engineering Brief 500
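To make the descriptor categories concrete, the sketch below shows one possible shape for a soundscape metadata record covering technical, physical, subjective, and content fields. The field names and ratings are hypothetical and are not the CityTones schema.

    from dataclasses import dataclass, field

    @dataclass
    class SoundscapeRecord:
        # technical details of the recording
        capture_format: str            # e.g. "first-order Ambisonics (AmbiX)"
        microphone: str
        sample_rate_hz: int
        # physical information
        location: str
        coordinates: tuple             # (latitude, longitude)
        datetime_utc: str
        # subjective quality attributes (illustrative 1-5 ratings)
        pleasantness: int
        eventfulness: int
        # sound content information
        sources: list = field(default_factory=list)

    record = SoundscapeRecord(
        capture_format="first-order Ambisonics (AmbiX)",
        microphone="tetrahedral array",
        sample_rate_hz=48000,
        location="Dublin, Ireland",
        coordinates=(53.3498, -6.2603),
        datetime_utc="2019-03-21T11:00:00Z",
        pleasantness=4,
        eventfulness=3,
        sources=["traffic", "gulls", "voices"],
    )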

EB02-5 Recovering Sound Produced by Wind Turbine Structures Employing Video Motion Magnification
Sebastian Cygert, Gdansk University of Technology - Gdansk, Poland; Andrzej Czyzewski, Gdansk University of Technology - Gdansk, Poland; Marta Stefaniak, Gdansk University of Technology - Gdansk, Poland; Bozena Kostek, Gdansk University of Technology, Audio Acoustics Lab. - Gdansk, Poland
Motion-magnified video recordings of wind turbines on a wind farm were made for the purpose of building a damage prediction system. The recordings were made with a fast video camera and a microphone; the fast camera allowed observation of micro-vibrations of the turbine structure. The idea was to use video to recover sound and vibrations in order to obtain a contactless diagnostic method for wind turbines. The recovered signals can be analyzed in a way similar to accelerometer signals, employing spectral analysis. They can also be played back through headphones and compared with the sounds recorded by the microphones.
Engineering Brief 501
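The sketch below illustrates only the comparison step: spectral analysis of a signal recovered from motion-magnified video against a reference microphone recording, here via Welch spectra. The video-to-signal recovery itself is not reproduced, the file names are placeholders, and mono signals are assumed.

    import numpy as np
    import soundfile as sf
    from scipy.signal import welch

    recovered, fs_rec = sf.read("recovered_from_video.wav")   # hypothetical recovered signal
    reference, fs_ref = sf.read("microphone_reference.wav")   # hypothetical microphone take

    f_rec, p_rec = welch(recovered, fs=fs_rec, nperseg=4096)
    f_ref, p_ref = welch(reference, fs=fs_ref, nperseg=4096)

    # Report the strongest component in each spectrum, where structural resonances would sit.
    for name, f, p in (("video-recovered", f_rec, p_rec), ("microphone", f_ref, p_ref)):
        peak = f[np.argmax(p)]
        print(f"{name}: dominant component near {peak:.1f} Hz")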

EB02-6 Modelling the Effects of Spectator Distribution and Capacity on Speech Intelligibility in a Typical Soccer Stadium
Ross Hammond, University of Derby - Derby, Derbyshire, UK; Peter Mapp Associates - Colchester, UK; Peter Mapp, Peter Mapp Associates - Colchester, Essex, UK; Adam J. Hill, University of Derby - Derby, Derbyshire, UK
Public address system performance is frequently simulated using acoustic computer models to assess coverage and predict potential intelligibility. When the typical 0.5 speech transmission index (STI) criterion cannot be achieved in voice alarm systems under unoccupied conditions, justification must be provided for contractual obligations to be met. An expected increase in STI with occupancy can be used as an explanation, though the associated increase in noise levels must also be considered. This work demonstrates typical changes in STI for different spectator distributions in a calibrated stadium computer model. The effects of ambient noise are also considered. The results can be used to approximate the expected changes in STI caused by different spectator occupation rates.
Engineering Brief 502
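As a simplified illustration of how occupancy-driven changes in reverberation time and ambient noise move the STI, the sketch below uses the analytic modulation transfer function for an ideal exponential decay plus a noise term. It is a textbook-style single-band approximation, not the full IEC 60268-16 procedure or the calibrated stadium model used in this work, and the input values are hypothetical.

    import numpy as np

    # Standard STI modulation frequencies (Hz).
    MOD_FREQS = np.array([0.63, 0.8, 1.0, 1.25, 1.6, 2.0, 2.5,
                          3.15, 4.0, 5.0, 6.3, 8.0, 10.0, 12.5])

    def sti_estimate(rt60_s: float, snr_db: float) -> float:
        # Modulation reduction from an ideal exponential decay, then from ambient noise.
        m = 1.0 / np.sqrt(1.0 + (2 * np.pi * MOD_FREQS * rt60_s / 13.8) ** 2)
        m *= 1.0 / (1.0 + 10 ** (-snr_db / 10.0))
        # Apparent SNR per modulation frequency, clipped to +/-15 dB and mapped to 0..1.
        snr_app = np.clip(10 * np.log10(m / (1 - m)), -15, 15)
        return float((snr_app.mean() + 15) / 30)

    # Hypothetical inputs: occupancy shortens the reverberation time but raises crowd noise.
    print(f"Unoccupied: STI ~ {sti_estimate(rt60_s=2.4, snr_db=15):.2f}")
    print(f"Occupied:   STI ~ {sti_estimate(rt60_s=1.6, snr_db=10):.2f}")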

EB02-7 Influence of the Delay in Monitor System on the Motor Coordination of Musicians while Performing
Szymon Zaporowski, Gdansk University of Technology - Gdansk, Poland; Maciej Blaszke, Gdansk University of Technology - Gdansk, Poland; Dawid Weber, Gdansk University of Technology - Gdansk, Poland; Marta Stefaniak, Gdansk University of Technology - Gdansk, Poland
This paper describes measurements of the maximum delay that a musician can tolerate while playing an instrument without de-synchronization or discomfort. First, the measurement methodology, which combines audio recording and a fast camera, is described. Then, the procedure for determining the maximum delay that still allows comfortable playing is presented. Musicians' responses while playing an instrument along with a delayed signal reproduced from the monitor system are shown. Finally, the highest tolerated delays for musicians playing different instruments are presented, along with a detailed discussion of the methodology used.
Engineering Brief 503
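One simple way to quantify such an offset in recorded material is cross-correlation between the delayed monitor feed and the musician's performance, as sketched below. This is an illustration only, not the camera-based methodology used in the brief; the file names are placeholders and mono recordings with a shared sample rate are assumed.

    import numpy as np
    import soundfile as sf
    from scipy.signal import correlate, correlation_lags

    monitor, fs = sf.read("monitor_feed.wav")        # hypothetical delayed monitor signal
    performance, _ = sf.read("instrument_take.wav")  # hypothetical recording of the musician

    xcorr = correlate(performance, monitor, mode="full")
    lags = correlation_lags(len(performance), len(monitor), mode="full")
    offset_ms = 1000.0 * lags[np.argmax(np.abs(xcorr))] / fs
    print(f"Estimated offset between monitor feed and performance: {offset_ms:.1f} ms")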

