AES New York 2017
Engineering Brief EB07
EB07 - Posters—Part 3
Saturday, October 21, 2:00 pm — 3:30 pm (Poster Area)
EB07-1 Cycle-Frequency Wavelet Analysis of Electro-Acoustic Systems—Daniele Ponteggia, Audiomatica Srl - Firenze (FI), Italy
A joint time-frequency analysis of the response of electro-acoustic systems has been sought since the advent of PC-based measurement systems. While several tools are available to inspect the time-frequency response, inspecting resonant phenomena always runs into time-frequency resolution limits. With a rather simple variable substitution in the wavelet analysis it is possible to switch one of the analysis axes from time to cycles. With this new cycle-frequency distribution, resonances and decays can be analyzed very easily. A PC-based measurement tool capable of cycle-frequency analysis will be introduced.
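The motivation for the time-to-cycles substitution can be sketched numerically. The snippet below is an illustration (the Q value and frequencies are arbitrary, not taken from the brief): a resonance of quality factor Q decays over a frequency-dependent time span, but after substituting n = f·t (elapsed cycles) every constant-Q decay collapses onto the same curve.

```python
import numpy as np

# A resonance with quality factor Q decays as exp(-pi * f * t / Q): its decay
# TIME depends on frequency, so constant-Q resonances smear differently
# across a conventional time-frequency plot.
Q = 20.0
envelope = lambda f, t: np.exp(-np.pi * f * t / Q)

# Substituting n = f * t (elapsed cycles) removes the frequency dependence:
# every constant-Q resonance decays as exp(-pi * n / Q) on the cycle axis.
n = np.linspace(0.0, 40.0, 200)          # cycle axis
e_500 = envelope(500.0, n / 500.0)       # 500 Hz resonance, resampled on cycles
e_4k = envelope(4000.0, n / 4000.0)      # 4 kHz resonance, resampled on cycles
print(float(np.max(np.abs(e_500 - e_4k))))   # ~0: identical decay in cycles
```

On the cycle axis, resonances with the same Q look the same regardless of center frequency, which is what makes decays easy to compare in a cycle-frequency distribution.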
Engineering Brief 388 (Download now)
EB07-2 Sharper Spectrograms with Fast Local Sharpening—Robin Lobel, Divide Frame - Paris, France
Spectrograms have to make compromises between time and frequency resolution because of the limitations of the short-time Fourier transform (Gabor, 1946); wavelets have the same issue. As a result, spectrograms often appear blurry in time, in frequency, or both. A method called reassignment was introduced in 1978 (Kodera et al.) to make spectrograms look sharper. Unfortunately, it also adds visual noise, and its algorithm is not suitable for real-time scenarios. Fast Local Sharpening is a new method that attempts to overcome both of these drawbacks.
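The underlying time-frequency trade-off can be demonstrated directly (this sketch shows only the STFT limitation the brief starts from, not the reassignment or Fast Local Sharpening algorithms; window lengths and the test tone are arbitrary choices):

```python
import numpy as np

fs = 8000
x = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)   # 1 s of a pure 1 kHz tone

def frame_blur(win_len):
    """Return (time_blur_s, freq_blur_hz) for one Hann-windowed STFT frame:
    the frame duration, and the width of the spectral region within 20 dB
    of the peak."""
    w = np.hanning(win_len)
    X = np.abs(np.fft.rfft(x[:win_len] * w))
    freqs = np.fft.rfftfreq(win_len, 1.0 / fs)
    above = freqs[X >= X.max() / 10]
    return win_len / fs, float(above[-1] - above[0])

short = frame_blur(256)    # sharp in time, blurred in frequency
long_ = frame_blur(2048)   # blurred in time, sharp in frequency
print(short, long_)
```

Lengthening the window shrinks the frequency blur exactly in proportion to the growth of the time blur; no window choice improves both, which is why post-processing methods such as reassignment or Fast Local Sharpening are needed to make spectrograms look sharper.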
Engineering Brief 389 (Download now)
EB07-3 An Interactive and Intelligent Tool for Microphone Array Design—Hyunkook Lee, University of Huddersfield - Huddersfield, UK; Dale Johnson, University of Huddersfield - Huddersfield, UK; Maksims Mironovs, University of Huddersfield - Huddersfield, West Yorkshire, UK
This engineering brief will present a new microphone array design app named MARRS (Microphone Array Recording and Reproduction Simulator). Based on a novel psychoacoustic time-level trade-off algorithm, MARRS provides an interactive, object-based workflow and graphical user interface for localization prediction and microphone array configuration. It allows the user to predict the perceived positions of multiple sound sources for a given microphone configuration. The tool can also automatically configure suitable microphone arrays for the user’s desired spatial scene in reproduction. Furthermore, MARRS overcomes some of the limitations of existing microphone array simulation tools by taking into account microphone height and vertical orientation as well as the target loudspeaker base angle. The iOS and Android versions of MARRS can be freely downloaded from the Apple App Store and from the Resources section of the APL website (https://www.hud.ac.uk/apl), respectively.
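The general idea of a time-level trade-off can be sketched as follows. Note that the constants here (full image shift at roughly 1 ms inter-channel time difference or roughly 15 dB level difference on a 60° loudspeaker base) are common textbook approximations and the linear-summing model is an illustration, not the MARRS algorithm itself:

```python
# Illustrative linear time-level trading model for stereo localization.
# Constants are textbook-style approximations, NOT the MARRS algorithm.
def predicted_image_position(icld_db, ictd_ms, base_angle_deg=60.0,
                             full_level_db=15.0, full_time_ms=1.0):
    """Perceived azimuth in degrees (positive = toward the earlier/louder
    loudspeaker), assuming level and time shifts trade linearly and sum."""
    shift = icld_db / full_level_db + ictd_ms / full_time_ms
    shift = max(-1.0, min(1.0, shift))      # clamp to the loudspeaker base
    return shift * base_angle_deg / 2.0

print(predicted_image_position(0.0, 0.0))   # centre image: 0 degrees
print(predicted_image_position(7.5, 0.0))   # half shift from level alone
print(predicted_image_position(0.0, 0.5))   # half shift from time alone
print(predicted_image_position(7.5, 0.5))   # combined: full shift to one side
```

A microphone array simulator inverts this kind of mapping: given the time and level differences a source position produces between capsules, it predicts where the phantom image will appear between the loudspeakers.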
Engineering Brief 390 (Download now)
EB07-4 Real-Time Multichannel Interfacing for a Dynamic Flat-Panel Audio Display Using the MATLAB Audio Systems Toolbox—Arvind Ramanathan, University of Rochester - Rochester, NY, USA; Michael Heilemann, University of Rochester - Rochester, NY, USA; Mark F. Bocko, University of Rochester - Rochester, NY, USA
Flat-panel audio displays use an array of force actuators to render sound sources on a display screen. The signal sent to each force actuator depends on the actuator position, the resonant properties of the panel, and the source position on the screen. A source may be translated to different spatial locations using the shifting theorem of the Fourier transform. A real-time implementation of this source positioning is presented using the MATLAB Audio Systems Toolbox. The implementation includes a graphical interface that allows a user to dynamically position the sound source on the screen. This implementation may be combined with audio source separation techniques to align audio sources with video images in real time as part of a multimodal display.
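The shifting theorem the brief relies on can be illustrated in a few lines. This is a generic one-dimensional sketch (the actual panel rendering is more involved): multiplying a signal's spectrum by a linear phase ramp exp(-j2πf·d) translates it by d samples.

```python
import numpy as np

def shift_signal(x, delay_samples):
    """Delay x by delay_samples (may be fractional) via the Fourier shifting
    theorem: multiplying X(f) by exp(-j*2*pi*f*d) shifts x circularly by d."""
    f = np.fft.fftfreq(len(x))               # normalized frequency axis
    X = np.fft.fft(x)
    return np.real(np.fft.ifft(X * np.exp(-2j * np.pi * f * delay_samples)))

x = np.zeros(64)
x[10] = 1.0                # unit impulse at sample 10
y = shift_signal(x, 5)     # ...translated to sample 15
print(int(np.argmax(y)))   # -> 15
```

Applied spatially across the actuator array rather than in time, the same phase-ramp operation repositions the rendered source on the panel without recomputing the actuator filters from scratch.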
Engineering Brief 391 (Download now)
EB07-5 Perceived Differences in Timbre, Clarity, and Depth in Audio Files Treated with MQA Encoding vs. Their Unprocessed State—Mariane Generale, McGill University - Montreal, QC, Canada; Richard King, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada
The purpose of this engineering brief is to detail a planned experiment examining any perceived differences in timbre, clarity, and depth between WAV and Master Quality Authenticated (MQA) audio files. The study proposes examining whether engineers, musicians, and casual listeners perceive any changes in timbre, clarity, and depth between WAV and MQA. A blind listening test is planned in a controlled environment using both professional- and consumer-level loudspeakers and headphones. An additional interest is the comparison of responses between the target groups on the different listening mediums.
Engineering Brief 392 (Download now)
EB07-6 The BACH Experience: Bring a Concert Home—Sattwik Basu, University of Rochester - Rochester, NY, USA; Saarish Kareer, University of Rochester - Rochester, NY, USA
Inverse filtering of rooms to improve their frequency response or reverberation time is a well-researched topic in acoustical signal processing. With the aim of giving music lovers the experience of a concert hall in their own homes, we describe a system that employs signal processing techniques, including inverse filtering, to accurately reproduce concert hall acoustics in a home listening space. First, binaural impulse responses were measured at a few chosen seating positions in the concert hall. Next, the listening location, along with its loudspeaker configuration, is acoustically characterized and inverse filtered using MINT and cross-talk cancellation algorithms to produce a flat frequency response. We observed that speech and music, after our inverse filtering, showed near-anechoic qualities, which allowed us to subsequently impress the acoustical response of a wide range of concert halls upon the original audio. A demonstration will be provided using four loudspeakers for quadraphonic sound reproduction at the listening area. In continuing work, to produce a sufficiently wide listening area, we are combining head tracking with adaptive inverse filtering to adjust to the listeners’ movements.
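The core inverse-filtering step can be sketched with a single-channel least-squares design (a simplified stand-in for the multichannel MINT formulation; the toy impulse response and filter lengths below are illustrative, not measured data from the brief):

```python
import numpy as np

def ls_inverse_filter(h, inv_len, delay):
    """Least-squares FIR inverse of impulse response h: find g so that
    the convolution h * g approximates a delayed unit impulse. This is a
    single-channel sketch of what MINT does across multiple channels."""
    out_len = len(h) + inv_len - 1
    H = np.zeros((out_len, inv_len))      # convolution matrix: H @ g == h * g
    for i in range(inv_len):
        H[i:i + len(h), i] = h
    d = np.zeros(out_len)
    d[delay] = 1.0                        # target: delayed unit impulse
    g, *_ = np.linalg.lstsq(H, d, rcond=None)
    return g

h = np.array([1.0, 0.6, 0.3, 0.1])       # toy minimum-phase "room" response
g = ls_inverse_filter(h, 64, 8)
e = np.convolve(h, g)                     # equalized response
print(round(float(e[8]), 3))              # close to 1 at the target delay
```

Once the room-loudspeaker chain is flattened this way, convolving the playback signal with a measured concert-hall binaural response imposes the hall's acoustics on the (now near-anechoic) listening space.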
Engineering Brief 393 (Download now)
EB07-7 Early Reflection Remapping in Synthetic Room Impulse Responses: Theoretical Foundation—Gregory Reardon, New York University - New York, NY, USA
In audio-visual augmented and virtual reality applications, the audio delivered must be consistent with the physical or virtual environment, respectively, in which the viewer/listener is located. Artificial binaural reverberation processing can be used to match the acoustics of the listener’s/viewer’s environment. Typical real-time artificial binaural reverberators render the binaural room impulse response in three distinct sections for computational efficiency. Because these sections are rendered with different techniques, the early reflections and the late reverberation may not give the same room-acoustic impression. This paper lays the theoretical foundation for early reflection remapping, accomplished by acoustically characterizing the virtual room implied by the early-reflection renderer and then removing that room character from the response through frequency-domain reshaping.
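The "remove the implied room character" step amounts to a deconvolution in the frequency domain. The sketch below is an illustrative single-channel, regularized version (the toy early-reflection response and the regularization constant are assumptions, not the paper's exact method):

```python
import numpy as np

def remove_room_character(x, room_ir, eps=1e-3):
    """Frequency-domain reshaping sketch: divide out the (regularized)
    transfer function of the room implied by an early-reflection model.
    Illustrative single-channel version, not the paper's exact method."""
    N = len(x) + len(room_ir) - 1
    X = np.fft.rfft(x, N)
    R = np.fft.rfft(room_ir, N)
    # regularized deconvolution avoids blowing up at spectral nulls of R
    G = np.conj(R) / (np.abs(R) ** 2 + eps)
    return np.fft.irfft(X * G, N)

room = np.zeros(32)
room[0] = 1.0                 # direct sound
room[7] = 0.5                 # one early reflection (toy "room character")
dry = np.random.default_rng(0).standard_normal(256)
x = np.convolve(dry, room)    # signal colored by the implied room
y = remove_room_character(x, room)
print(float(np.max(np.abs(y[:256] - dry))))   # small: room character removed
```

In the paper's setting the divided-out response would be the one implied by the early-reflections renderer, so that the reshaped early section matches the room impression of the late reverberation.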
Engineering Brief 394 (Download now)
EB07-8 Acoustic Levitation—Standing Wave Demonstration—Bartlomiej Chojnacki, AGH University of Science and Technology - Krakow, Poland; Adam Pilch, AGH University of Science and Technology - Krakow, Poland; Marcin Zastawnik, AGH University of Science and Technology - Krakow, Poland; ProperSound - The Spokesmen of Science; Aleksandra Majchrzak, AGH University of Science and Technology - Krakow, Poland
Acoustic levitation is a spectacular phenomenon, perfect for demonstrating standing waves. There are a few proposals for such a construction in the scientific literature; however, they are often expensive and difficult to build. The aim of this project was to create a functional stand that is easy to construct and requires no expensive software or hardware. Piezoelectric transducers, typical of ultrasonic washing machines, were used as the sound source; their directivity patterns and frequency characteristics were measured. The final result of the project is a stand-alone acoustic levitator that needs very little calibration and has no walls, so the effect can be observed easily. The paper presents the whole design process and describes all functionalities of the final stand.
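The geometry of such a demonstration follows from the standing-wave condition: pressure nodes, where small particles levitate, sit half a wavelength apart. Assuming 40 kHz transducers (a common choice for piezoelectric levitators; the brief does not state the operating frequency):

```python
# Pressure nodes of a standing wave are spaced half a wavelength apart;
# light particles levitate at (or just below) these nodes.
# 40 kHz is an assumed, typical transducer frequency, not from the brief.
c = 343.0        # speed of sound in air at ~20 C, m/s
f = 40_000.0     # assumed transducer frequency, Hz
wavelength = c / f
node_spacing_mm = wavelength / 2 * 1000
print(round(node_spacing_mm, 2))   # roughly 4.3 mm between levitation nodes
```

The few-millimetre node spacing explains both why ultrasonic frequencies are needed (audible frequencies would space the nodes too far apart to trap light particles against gravity) and why the transducer-to-reflector distance must be tuned to an integer number of half wavelengths.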
Engineering Brief 395 (Download now)
EB07-9 Developing a Reverb Plugin: Utilizing Faust Meets JUCE Framework—Steve Philbert, University of Rochester - Rochester, NY, USA
Plug-ins come in many different shapes, sizes, and sounds, but what makes one different from another? The audio processing code and the graphical user interface (GUI) play a major part in how a plug-in sounds and functions. This paper details methods of developing a reverb plug-in by comparing different programming methods based around the Faust Meets JUCE framework launched in February 2017. The methods include Faust compiled directly to a plug-in, Faust Meets JUCE compiled with different architectures, and C++ with the JUCE framework. Each method has its benefits; some are easier to use, while others provide a better basis for customization.
Engineering Brief 396 (Download now)