AES Rome 2013
Recording Track Event Details
Saturday, May 4, 11:00 — 12:30 (Sala Alighieri)
Tutorial: T1 - Microphone Technology
Presenter:
Ron Streicher, Pacific Audio-Visual Enterprises - Pasadena, CA, USA
How do microphones work? What differentiates one operating type of transducer from another? How and why do they sound different? What are polar patterns, and how do they affect the way a microphone responds to sound? What is “proximity effect” and why do some mics exhibit more of it than others? What’s the truth about capsule size — does it really matter? These are just a few of the topics covered in this in-depth tutorial on Microphone Technology.
Saturday, May 4, 16:30 — 18:30 (Auditorium Loyola)
Tutorial: T2 - Are We There Yet? The Ultimate Ultra-Portable Production/Recording Studio. From Idea to Final Master: How to Write, Sequence, Produce Your Music, and Control Your Studio Using only Your iPad
Presenter:
In this highly interactive and hands-on presentation you will learn the tools, techniques, tips, and tricks required to write, produce, and mix a song using only your iPad.
Through practical examples and scenarios you will learn how to:
• Pick the best software for sequencing, producing, and mixing your music
• Pick the best iPad-compatible hardware tools (microphones, audio interface, MIDI interfaces, controller, etc.)
• Set up your mobile production studio
• Sketch your musical ideas
• Use your iPad as a creative inspirational tool for music composition and sound design
• Sequence and arrange your music ideas on your iPad
• Add real instruments and vocals
• Do a final professional mix
• Master your final mix
In addition, through practical examples and scenarios, you will learn: how to set up your studio in order to use your iOS device as a controller; how to configure your iOS device for Logic Pro, Pro Tools, Digital Performer, Live, and Cubase/Nuendo; wireless (MIDI over Wi-Fi) and wired MIDI connections; proprietary versus open-source (OSC, etc.) options; designing your own graphic interfaces and controllers; and ergonomic aspects of using your iOS device in the studio.
Who should attend? Anyone who wants to create music with an iPad: musicians, producers, recording engineers, and home studio owners, from beginner to advanced.
Saturday, May 4, 17:00 — 18:30 (Sala Foscolo)
Tutorial: T4 - Understanding Microphone Specifications
Presenter:
Eddy B. Brixen, EBB-consult - Smorum, Denmark
There are a great many microphones available to the audio engineer. The final choice is often made on the basis of experience, or perhaps just habit. (Sometimes the mic is chosen because of the looks … .) Nevertheless, there is good information to be found in microphone specifications. This tutorial will present the most important microphone specs and provide the attendee with up-to-date information on how these specs are obtained and understood. It also takes a critical look at how specs are presented to the user, what to look for, and what to expect.
This tutorial is a follow-up to project X85, which has been taking place in the AES Standards Committee (SC 04-04).
Sunday, May 5, 09:00 — 13:00 (Sala Carducci)
Paper Session: P6 - Recording and Production
Alex Case, University of Massachusetts—Lowell - Lowell, MA, USA
P6-1 Automated Tonal Balance Enhancement for Audio Mastering Applications—Stylianos-Ioannis Mimilakis, Technological Educational Institute of Ionian Island - Lixouri, Greece; Konstantinos Drossos, Ionian University - Corfu, Greece; Andreas Floros, Ionian University - Corfu, Greece; Dionysios Katerelos, Technological Educational Institute of Ionian Island - Lixouri, Greece
Modern audio mastering procedures involve the selective enhancement or attenuation of specific frequency bands, the main reason being the tonal enhancement of the original, unmastered audio material. This process is mostly based on the musical information and mode of the audio material, which can be retrieved by listening to the original stimuli or from the corresponding musical key. The current work presents an adaptive, automated equalization system that performs this mastering procedure based on a novel method of fundamental frequency tracking. The overall system is evaluated with objective PEAQ analysis and subjective listening tests in real mastering conditions.
Convention Paper 8836
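The paper's fundamental-frequency tracking method is novel and not detailed in the abstract. As background only, the general idea of f0 tracking can be sketched with a standard autocorrelation estimator (a textbook technique, not the authors' method):

```python
import numpy as np

def estimate_f0(frame, sr, fmin=60.0, fmax=1000.0):
    """Estimate the fundamental frequency of one frame by picking the
    highest autocorrelation peak in the admissible lag range."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sr / fmax)                      # shortest admissible period
    hi = min(int(sr / fmin), len(ac) - 1)    # longest admissible period
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

# quick check on a synthetic 220 Hz tone
sr = 44100
t = np.arange(2048) / sr
f0 = estimate_f0(np.sin(2 * np.pi * 220.0 * t), sr)
```

In a mastering context, the estimated fundamental (or key) would then steer which frequency bands the adaptive equalizer boosts or attenuates.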
P6-2 A Pairwise and Multiple Stimuli Approach to Perceptual Evaluation of Microphone Types—Brecht De Man, Queen Mary University of London - London, UK; Joshua D. Reiss, Queen Mary University of London - London, UK
An essential but complicated task in the audio production process is the selection of microphones that are suitable for a particular source. A microphone is often chosen based on price or common practices, rather than whether the microphone actually sounds best in that particular situation. In this paper we perceptually assess six microphone types for recording a female singer. Listening tests using a pairwise and multiple stimuli approach are conducted to identify the order of preference of these microphone types. The results of this comparison are discussed, and the performance of each approach is assessed.
Convention Paper 8837
P6-3 Comparison of Power Supply Pumping of Switch-Mode Audio Power Amplifiers with Resistive Loads and Loudspeakers as Loads—Arnold Knott, Technical University of Denmark - Kgs. Lyngby, Denmark; Lars Press Petersen, Technical University of Denmark - Kgs. Lyngby, Denmark
Power supply pumping is generated by switch-mode audio power amplifiers in half-bridge configuration when they drive energy back into their source. In most designs this leads to a rising rail voltage, which can be destructive to the decoupling capacitors, the rectifier diodes in the power supply, or the power stage of the amplifier. Therefore, precautions are taken by the amplifier and power supply designers to avoid those effects. Existing power supply pumping models are based on an ohmic load attached to the amplifier. This paper shows the analytical derivation of the resulting waveforms and extends the model to loudspeaker loads. Measurements verify that the amount of supply pumping is reduced by a factor of four when comparing the nominal resistive load to a loudspeaker. A simplified and more accurate model is proposed, and the influence of supply pumping on the audio performance is shown to be marginal.
Convention Paper 8838
P6-4 The Psychoacoustic Testing of the 3-D Multiformat Microphone Array Design and the Basic Isosceles Triangle Structure of the Array and the Loudspeaker Reproduction Configuration—Michael Williams, Sounds of Scotland - Le Perreux sur Marne, France
Optimizing the loudspeaker configuration for 3-D microphone array design can only be achieved with a clear knowledge of the psychoacoustic parameters governing the reproduction of height localization. Unfortunately, HRTF characteristics do not seem to give us useful information when applied to loudspeaker reproduction. A set of psychoacoustic parameters has to be measured for different configurations in order to design an efficient microphone array recording system, even more so if a minimalistic approach to array design is to be a prime objective. In particular, the position of a second layer of loudspeakers with respect to the primary horizontal layer is of fundamental importance and can only be based on the psychoacoustics of height perception. What are the localization characteristics between two loudspeakers situated in each of the two layers? Is time difference, as opposed to level difference, a better approach to interlayer localization? This paper will try to answer these questions and also justify the basic isosceles triangle loudspeaker structure that helps to optimize the reproduction of height information.
Convention Paper 8839
P6-5 A Perceptual Audio Mixing Device—Michael J. Terrell, Queen Mary University of London - London, UK; Andrew J. R. Simpson, Queen Mary University of London - London, UK; Mark B. Sandler, Queen Mary University of London - London, UK
A method and device are presented that allow novice and expert audio engineers to perform mixing using perceptual controls. In this paper we use Auditory Scene Analysis [Bregman, 1990, MIT Press, Cambridge] to relate the multitrack component signals of a mix to the perception of that mix. We define the multitrack components of a mix as a group of audio streams, which are transformed into sound streams by the act of reproduction and are ultimately perceived as auditory streams by the listener. The perceptual controls provide direct manipulation of the loudness balance within a mixture of sound streams, as well as of the overall mix loudness. The system employs a computational optimization strategy to perform automatic gain adjustments to the component audio streams, such that the intended loudness balance of the associated sound streams is produced. Perceptual mixing is performed using a complete auditory model, based on a model of loudness for time-varying sound streams [Glasberg and Moore, J. Audio Eng. Soc., vol. 50, 331-342 (2002 May)]. The use of the auditory model enables the loudness balance to be maintained automatically regardless of the listening level. Thus, a perceptual definition of the mix is presented that is listening-level independent, and a method of realizing the mix in practice is given.
Convention Paper 8840
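The paper's optimization works on Glasberg-Moore loudness. As a much-simplified sketch of the balance idea, with RMS level standing in for loudness and a direct rather than iterative solution, per-stream gains for a requested balance might look like:

```python
import numpy as np

def balance_gains(streams, balance_db):
    """Per-stream gains that place each stream's RMS level at the requested
    balance (dB relative to the first stream). RMS is a crude stand-in for
    the time-varying loudness model used in the paper."""
    rms = np.array([np.sqrt(np.mean(s ** 2)) for s in streams])
    target = rms[0] * 10.0 ** (np.asarray(balance_db, dtype=float) / 20.0)
    return target / rms

rng = np.random.default_rng(0)
streams = [a * rng.standard_normal(4800) for a in (1.0, 0.2, 3.0)]
gains = balance_gains(streams, [0.0, -6.0, -6.0])   # streams 2 and 3: -6 dB
```

With a true auditory loudness model, loudness does not scale linearly with signal gain, so the gains would have to be found by the kind of iterative optimization the abstract describes.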
P6-6 On the Use of a Haptic Feedback Device for Sound Source Control in Spatial Audio Systems—Frank Melchior, BBC Research and Development - Salford, UK; Chris Pike, BBC Research and Development - Salford, York, UK; Matthew Brooks, BBC Research and Development - Salford, UK; Stuart Grace, BBC Research and Development - Salford, UK
Next generation spatial audio systems are likely to be capable of 3-D sound reproduction. Systems currently under discussion require the sound designer to position and manipulate sound sources in three dimensions. New intuitive tools, designed to meet the requirements of audio production environments, are needed to make efficient use of this new technology. This paper investigates a haptic feedback controller as a user interface for spatial audio systems. The paper will give an overview of conventional tools and controllers. A prototype has been developed based on the requirements of different tasks and reproduction methods. The implementation will be described in detail and the results of a user evaluation will be given.
Convention Paper 8841
P6-7 Audio Level Alignment—Evaluation Method and Performance of EBU R 128 by Analyzing Fader Movements—Jon Allan, Luleå University of Technology - Piteå, Sweden; Jan Berg, Luleå University of Technology - Piteå, Sweden
A method is proposed for evaluating audio meters in terms of how well the engineer conforms to a level alignment recommendation and succeeds in achieving evenly perceived audio levels. The proposed method is used to evaluate different meter implementations, three conforming to the recommendation EBU R 128 and one conforming to EBU Tech 3205-E. In an experiment, engineers participated in a simulated live broadcast show and the resulting fader movements were recorded. The movements were analyzed in terms of different characteristics: fader mean level, fader variability, and fader movement. Significant effects were found showing that engineers do act differently depending on the meter and recommendation at hand.
Convention Paper 8842
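EBU R 128 meters are built on the BS.1770 loudness measure. A minimal sketch of the block-based computation (with the K-weighting pre-filter omitted for brevity, so values are only exact for content near 1 kHz) is:

```python
import numpy as np

def momentary_loudness(x, sr):
    """LUFS-style loudness of overlapping 400 ms blocks:
    -0.691 + 10*log10(mean square). The BS.1770 K-weighting pre-filter is
    omitted here, so values are only exact for content near 1 kHz."""
    n, hop = int(0.4 * sr), int(0.1 * sr)    # 400 ms blocks, 75% overlap
    out = []
    for start in range(0, len(x) - n + 1, hop):
        ms = np.mean(x[start:start + n] ** 2)
        out.append(-0.691 + 10.0 * np.log10(ms + 1e-12))
    return np.array(out)

sr = 48000
tone = 0.1 * np.sin(2 * np.pi * 997.0 * np.arange(sr) / sr)
L = momentary_loudness(tone, sr)             # about -23.7 LUFS
```

The -23 LUFS figure is not accidental: EBU R 128 specifies a target program loudness of -23 LUFS, which is the level the engineers in the experiment were asked to hold.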
P6-8 Balance Preference Testing Utilizing a System of Variable Acoustic Condition—Richard King, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada; Brett Leonard, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada; Scott Levine, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada; Grzegorz Sikora, Bang & Olufsen Deutschland GmbH - Pullach, Germany
In the modern world of audio production, there exists a significant disconnect between the music mixing control room of the audio professional and the listening space of the consumer or end user. The goal of this research is to evaluate a series of varying acoustic conditions commonly used in such listening environments. Expert listeners are asked to perform basic balancing tasks, under varying acoustic conditions. The listener can remain in position while motorized panels rotate behind a screen, exposing a different acoustic condition for each trial. Results show that listener fatigue as a variable is thereby eliminated, and the subject’s aural memory is quickly cleared, so that the subject is unable to adapt to the newly presented condition for each trial.
Convention Paper 8843
Sunday, May 5, 11:00 — 13:00 (Auditorium Loyola)
Tutorial: T6 - Drum Programming
Presenter:
Justin Paterson, London College of Music, University of West London - London, UK
Drum programming has been evolving at the heart of many studio productions for some 30 years. Over this period, technological opportunities for enhanced creativity have multiplied in numerous directions. This tutorial will demonstrate a number of these directions as they are often implemented in contemporary professional practice, showing contrasting techniques used in the creation of both human emulation and the unashamedly synthetic. Alongside this, many of the studio production techniques often used to enhance such work will be discussed, ranging from dynamic processing to intricate automation.
The session will include numerous live demonstrations covering a range of approaches. Although introducing all key concepts from scratch, its range and hybridization should provide inspiration even for experienced practitioners.
Sunday, May 5, 14:30 — 18:00 (Sala Carducci)
Paper Session: P9 - Room Acoustics
Chris Baume, BBC Research and Development - London, UK
P9-1 Various Applications of Active Field Control—Takayuki Watanabe, Yamaha Corp. - Hamamatsu, Shizuoka, Japan; Masahiro Ikeda, Yamaha Corporation - Hamamatsu, Shizuoka, Japan
The Active Field Control system is an acoustic enhancement system developed to improve the acoustic conditions of a space so as to match the acoustic conditions required for a variety of different types of performance programs. The system is unique in that it uses FIR filtering to ensure freedom of control, and the concept of spatial averaging to achieve stability with a lower number of channels than comparable systems. It has been used in over 70 projects in both the U.S. and Japan. This paper will provide an overview of the characteristics of the system and examples of how it has been applied.
Convention Paper 8859
P9-2 Comparative Acoustic Measurements: Spherical Sound Source vs. Dodecahedron—Plamen Valtchev, Univox - Sofia, Bulgaria; Denise Gerganova, Spherovox
A spherical sound source, consisting of a pair of coaxial loudspeakers and a pair of compression drivers radiating into a common, radially expanding horn, is used for acoustic measurements of rooms for speech and music. For exactly the same source-microphone positions, comparative measurements were made with a typical dodecahedron, keeping the same microphone technique, identical signals, and the same recording hardware under the same measuring conditions. Several software programs were used to evaluate the acoustical parameters extracted from the impulse responses. The parameters are presented in tables and graphics for better comparison of the sound sources. The spherical sound source reveals a higher dynamic range and perfectly repeatable parameters under source rotation, in contrast to the dodecahedron, where rotation steps resulted in deviations in some parameters.
Convention Paper 8860
P9-3 Archaeoacoustics: An Introduction—A New Take on an Old Science—Lise-Lotte Tjellesen, CLARP - London, UK; Karen Colligan, CLARP - London, UK
What is archaeoacoustics, and how is it defined? This paper will discuss the history and various aspects of the discipline of archaeoacoustics, i.e., sound that has been measured, modeled, and analyzed with modern techniques in and around ancient sites, temple complexes, and standing stones. By piecing together sound environments from a long-lost past, the discipline brings them to life as a tool for archaeologists and historians. This paper will provide a general overview of some of the most prolific studies to date, discuss some measurement and modeling methods, and consider where archaeoacoustics may be headed in the future and what purpose it serves in academia.
Convention Paper 8861
P9-4 Scattering Effects in Small-Rooms: From Time and Frequency Analysis to Psychoacoustic Investigation—Lorenzo Rizzi, Suono e Vita - Acoustic Engineering - Lecco, Italy; Gabriele Ghelfi, Suono e Vita - Acoustic Engineering - Lecco, Italy
This work continues the authors' effort to optimize a DSP tool for extracting information on mixing time and sound scattering effects from room impulse responses (RIRs) obtained with in-situ measurements. Confirming earlier findings, a new experiment made it possible to scrutinize the effects of QRD scattering panels in non-Sabinian environments, in both the frequency and time domains. Listening tests were performed to investigate the perception of scattering panels affecting small-room acoustic quality. The sound diffusion properties were investigated through dedicated headphone auralization interviews, convolving known RIRs with anechoic musical samples and correlating the calculated data with the psychoacoustic responses. The results validate the known effects on close-field recording in small rooms for music and provide new insights.
Convention Paper 8862
P9-5 The Effects of Temporal Alignment of Loudspeaker Array Elements on Speech Intelligibility—Timothy J. Ryan, Webster University - St. Louis, MO, USA; Richard King, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada; Jonas Braasch, Rensselaer Polytechnic Institute - Troy, NY, USA; William L. Martens, University of Sydney - Sydney, NSW, Australia
The effects of multiple arrivals on the intelligibility of speech produced by live-sound reinforcement systems are examined. Investigated variables include the delay time between arrivals from multiple loudspeakers within an array and the geometry and type of array. Subjective testing, using captured binaural recordings of the Modified Rhyme Test under various treatment conditions, was carried out to determine the first- and second-order effects of the two experimental variables. Results indicate that different interaction effects exist for different amounts of delay offset.
Convention Paper 8863
P9-6 Some Practical Aspects of STI Measurement and Prediction—Peter Mapp, Peter Mapp Associates - Colchester, Essex, UK
The Speech Transmission Index (STI) has become the internationally accepted method of testing and assessing the potential intelligibility of sound systems. The technique is standardized in IEC 60268-16; however, it is not flawless. The paper discusses a number of common mechanisms that can affect the accuracy of STI measurements and predictions. In particular, it is shown that RaSTI is a poor predictor of STI in many sound system applications, and that the standard speech spectrum assumed by STI often does not replicate the speech spectrum of real announcements and is not in good agreement with other speech spectrum studies. The effects on STI measurements of common signal processing techniques such as equalization, compression, and AGC are also demonstrated and the implications discussed. The simplified STI derivative STIPA is shown to be a more robust method of assessing sound systems than RaSTI, and when applied as a direct measurement method it can have significant advantages over impulse-response-based STI measurement techniques.
Convention Paper 8864
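The STI calculation standardized in IEC 60268-16 reduces modulation transfer values to a single index. A simplified single-band sketch, using the classical MTF of an exponentially decaying reverberant field, is:

```python
import numpy as np

def mtf_reverb(F, T60):
    """Modulation transfer function of an exponentially decaying
    reverberant field: m(F) = [1 + (2*pi*F*T60/13.8)^2]^(-1/2)."""
    return 1.0 / np.sqrt(1.0 + (2.0 * np.pi * np.asarray(F) * T60 / 13.8) ** 2)

def sti_from_mtf(m):
    """Collapse modulation transfer values to a single index: m -> apparent
    SNR, clipped to +/-15 dB, mapped to 0..1, and averaged. Single band
    only; the full STI weights seven octave bands and models masking."""
    m = np.clip(np.asarray(m), 1e-6, 1.0 - 1e-6)
    snr = np.clip(10.0 * np.log10(m / (1.0 - m)), -15.0, 15.0)
    return float(np.mean((snr + 15.0) / 30.0))

F = np.array([0.63, 0.8, 1.0, 1.25, 1.6, 2.0, 2.5,
              3.15, 4.0, 5.0, 6.3, 8.0, 10.0, 12.5])  # STI modulation freqs
sti_dry = sti_from_mtf(mtf_reverb(F, 0.3))   # short reverb: high STI
sti_wet = sti_from_mtf(mtf_reverb(F, 3.0))   # long reverb: low STI
```

Noise, compression, and AGC alter the measured m(F) values, which is why the signal processing effects the paper examines propagate directly into the index.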
P9-7 Combined Quasi-Anechoic and In-Room Equalization of Loudspeaker Responses—Balazs Bank, Budapest University of Technology and Economics - Budapest, Hungary
This paper presents a combined approach to loudspeaker/room response equalization based on simple in-room measurements. In the first step, the anechoic response of the loudspeaker, which mostly determines localization and timbre perception, is equalized with a low-order non-minimum phase equalizer. This is actually done using the gated in-room response, which of course means that the equalization is incorrect at low frequencies where the gate time is shorter than the anechoic impulse response. In the second step, a standard, fractional-octave resolution minimum-phase equalizer is designed based on the in-room response pre-equalized with the quasi-anechoic equalizer. This second step, in addition to correcting the room response, automatically compensates the low-frequency errors made in the quasi-anechoic equalizer design when we were using gated responses.
Convention Paper 8826
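The first, quasi-anechoic step amounts to gating the measured in-room impulse response before the first strong reflection. A minimal sketch (window length and fade shape chosen arbitrarily, not taken from the paper) is:

```python
import numpy as np

def quasi_anechoic(ir, sr, gate_ms):
    """Gate an in-room impulse response before the first strong reflection.
    The result is quasi-anechoic, but invalid below roughly 1/gate-time."""
    n = int(gate_ms * 1e-3 * sr)
    w = np.ones(n)
    fade = n // 4
    w[-fade:] = 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, fade)))  # fade-out
    gated = ir[:n] * w
    return gated, np.abs(np.fft.rfft(gated, 4096))

# toy IR: direct sound plus a floor reflection at 10 ms
sr = 48000
ir = np.zeros(2400)
ir[0], ir[480] = 1.0, 0.8
gated, mag = quasi_anechoic(ir, sr, 5.0)     # 5 ms gate excludes the reflection
```

Below roughly the reciprocal of the gate time (a 5 ms gate gives about 200 Hz), the gated response no longer represents the anechoic behavior, which is exactly the low-frequency error the paper's second, in-room minimum-phase step compensates.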
Sunday, May 5, 14:45 — 16:15 (Auditorium Loyola)
Tutorial: T9 - Electric Guitar—What a Recordist Ought to Know
Presenter:
Alex Case, University of Massachusetts—Lowell - Lowell, MA, USA
Musicians obsess about every detail of their instrument. Engineers do the same with every detail of their studio. This tutorial merges those obsessions, so that a recording engineer can be more fully informed about the key drivers of tone for the electric guitar. Know the instrument first and let that drive your decisions for recording and mixing the instrument.
Monday, May 6, 09:00 — 10:30 (Auditorium Loyola)
Tutorial: T10 - Creative Distortion—You Are in the Over-Driver's Seat
Presenter:
Alex Case, University of Massachusetts—Lowell - Lowell, MA, USA
Distortion, strategically applied to elements of your mix, is a source of energy that lifts tracks up out of a crowded arrangement and adds excitement to the performance. Accidental distortion, on the other hand, is a certain sign that the production is unprofessional, dragging down its chance for success. Amps, stomp boxes, tubes, transformers, tape machines, the plug-ins that emulate them, and the plug-ins that create wholly new forms of distortion all offer a rich palette of distortion colors. Mix engineers must know how to choose among them, and how to tailor them to the music. This tutorial takes a close look at distortion, detailing the technical goings-on when things break up, and defining the production potential of this, the caffeine of effects.
Monday, May 6, 10:00 — 11:30 (Foyer)
Poster: P13 - Room Acoustics
P13-1 The Effect of Playback System on Reverberation Level Preference—Brett Leonard, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada; Richard King, McGill University - Montreal, Quebec, Canada; The Centre for Interdisciplinary Research in Music Media and Technology - Montreal, Quebec, Canada; Grzegorz Sikora, Bang & Olufsen Deutschland GmbH - Pullach, Germany
The critical role of reverberation in modern acoustic music production is undeniable. Unlike many other effects, reverberation’s spatial nature makes it extremely dependent upon the playback system over which it is experienced. While this characteristic of reverberation has been widely acknowledged among recording engineers for years, the increase in headphone listening prompts further exploration of these effects. In this study listeners are asked to add reverberation to a dry signal as presented over two different playback systems: headphones and loudspeakers. The final reverberation levels set by each subject are compared for the two monitoring systems. The resulting data show significant level differences across the two monitoring systems.
Convention Paper 8886
P13-2 Adaptation of a Large Exhibition Hall as a Concert Hall Using Simulation and Measurement Tools—Marco Facondini, TanAcoustics Studio - Pesaro (PU), Italy; Daniele Ponteggia, Studio Ing. Ponteggia - Terni (TR), Italy
Due to the growing demand for multifunctional performing spaces, there is strong interest in adapting non-dedicated spaces to host musical performances. This poses new challenges for acousticians, with new design constraints and very tight time frames. This paper shows a practical example of the adaptation of the “Sala della Piazza” of the “Palacongressi” of Rimini. Thanks to the combined use of prediction and measurement tools, it has been possible to design the acoustical treatments with a high degree of accuracy, reaching all targets while respecting the tight deadlines.
Convention Paper 8887
P13-3 Digital Filter for Modeling Air Absorption in Real Time—Carlo Petruzzellis, ZP Engineering S.r.L. - Rome, Italy; Umberto Zanghieri, ZP Engineering S.r.L. - Rome, Italy
Atmospheric attenuation of sound is a relevant aspect of realistic space modeling in 3-D audio simulation systems. A digital filter has been developed on commercial DSP processors to match air absorption curves. This paper focuses on the algorithm implementation of a digital filter with continuous roll-off control to simulate the high-frequency damping of audio signals in various atmospheric conditions, along with rules that allow a precise approximation of the behavior described by the analytical formulas.
Convention Paper 8888
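The paper's filter and its control rules are not given in the abstract. As an illustration of the underlying idea only, a first-order lowpass can be tuned to deliver a prescribed high-frequency attenuation for a given propagation distance (the per-meter coefficient below is a rough placeholder, not a value from the paper):

```python
import numpy as np

def onepole_for_attenuation(att_db, f_ref, sr):
    """Pole coefficient a of y[n] = (1-a)*x[n] + a*y[n-1] that yields
    exactly att_db of attenuation at f_ref while keeping unity gain at DC
    (solve |H|^2 = g^2 at f_ref for a, taking the stable root < 1)."""
    g2 = 10.0 ** (-att_db / 10.0)            # target squared magnitude
    c = np.cos(2.0 * np.pi * f_ref / sr)
    b = 1.0 - g2 * c
    return (b - np.sqrt(b * b - (1.0 - g2) ** 2)) / (1.0 - g2)

# hypothetical 0.02 dB/m of excess high-frequency air attenuation at 8 kHz;
# real coefficients depend on humidity and temperature (cf. ISO 9613-1)
a = onepole_for_attenuation(0.02 * 100.0, 8000.0, 48000.0)  # 100 m distance
```

Sweeping the distance then sweeps the pole continuously, which is the kind of continuous roll-off control the abstract describes, albeit with a far simpler filter.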
P13-4 Development of Multipoint Mixed-Phase Equalization System for Multiple Environments—Stefania Cecchi, Università Politecnica delle Marche - Ancona, Italy; Marco Virgulti, Università Politecnica delle Marche - Ancona, Italy; Stefano Doria, Leaff Engineering - Ancona, Italy; Ferruccio Bettarelli, Leaff Engineering - Ancona, Italy; Francesco Piazza, Università Politecnica delle Marche - Ancona (AN), Italy
The development of a mixed-phase equalizer is still an open problem in the field of room response equalization. In this context, a multipoint mixed-phase impulse response equalization system is presented, combining a magnitude equalization procedure based on a time-frequency segmentation of the impulse responses with a phase equalization technique based on group delay analysis. Furthermore, an automatic software tool for measuring the environment impulse responses and designing a suitable equalizer is presented. Taking advantage of this tool, several tests were performed, applying objective and subjective analyses in a real environment and comparing the results obtained with different approaches.
Convention Paper 8889
P13-5 Acoustics Modernization of the Recording Studio in Wroclaw University of Technology—Magdalena Kaminska, Wroclaw University of Technology - Wroclaw, Poland; Patryk Kobylt, Wroclaw University of Technology - Wroclaw, Poland; Bartlomiej Kruk, Wroclaw University of Technology - Wroclaw, Poland; Jan Sokolnicki, Wroclaw University of Technology - Wroclaw, Poland
The aim of this paper is to present the results of the acoustic modernization of the recording studio at Wroclaw University of Technology. The project focuses on a problem arising in one part of the studio: the so-called flutter echo phenomenon. To minimize this effect, we present a several-stage process of adapting the studio. The first step was to measure the acoustic properties of the room, concentrating on the aforementioned effect. Next, a one-dimensional diffuser was designed and placed where the phenomenon occurred. The last stage of the research was an acoustic measurement after the modification and a comparison with the properties before the changes.
Convention Paper 8890
P13-6 Accurate Acoustic Echo Reduction with Residual Echo Power Estimation for Long Reverberation—Masahiro Fukui, NTT Corporation - Musashino-shi, Tokyo, Japan; Suehiro Shimauchi, NTT Corporation - Musashino-shi, Tokyo, Japan; Yusuki Hioka, University of Canterbury - Christchurch, New Zealand; Hitoshi Ohmuro, NTT Corporation - Musashino-shi, Tokyo, Japan; Yoichi Haneda, The University of Electro-Communications - Chofu-shi, Tokyo, Japan
This paper deals with the problem of estimating and reducing residual echo components that result from reverberant components beyond the length of the FFT block. The residual echo reduction process suppresses the residual echo by applying a multiplicative gain calculated from the estimated echo power spectrum. However, the estimated power spectrum reproduces only a fraction of the echo-path impulse response, so not all the reverberant components are considered. To address this problem, we introduce a finite nonnegative convolution method, by which each segment of the echo-path impulse response is convolved with the received signal in the power spectral domain. With the proposed method, the power spectra of each segment of the echo-path impulse response are collectively estimated by solving the least-mean-squares problem between the microphone and estimated-residual-echo power spectra. The performance of this method was demonstrated by simulation results in which speech distortion was decreased compared with the conventional method.
Convention Paper 8891
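The suppression stage the abstract describes applies a multiplicative gain derived from the estimated residual-echo power spectrum. That final step (not the paper's novel estimation of the echo power itself) can be sketched as:

```python
import numpy as np

def suppress_residual_echo(mic_pow, echo_pow, floor=0.05):
    """Spectral-subtraction-style multiplicative gain per frequency bin,
    computed from an estimated residual-echo power spectrum; `floor`
    limits musical-noise artifacts from over-suppression."""
    gain = 1.0 - echo_pow / np.maximum(mic_pow, 1e-12)
    return np.clip(gain, floor, 1.0)

# per-bin powers: mostly near-end speech, one bin dominated by echo
gains = suppress_residual_echo(np.array([1.0, 1.0, 0.5]),
                               np.array([0.1, 2.0, 0.0]))
```

The accuracy of the echo power estimate governs the trade-off here: underestimation leaves audible echo, while overestimation distorts the near-end speech, which is why the paper focuses on estimation for long reverberation.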
Monday, May 6, 14:15 — 16:15 (Sala Alighieri)
Tutorial: T14 - Rub & Buzz and Other Irregular Loudspeaker Distortion
Presenter:
Wolfgang Klippel, Klippel GmbH - Dresden, Germany
Loudspeaker defects caused by manufacturing, aging, overload, or climate impact generate a special kind of irregular distortion commonly known as rub & buzz, which is highly audible and intolerable to the human ear. Contrary to the regular loudspeaker distortion defined in the design process, irregular distortion is hardly predictable and is generated by an independent process triggered by the input signal. Traditional distortion measurements such as THD fail to detect those defects reliably. This tutorial discusses the most important defect classes, new measurement techniques, audibility, and the impact on perceived sound quality.
Monday, May 6, 14:30 — 17:30 (Sala Foscolo)
Workshop: W8 - Current Reference Listening Room Standards: Are They Meaningful?
Chair:
Todd Welti, Harman International - Northridge, CA, USA
Sean Olive, Harman International - Northridge, CA, USA
Francis Rumsey, Logophon Ltd - Oxfordshire, UK
Andreas Silzle, Fraunhofer Institute for Integrated Circuits IIS - Erlangen, Germany
Thomas Sporer, Fraunhofer IDMT - Ilmenau, Germany
The ITU BS.1116 standard, “Methods for the Subjective Assessment of Small Impairments in Audio Systems …,” contains guidelines for standard listening environments used for assessing the subjective quality of audio systems. This includes factors such as the dynamics and frequency/directivity response of the loudspeakers and the physical and acoustical properties of the room. This standardized playback environment should be consistent and representative of high-quality systems, yet also representative of systems that actually exist. This workshop takes a fresh look at the ITU BS.1116 standard and how it could be improved; some changes that in theory might improve the specification might not actually be practical. Many aspects of the discussion are relevant to reference listening room standards other than BS.1116, hence the more general workshop title.
Monday, May 6, 14:30 — 17:30 (Sala Carducci)
Paper Session: P14 - Applications in Audio
Juha Backman, Nokia Corporation - Espoo, Finland
P14-1 Implementation of an Intelligent Equalization Tool Using Yule-Walker for Music Mixing and Mastering—Zheng Ma, Queen Mary University of London - London, UK; Joshua D. Reiss, Queen Mary University of London - London, UK; Dawn A. A. Black, Queen Mary University of London - London, UK
A new approach for automatically equalizing an audio signal toward a target frequency spectrum is presented. The algorithm is based on the Yule-Walker method and designs recursive IIR digital filters using a least-squares fit to any desired frequency response. The target equalization curve is obtained from a spectral distribution analysis of a large dataset of popular commercial recordings. A real-time C++ VST plug-in and an off-line Matlab implementation have been created. A straightforward objective evaluation is also provided, in which the output frequency spectra are compared against the target equalization curve and against those produced by an alternative equalization method.
Convention Paper 8892 (Purchase now)
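The paper's plug-in itself is not reproduced here, but the Yule-Walker step it names can be sketched: estimate an all-pole (AR) filter whose magnitude response approximates a target spectrum by inverse-transforming the target power spectrum into an autocorrelation sequence and solving the resulting Toeplitz system. A minimal sketch follows; the function name and the sampling grid on [0, π] are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker_ar(target_mag, order):
    """All-pole denominator whose magnitude response approximates target_mag.

    target_mag: desired magnitude response sampled uniformly on [0, pi].
    Returns the denominator polynomial [1, -a1, ..., -aK].
    """
    power = np.asarray(target_mag, dtype=float) ** 2
    # Mirror to a full (even, real) spectrum and inverse-FFT to obtain
    # the autocorrelation sequence of a signal with this power spectrum.
    full = np.concatenate([power, power[-2:0:-1]])
    r = np.fft.ifft(full).real[: order + 1]
    # Solve the Yule-Walker normal equations (a Toeplitz system) for the
    # AR coefficients a1..aK.
    a = solve_toeplitz(r[:order], r[1 : order + 1])
    return np.concatenate([[1.0], -a])
```

For a flat target spectrum the autocorrelation is a unit impulse, so the solver returns an identity (all-pass) denominator, which is a quick sanity check on the construction.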
P14-2 On the Informed Source Separation Approach for Interactive Remixing in Stereo—Stanislaw Gorlow, University of Bordeaux - Talence, France; Sylvain Marchand, Université de Brest - Brest, France
Informed source separation (ISS) has become a popular trend in the audio signal processing community over the past few years. Its purpose is to decompose a mixture signal into its constituent parts at the desired or the best possible quality level given some metadata. In this paper we present a comparison between two ISS systems and relate the ISS approach in various configurations with conventional coding of separate tracks for interactive remixing in stereo. The compared systems are Underdetermined Source Signal Recovery (USSR) and Enhanced Audio Object Separation (EAOS). The latter forms a part of MPEG’s Spatial Audio Object Coding technology. The performance is evaluated using objective difference grades computed with PEMO-Q. The results suggest that USSR performs perceptually better than EAOS and has a lower computational complexity.
Convention Paper 8893 (Purchase now)
P14-3 Scene Inference from Audio—Daniel Arteaga, Fundacio Barcelona Media - Barcelona, Spain; Universitat Pompeu Fabra - Barcelona, Spain; David García-Garzón, Universitat Pompeu Fabra - Barcelona, Spain; Toni Mateos, imm sound - Barcelona, Spain; John Usher, Hearium Labs - San Francisco, CA, USA
We report on the development of a system to characterize the geometric and acoustic properties of a space from an acoustic impulse response measured within it. This can be thought of as the inverse problem to the common practice of obtaining impulse responses from either real-world or virtual spaces. Starting from an impulse response recorded in an original scene, the method described here uses a non-linear search strategy to select a scene that is perceptually as close as possible to the original one. Potential applications of this method include audio production, non-intrusive acquisition of room geometry, and audio forensics.
Convention Paper 8894 (Purchase now)
P14-4 Continuous Mobile Communication with Acoustic Co-Location Detection—Robert Albrecht, Aalto University - Espoo, Finland; Sampo Vesa, Nokia Research Center - Nokia Group, Finland; Jussi Virolainen, Nokia Lumia Engineering - Nokia Group, Finland; Jussi Mutanen, JMutanen Software - Jyväskylä, Finland; Tapio Lokki, Aalto University - Aalto, Finland
In a continuous mobile communication scenario, e.g., between co-workers, participants may occasionally be located in the same space and thus hear each other naturally. To avoid hearing echoes, the audio transmission between these participants should be cut off. In this paper an acoustic co-location detection algorithm is proposed for the task, grouping participants together based solely on their microphone signals and mel-frequency cepstral coefficients thereof. The algorithm is tested both on recordings of different communication situations and in real time, integrated into a voice-over-IP communication system. Tests on the recordings show that the algorithm works as intended, and the evaluation using the voice-over-IP conferencing system shows that the algorithm improves the overall clarity of communication compared with not using it. The acoustic co-location detection algorithm thus proves to be a useful aid in continuous mobile communication systems.
Convention Paper 8895 (Purchase now)
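The grouping idea can be illustrated with a toy version: time-aligned MFCC streams from microphones in the same room are strongly correlated, so participants can be clustered by thresholding a normalized correlation between their feature streams. This is an illustrative sketch only; the threshold value and the greedy grouping are assumptions, not the published algorithm.

```python
import numpy as np

def colocate(mfcc_streams, threshold=0.9):
    """Group participants whose MFCC streams are highly correlated.

    mfcc_streams: list of time-aligned (frames, coeffs) arrays, one per
    participant. Returns a list of groups of participant indices.
    """
    n = len(mfcc_streams)
    # Flatten and remove the mean so the dot product measures correlation.
    flat = [s.ravel() - s.mean() for s in mfcc_streams]
    groups, assigned = [], [False] * n
    for i in range(n):
        if assigned[i]:
            continue
        group, assigned[i] = [i], True
        for j in range(i + 1, n):
            if assigned[j]:
                continue
            # Normalized cross-correlation of the two feature streams.
            r = np.dot(flat[i], flat[j]) / (
                np.linalg.norm(flat[i]) * np.linalg.norm(flat[j]) + 1e-12)
            if r > threshold:
                group.append(j)
                assigned[j] = True
        groups.append(group)
    return groups
```

In a real system the transmission between members of the same group would then be muted, since they already hear each other acoustically.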
P14-5 Advancements and Performance Analysis on the Wireless Music Studio (WeMUST) Framework—Leonardo Gabrielli, Università Politecnica delle Marche - Ancona, Italy; Stefano Squartini, Università Politecnica delle Marche - Ancona, Italy; Francesco Piazza, Università Politecnica delle Marche - Ancona, Italy
Music production devices and musical instruments can take advantage of IEEE 802.11 wireless networks for interconnection and audio data sharing. Previous works have shown that such networks can support high-quality audio streaming between devices at acceptable latencies in several application scenarios. In this work a prototype device discovery mechanism is described that improves ease of use and flexibility. A diagnostic tool that characterizes average network latency and packet loss is also described and provided to the community. Lower latencies are reported after software optimization, and the sustainability of multiple audio channels is demonstrated by means of experimental tests.
Convention Paper 8896 (Purchase now)
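The two figures such a diagnostic tool reports can be computed from sequence-numbered probe timestamps in a few lines. The helper below is hypothetical, not the WeMUST tool's actual code, and assumes the sender and receiver clocks are synchronized.

```python
def link_stats(sent, received):
    """Average one-way latency and packet-loss ratio from probe timestamps.

    sent: {seq: tx_time} for every probe transmitted.
    received: {seq: rx_time} for every probe that arrived.
    """
    # Latency is measurable only for probes that actually arrived.
    latencies = [received[s] - sent[s] for s in sent if s in received]
    loss = 1.0 - len(latencies) / len(sent)
    avg_latency = sum(latencies) / len(latencies) if latencies else float("nan")
    return avg_latency, loss
```

For example, four probes with one drop yield a 25% loss ratio and an average latency over the three surviving probes.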
P14-6 Acoustical Characteristics of Vocal Modes in Singing—Eddy B. Brixen, EBB-consult - Smorum, Denmark; Cathrine Sadolin, Complete Vocal Institute - Copenhagen, Denmark; Henrik Kjelin, Complete Vocal Institute - Copenhagen, Denmark
The Complete Vocal Technique defines four vocal modes: Neutral, Curbing, Overdrive, and Edge. These modes apply to both the singing voice and the speaking voice. The modes are clearly identified both by listening and by visual laryngograph inspection of the vocal cords and the surrounding area of the vocal tract. In recent work a model was described to distinguish between the modes based on acoustical analysis. This paper looks further into the characteristics of the vocal modes in singing in order to test the model already provided. The conclusion is that the model is too simple to cover the full range. The work has also provided information on singers’ SPL and on the repositioning of formants as a function of pitch. Further work is recommended.
Convention Paper 8897 (Purchase now)
Tuesday, May 7, 11:45 — 13:15 (Auditorium Loyola)
Tutorial: T16 - Microphones: The Physics, Metaphysics, and Philosophy
Presenter:
Ron Streicher, Pacific Audio-Visual Enterprises - Pasadena, CA, USA
Before you can place the first microphone in the studio, you need a clear understanding of the sound you want to emanate from the loudspeakers when the project is finished. To do this, you need to determine which elements are essential for creating the “sonic illusion,” and then decide how to balance the often conflicting elements and competing demands of technology vs. art. Microphone techniques—although critical—are only a part of this process. Equally important are the criteria for monitoring and evaluating the results. Using recorded examples and practical demonstrations, the various aspects of this creative process are developed and brought into focus.