AES New York 2015
Engineering Brief Details

EB1 - Transducers—Part 1

Thursday, October 29, 12:00 pm — 12:45 pm (Room 1A07)

Michael Smithers, Dolby Laboratories - Sydney, NSW, Australia

EB1-1 Wireless Speaker Synchronization: Solved
Simon Forrest, Imagination Technologies - Hertfordshire, UK
Many high-end stereo systems offer the opportunity to connect several speakers together wirelessly to create a multi-room audio experience. However, linking speakers wirelessly to create stereo pairs or surround sound systems is technically challenging, due to the extremely tight synchronization necessary to accurately reproduce a faithful sound stage and maintain channel separation. Imagination measures several competing technologies on the market today and illustrates how innovative application of Wi-Fi networking protocols in audio chips can deliver several orders of magnitude of improvement, creating opportunities for high-quality wireless audio and producing results that are indistinguishable from wired speaker systems.
Engineering Brief 202 (Download now)

EB1-2 Multiphysical Simulation Methods for Loudspeakers—Advanced CAE-Based Simulations of Vibration Systems
Alfred Svobodnik, Konzept-X GmbH - Karlsruhe, Germany; Roger Shively, JJR Acoustics, LLC - Seattle, WA, USA; Marc-Olivier Chauveau, Moca Audio - Tours, France; Tommaso Nizzoli, Acoustic Vibration Consultant - Reggio Emilia, Italy; Dieter Thöres, Konzept-X GmbH - Karlsruhe, Germany
This is the second in a series of papers on the details of loudspeaker design using multiphysical computer-aided engineering simulation methods. In this paper the simulation methodology for accurately modeling the structural dynamics of loudspeakers' vibration systems will be presented. Primarily, the calculation of stiffness, or its inverse, compliance, in the virtual world will be demonstrated. Furthermore, the predictive simulation of complex vibration patterns, e.g., rocking or break-up, will be shown. Finally, the simulation of coupling effects to the motor system will be discussed. Results will be presented correlating the simulated model results to the measured physical parameters. From that, the important aspects of the modeling that determine its accuracy will be discussed.
Engineering Brief 203 (Download now)

EB1-3 New Design Methodologies in Mark Levinson Amps
Todd Eichenbaum, Harman Luxury Audio - Shelton, CT, USA
The HARMAN Luxury Audio electronics engineering team has designed a completely new generation of Mark Levinson amplifiers. Combining tried-and-true technologies with innovative implementations and unique improvements has yielded products with exemplary measured and subjective performance. In this article are circuit design highlights of the No536, a fully balanced mono power amplifier rated for 400W/8 ohms and 800W/4 ohms.
Engineering Brief 204 (Download now)


EB2 - Spatial Audio

Thursday, October 29, 5:30 pm — 6:30 pm (Room 1A08)

Bryan Martin, McGill University - Montreal, QC, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) - Montreal, QC, Canada

EB2-1 Array-Based HRTF Pattern Emulation for Auralization of 3D Outdoor Sound Environments with Direction-Based Muffling of Sources
Pieter Thomas, Ghent University - Gent, Belgium; Timothy Van Renterghem, Ghent University - Ghent, Belgium; Dick Botteldooren, Ghent University - Ghent, Belgium
Spatial audio reproduction techniques are widely employed for the subjective analysis of concert halls and, more recently, complex outdoor sound environments. In this work a binaural reproduction technique is developed based on a 32-channel spherical microphone array, optimized for the simulation of a virtual microphone with directional characteristics that approximate the directivity of the human head. A set of weights is calculated for each microphone of the constituting array based on a regularized least-squares solution. This technique allows for adaptation of the auditory scene based on source direction. The performance of variants of the technique has been evaluated by means of listening tests. Furthermore, its use for the auralization of outdoor soundscapes has been illustrated.
Engineering Brief 205 (Download now)
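
The regularized least-squares weight calculation mentioned above can be sketched as a Tikhonov-regularized fit of the array's steering matrix to a target directivity pattern. This is an illustrative toy, not the authors' implementation: the steering matrix, the cardioid-like target, and all names are assumptions.

```python
import numpy as np

def regularized_ls_weights(A, b, lam=1e-2):
    """Solve min_w ||A w - b||^2 + lam*||w||^2 (Tikhonov regularization).

    A   : (n_directions, n_mics) complex steering matrix of the array
    b   : (n_directions,) target directivity pattern (e.g., head-like)
    lam : regularization constant limiting white-noise gain
    """
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ b)

# Toy example: 8 plane-wave directions, 4 microphones on a line
angles = np.linspace(0, np.pi, 8)
mic_pos = np.arange(4) * 0.05                  # 5 cm spacing
k = 2 * np.pi * 1000 / 343.0                   # wavenumber at 1 kHz
A = np.exp(1j * k * np.outer(np.cos(angles), mic_pos))
b = np.cos(angles / 2) ** 2                    # cardioid-like target pattern
w = regularized_ls_weights(A, b)
achieved = A @ w                               # realized directivity pattern
```

The regularization term trades pattern-matching accuracy for robustness against microphone noise and mismatch, which is why regularized rather than plain least squares is typically used for compact arrays.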

EB2-2 Polar Pattern Comparisons for the Left, Center, and Right Channels in a 3-D Microphone Array
Margaret Luthar, Sonovo Mastering - Stavanger, Norway; Elaine Maltezos, University of Stavanger - Bergen, Norway
Standard 5.1 microphone arrays are long established and have been applied to psychoacoustic research, as well as for commercial purposes in film and music. Recent interest in the creative possibilities of “3-D audio” (a lateral layer of microphones, as well as an additional height layer) has led to research in both adapting 5.1 arrays for 3-D recordings as well as creating new methods to better capture the listener’s experience. The LCR configuration in a 5.1 array is a factor that contributes to the stability and localization of the auditory image in the horizontal plane. In this experiment, two different LCR configurations have been adapted for 9.1 in a traditional concert-recording environment. They are then compared in various combinations for their ability to produce a stable, natural, and effective frontal image in a 9.1 reproduction method. Preliminary listening suggests that the polar characteristics of the L, C, and R microphones do affect the sense of envelopment, spaciousness, and localization of the frontal image, as well as cohesiveness within the entire 9.1 image. These results have led to options for further study, as suggested by the researchers.
Engineering Brief 206 (Download now)

EB2-3 Coding Backward Compatible Audio Objects with Predictable Quality in a Very Spatial Way
Stanislaw Gorlow, Gorlow Brainworks - Paris, France
A gradual transition from channel-based to object-based audio can currently be observed throughout the film and broadcast industries. One paramount example of this trend is the new MPEG-H 3D Audio standard, which is under development. Other object-based standards in the marketplace are DTS:X and Dolby Atmos. In this engineering brief a newly developed prototype of an object-based audio coding system is introduced and discussed in terms of its technical characteristics. The codec can be of use wherever a given sound scene is to be re-rendered according to the listener’s preference or environment in a backward-compatible manner. The areas of application cover not only interactive music listening and remixing, but also location-dependent, immersive, and 3D audio rendering.
Engineering Brief 207 (Download now)

EB2-4 Decorrelated Audio Imaging in Radial Virtual Reality Environments
Bryan Dalle Molle, University of Illinois at Chicago - Chicago, IL, USA; James Pinkl, University of Illinois at Chicago - Chicago, IL, USA; Mark Blewett, University of Illinois at Chicago - Chicago, IL, USA
University of Illinois at Chicago's CAVE2 is a large-scale, 320-degree radial visualization environment with a 360-degree 20.2-channel radial speaker system. The purpose of our research is to develop solutions for spatially accurate playback of audio within a virtual reality environment, reconciling differences between the circular speaker array, the location of a user in the physical space, and the location of virtual sound objects within CAVE2’s OmegaLib virtual reality software, all in real time. Previous research presented at AES 137 detailed our work on object geometry, dynamically mapping a virtual object’s width and distance to the speaker array with volume and delay compensation. Our recent work improves virtual width perception using dynamic decorrelation with transient fidelity, implemented via SuperCollider on the CAVE2 sound server.
Engineering Brief 208 (Download now)


EB3 - Transducers—Part 2

Friday, October 30, 12:00 pm — 1:00 pm (Room 1A07)

Michael Smithers, Dolby Laboratories - Sydney, NSW, Australia

EB3-1 Dual Filtering Technique for Speech Signal Enhancements
Mahdi Ali, Hyundai America Technical Center, Inc. - Superior Township, MI, USA
Hands-free applications in automobiles, such as voice recognition and Bluetooth communications, have been among the great features added to vehicles’ infotainment systems. However, cabin and road noises degrade audio quality and negatively impact consumers’ experience. Research has been conducted and several noise reduction techniques have been proposed; however, due to the complexity of noise environments inside and outside vehicles, hands-free sound quality still poses an issue for consumers. This persistent problem requires more research in this field. This paper proposes a novel technique to reduce noise and enhance voice recognition and Bluetooth audio quality in vehicles’ hands-free applications. It utilizes a dual Kalman Filtering (KF) technique to suppress noise. This method has been validated in a MATLAB/Simulink simulation environment, which showed improvements in noise reduction in both Gaussian and non-Gaussian environments.
Engineering Brief 209 (Download now)

EB3-2 Implementation of Segmented Circular-Arc Constant Beamwidth Transducer (CBT) Loudspeaker Arrays
D.B. (Don) Keele, Jr., DBK Associates and Labs - Bloomington, IN, USA
Circular-arc loudspeaker line arrays composed of multiple loudspeaker sources are used very frequently in loudspeaker applications to provide uniform vertical coverage [1, 2, and 4]. To simplify these arrays, the arrays may be formed using multiple straight-line segments or individual straight-line arrays. This approximation has errors because some of the speakers are no longer located on the circular arc and exhibit a “bulge error.” This error decreases as the number of segments increases or the splay angle of an individual straight segment is decreased. The question is: How small does the segment splay angle have to be so that the overall performance is not compromised compared to the non-segmented version of the array? Based on two simple spacing limitations that govern the upper operating frequency for each type of array, this paper shows that the bulge deviation should be no more than about one-fourth the center-to-center spacing of the sources located on each straight segment and that, surprisingly, the maximum splay angle and array radius depend only on the number (N) of equally-spaced sources on a straight segment. As the number of sources on a segment increases, the maximum segment splay angle decreases and the required minimum array radius of curvature increases. Design guidelines are presented that allow the segmented array to have nearly the same performance as the accurate circular-arc array.
Engineering Brief 210 (Download now)
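
The bulge (sagitta) geometry behind this criterion can be sketched numerically. The function names and bisection approach below are illustrative assumptions; only the bulge ≤ spacing/4 rule is taken from the abstract.

```python
import math

def bulge(radius, splay_deg):
    """Sagitta (maximum deviation from the arc) of a chord subtending
    `splay_deg` degrees on a circle of the given radius."""
    return radius * (1 - math.cos(math.radians(splay_deg) / 2))

def min_radius(n_sources, spacing):
    """Smallest arc radius keeping the bulge <= spacing/4 for one straight
    segment of n_sources equally spaced drivers (chord = (n-1)*spacing).

    Uses bisection: bulge shrinks monotonically as the radius grows
    (small-angle limit: bulge ~ chord^2 / (8 R))."""
    chord = (n_sources - 1) * spacing
    lo, hi = chord / 2, 1e6 * spacing       # lo violates, hi satisfies
    for _ in range(100):
        r = 0.5 * (lo + hi)
        theta = 2 * math.asin(min(1.0, chord / (2 * r)))  # splay angle (rad)
        if r * (1 - math.cos(theta / 2)) > spacing / 4:
            lo = r
        else:
            hi = r
    return hi
```

Consistent with the brief's conclusion, the required minimum radius from this sketch grows as the number of sources per segment increases, since the chord lengthens while the allowed bulge stays fixed at a quarter of the driver spacing.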

EB3-3 Speech Intelligibility Advantages Using an Acoustic Beamformer Display
Durand Begault, NASA Ames Research Center - Moffett Field, CA, USA; Kaushik Sunder, NASA Ames Research Center - Moffett Field, CA, USA; San Jose State University Foundation - San Jose, CA, USA; Martine Godfroy, NASA Ames Research Center - Moffett Field, CA, USA; San Jose State University Foundation; Peter Otto, UC San Diego - La Jolla, CA, USA
A speech intelligibility test conforming to the Modified Rhyme Test of ANSI S3.2 “Method for Measuring the Intelligibility of Speech Over Communication Systems” was conducted using a prototype 12-channel acoustic beamformer system. The target speech material (signal) was identified against speech babble (noise), with calculated signal-noise ratios of 0, 5, and 10 dB. The signal was delivered at a fixed beam orientation of 135 degrees (re 90 degrees as the frontal direction of the array) and the noise at 135 degrees (co-located) and 0 degrees (separated). A significant improvement in intelligibility from 58% to 75% was found for spatial separation at the same signal-noise ratio (0 dB). Significant effects of improved intelligibility due to spatial separation were also found for higher signal-noise ratios.
Engineering Brief 211 (Download now)

EB3-4 Designing Near-Field MVDR Acoustic Beamformers for Voice User Interfaces
Andrew Stanford-Jason, XMOS Ltd. - Bristol, UK
We present an analysis and design recommendations for a reduced-computational-complexity minimum variance distortionless response (MVDR) beamforming microphone. An overview of MVDR beamforming is given and then decomposed into a generalized implementation to aid mapping to a microcontroller, with some discussion of optimizations for real-world performance.
Engineering Brief 212 (Download now)
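
The core MVDR weight computation that such a beamformer builds on can be sketched as follows. This is a generic textbook formulation, not the authors' microcontroller implementation; the diagonal loading amount and all names are illustrative assumptions.

```python
import numpy as np

def mvdr_weights(R, d, diag_load=1e-3):
    """MVDR: minimize w^H R w subject to the distortionless constraint
    w^H d = 1, giving w = R^-1 d / (d^H R^-1 d).

    R : (M, M) noise covariance matrix; d : (M,) steering vector.
    Diagonal loading stabilizes the inverse for ill-conditioned
    sample covariances encountered in practice."""
    m = len(d)
    Rl = R + diag_load * np.trace(R).real / m * np.eye(m)
    Rinv_d = np.linalg.solve(Rl, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Toy: 4-microphone array, broadside look direction at low frequency
M = 4
d = np.ones(M, dtype=complex)                  # broadside steering vector
rng = np.random.default_rng(0)
X = rng.standard_normal((M, 1000)) + 1j * rng.standard_normal((M, 1000))
R = X @ X.conj().T / 1000                      # sample noise covariance
w = mvdr_weights(R, d)
```

The distortionless constraint means the weights pass the look-direction signal at unit gain while minimizing output power from every other direction, which is the property a reduced-complexity implementation must preserve.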


EB4 - Listening, Hearing, & Production

Friday, October 30, 5:30 pm — 6:45 pm (Room 1A07)

Bruno Fazenda, University of Salford - Salford, Greater Manchester, UK

EB4-1 Why Do My Ears Hurt after a Show (And What Can I Do to Prevent It)
Dennis Rauschmayer, REVx Technologies/REV33 - Austin, TX, USA
In this brief we review the traditional methods of preventing ear fatigue, short-term ear damage, and long-term ear damage. A new method to prevent ear fatigue, focused on performing musicians, is then presented. This method, which reduces noise and distortion in the artist’s mix, is discussed. Qualitative and quantitative results from a series of trials and experiments are presented. Qualitative results from artist feedback indicate less ear fatigue, less ringing in the ears, and a better ability to hold normal conversations after a performance when noise and distortion in the mix are reduced. Quantitative results are consistent with the qualitative results and show a reduction in the change in otoacoustic emissions measured for a set of musicians when noise and distortion are reduced. The results of the study suggest that this is an important new tool for musicians to combat ear fatigue and short-term hearing loss.
Engineering Brief 213 (Download now)

EB4-2 Classical Recording with Custom Equipment in South Brazil
Marcelo Johann, UFRGS - Porto Alegre, RS, Brazil; Andrei Yefinczuk, UFRGS - Porto Alegre, Brazil; Marcio Chiaramonte, Meber Metais - Bento Gonçalves, Brazil; Hique Gomez, Instituto Marcello Sfoggia - Porto Alegre, Brazil
This paper describes the process developed by Marcello Sfoggia for recording acoustic and classical music in southern Brazil, making intensive use of custom equipment. Sfoggia spent most of his lifetime building dedicated circuits to optimize sound reproduction and recording. He took on the task of registering major performances in the city of Porto Alegre using his home-developed equipment, which became a reference process. We describe the system employed for both sound capture and mixdown. Key components of the signal flow include preamplifiers with precision op-amps, short signal paths, modified A/D/A converters, and a mixing desk with pure vacuum tube circuitry. Finally, we address our current efforts to continue his activities and improve upon his system with updated circuits and techniques.
Engineering Brief 214 (Download now)

EB4-3 Techniques for Mixing Sample-Based Music
Paul "Willie Green" Womack, Willie Green Music - Brooklyn, NY, USA
Samples are a great way to add impact, vibe, and texture to a song and can often be the primary component of a new work. From a production standpoint, audio that is already mixed and mastered can add to a producer’s sonic palette. From a mixing perspective, however, these same bonuses also provide a number of challenges. Looking more closely at each of the common issues an engineer often faces with sample-based music, I will illustrate techniques that can enable an engineer to better manipulate a sample, allowing it to sit more naturally inside the mix as a whole.
Engineering Brief 215 (Download now)

EB4-4 Case Studies of Inflatable Low- and Mid-Frequency Sound Absorption Technology
Niels Adelman-Larsen, Flex Acoustics - Copenhagen, Denmark
Surveys among professional musicians and sound engineers reveal that a long reverberation time at low frequencies in halls during concerts of reinforced music is a common cause of an unacceptable-sounding event. Mid- and high-frequency sound is seldom a reason for lack of clarity and definition, due to roughly six times higher absorption by the audience compared to low frequencies and the higher directivity of speakers at these frequencies. Within the genre of popular music, however, lower-frequency sounds are rhythmically very active and loud, and a long reverberation leads to a situation where the various notes and sounds cannot be clearly distinguished. This reverberant bass rumble often partially masks even the direct higher-pitched sounds. A new technology of inflated, thin plastic membranes appears to solve this challenge of needed low-frequency control. It is equally suitable for multipurpose halls that need to adjust their acoustics at the push of a button and for halls and arenas that only occasionally present amplified music and need to be treated just for the event. This paper presents the author’s research as well as the technology, showing applications in dissimilarly sized venues, including before-and-after measurements of reverberation time versus frequency.
Engineering Brief 216 (Download now)

EB4-5 Advanced Technical Ear Training: Development of an Innovative Set of Exercises for Audio Engineers
Denis Martin, McGill University - Montreal, QC, Canada; CIRMMT - Montreal, QC, Canada; George Massenburg, Schulich School of Music, McGill University - Montreal, Quebec, Canada; Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) - Montreal, Quebec, Canada
There are currently many automated software solutions to tackle the issue of timbral/EQ training for audio engineers but only limited offerings for developing other skills needed in the production process. We have developed and implemented a set of matching exercises in Pro Tools that fill this need. Presented with a reference track, users are trained in matching inter-instrument levels/gain, lead instrument volume automation, instrument spatial positioning/panning, reverberation level, and compression settings on a lead element within a full mix. The goal of these exercises is to refine the listener’s degree of perception along these production parameters and to train the listener to associate these perceived variations to objective parameters they can control. We also discuss possible future directions for exercises.
Engineering Brief 217 (Download now)


EB5 - Acoustics

Saturday, October 31, 3:45 pm — 4:15 pm (Room 1A07)

Jung Wook (Jonathan) Hong, McGill University - Montreal, QC, Canada; GKL Audio Inc. - Montreal, QC, Canada

EB5-1 Visualization of Compact Microphone Array Room Impulse Responses
Luca Remaggi, University of Surrey - Guildford, Surrey, UK; Philip Jackson, University of Surrey - Guildford, Surrey, UK; Philip Coleman, University of Surrey - Guildford, Surrey, UK; Jon Francombe, University of Surrey - Guildford, Surrey, UK
For many audio applications, availability of recorded multichannel room impulse responses (MC-RIRs) is fundamental. They enable development and testing of acoustic systems for reflective rooms. We present multiple MC-RIR datasets recorded in diverse rooms, using up to 60 loudspeaker positions and various uniform compact microphone arrays. These datasets complement existing RIR libraries and have dense spatial sampling of a listening position. To reveal the encapsulated spatial information, several state-of-the-art room visualization methods are presented. Results confirm the measurement fidelity and graphically depict the geometry of the recorded rooms. Further investigation of these recordings and visualization methods will facilitate object-based RIR encoding, integration of audio with other forms of spatial information, and meaningful extrapolation and manipulation of recorded compact microphone array RIRs.
Engineering Brief 218 (Download now)

EB5-2 Sensible 21st Century Saxophone Selection
Thomas Mitchell, University of Miami - Coral Gables, FL, USA
This paper presents a method for selecting a saxophone, using data mining techniques with both subjective and objective data as criteria. Immediate, subjective personal impressions are given equal weight with more-objective observations made after the fact, and with hard data distilled from audio data using MIR Toolbox. Offshoots and directions for future research are considered.
Engineering Brief 219 (Download now)


EB6 - Posters 1

Sunday, November 1, 10:00 am — 11:30 am (S-Foyer 1)

EB6-1 Duplex Panner: Spatial Source Panning for Commercial Music Applications
Samuel Nacach, Element Audio Group - New York, USA; New York University, Abu Dhabi - Abu Dhabi, UAE
The Duplex Panner, introduced at the 137th AES Convention, combines elements from binaural processing, Ambiophonics, the Haas effect, and other widening techniques to develop a tool that, when listening on headphones, renders preferred stereo imagery over the unprocessed version of the same musical content. To understand whether this algorithm can translate to loudspeaker systems without distortion, this paper examines the methodologies employed to achieve spatial panning: how the algorithm was built, how the processing affects the signal, and accordingly what its psychoacoustic implications may be. Through this detailed analysis, we conclude that, unlike other spatial panning techniques, the Duplex Panner is unlikely to be constrained by physical or psychoacoustic limitations in either headphone or loudspeaker systems.
Engineering Brief 220 (Download now)

EB6-2 Development of the Sound Field 3D Intensity Probe Based on Miniature Microphones
Jozef Kotus, Gdansk University of Technology - Gdansk, Poland; W. Moskwa; Andrzej Czyzewski, Gdansk University of Technology - Gdansk, Poland; Bozena Kostek, Gdansk University of Technology - Gdansk, Poland; Audio Acoustics Lab.
The engineered measuring probe uses three coupled pairs of miniature microphones. The signals from the microphones, after initial amplification, are fed to differential circuits. Due to the required symmetry of the circuit it was necessary to select electronic components very carefully. Moreover, additional digital signal processing techniques were applied to avoid amplitude and phase mismatch. The engineered probe is presented in photographs. Characteristics of the probe measured in an anechoic chamber are attached, followed by a discussion of the achieved results. The obtained results were compared with the reference USP probe produced by the Microflown company.
Engineering Brief 221 (Download now)

EB6-3 GaME: Game for Music Education
Raphaël Marczak, Aquitaine Science Transfert - Pessac, France; Pierre Hanna, LaBRI - University of Bordeaux - Talence, France; Matthias Robine, LaBRI - University of Bordeaux - Talence, France; Elodie Duru, Aquitaine Science Transfert
Music teachers wish that their students spend as much time as possible with their instrument in hand between lessons. By using methods derived from game studies and computer science, GaME offers a ludo-pedagogical solution for keeping young audiences motivated. The motivation is sustained through the use of well-designed involvement mechanisms and real-time feedback about performance. GaME relies on signal processing algorithms for extracting and comparing musical information, thus enabling automatic recognition of the chords and notes as actually played by the musician. GaME can be played on a computer, tablet, or smartphone, and even online. GaME includes a score editor and a gameplay metric system providing feedback that helps teachers and parents create new levels based on specific musical concepts.
Engineering Brief 222 (Download now)

EB6-4 User-Interactive Binaural Rendering Algorithm Using Head-Related Transfer Function and Reverberation
Hyun Jo, DMC R&D Center, Samsung Electronics Co. - Suwon, Gyeonggi-do, Korea; Jaeha Park, Samsung Electronics Co. Ltd. - Suwon, Gyeonggi-do, Korea; Sangmo Son, DMC R&D Center, Samsung Electronics Co. - Suwon, Gyeonggi-do, Korea; Sunmin Kim, DMC R&D Center, Samsung Electronics Co. - Suwon, Gyeonggi-do, Korea
This paper introduces an adaptive binaural rendering algorithm that renders a sound image at a desired location for user-interactive headphone listening. The proposed algorithm provides steady sound localization during the listener's head movement by minimizing both localization error and the timbral degradation caused by HRTF filtering. This is achieved by direct-ambient separation of the input channel signal and the corresponding HRTF filtering, with desired reverberation, for the listener's head position. A set of experiments shows that the proposed algorithm provides precise localization.
Engineering Brief 223 (Download now)

EB6-5 Computational "Drop" Detection in Modern Dance Music
Andrew Ortiz, University of Miami - Coral Gables, FL, USA; Colby N. Leider, University of Miami - Coral Gables, FL, USA
Many of today’s popular dance music records are identifiable by a “drop”—a section of the song that is commonly the highest in both listener-perceived and actual signal energy. In this paper we examine several computational methods for locating the exact time at which the drop occurs in a given audio sample. Various metrics are compared and contrasted based on relevant audio signal features. This technology has potential applications within automated DJ software, online music streaming services, computational ethnomusicology research, and more.
Engineering Brief 224 (Download now)
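
A crude energy-based version of the drop-detection idea can be sketched as below. The frame length, the synthetic test signal, and all names are assumptions for illustration; the brief itself compares several more sophisticated feature-based metrics.

```python
import numpy as np

def detect_drop(x, sr, frame_sec=1.0):
    """Return the start time (s) of the frame with maximum RMS energy,
    a crude proxy for a dance-music 'drop', plus the per-frame RMS."""
    n = int(sr * frame_sec)
    n_frames = len(x) // n
    frames = x[: n_frames * n].reshape(n_frames, n)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return int(np.argmax(rms)) * frame_sec, rms

# Synthetic track: quiet build-up, loud bass "drop" entering at 8 s
sr = 8000
t = np.arange(12 * sr) / sr
x = 0.1 * np.sin(2 * np.pi * 220 * t)
x[8 * sr:] += 0.8 * np.sin(2 * np.pi * 55 * t[8 * sr:])
drop_time, rms = detect_drop(x, sr)
```

Real tracks need more than raw energy (e.g., spectral flux or low-band emphasis) because mastering compression flattens level differences, which is presumably why the brief compares multiple signal features.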

EB6-6 RECUERDAME
Jorge Sierra Aguilar, Sr., San Buenaventura - Bogota, Cundinamarca, Colombia; Eduard Enrique Ramirez Garcia, Universidad San Buenaventura Bogota - Bogota, Cundinamarca, Colombia; Juan David Valencia, Sr., Universidad San Buenaventura - Bogota, Cundinamarca, Colombia
RECUERDAME is a portable, practical tool intended for patients diagnosed with Alzheimer's disease in stages 2 through 5 of the Global Deterioration Scale (GDS). This tool provides rehabilitation intervention and cognitive stimulation through a game in which, through the interaction of visual and auditory stimuli, the patient learns to recognize family members by the name requested by the device. This seeks to help reduce the cognitive impairment of a person with Alzheimer's disease. The device will be developed and pilot tested with patients with a view toward generalized treatment of this disease.
Engineering Brief 225 (Download now)

EB6-7 Measurements of Spherical Microphone Array Characteristics in an Anechoic Room
Tomasz Zernicki, Zylia sp. z o.o. - Poznan, Poland; Lukasz Januszkiewicz, Zylia Sp. z o.o. - Poznan, Poland; Marcin Chryszczanowicz, Zylia sp. z.o.o. - Poznan, Poland; Piotr Makaruk, Zylia - Poznan, Poland; Jakub Zamojski, Zylia sp. z.o.o. - Poznan, Poland
This paper describes a measurement methodology for a spherical microphone array designed and developed for the purpose of soundfield recording. The presented work focuses mainly on the practical aspects of microphone array impulse response measurements in an anechoic environment. The main assumption is that proper acquisition of impulse response coefficients provides crucial information about the characteristics of the microphones and the acoustic shadow of the sphere. The registered impulse responses are further used for generating beam patterns and building a Higher Order Ambisonics microphone.
Engineering Brief 226 (Download now)

EB6-8 Sound Field Recording Using Wireless Digital Distributed Microphone Array
Marzena Malczewska, Zylia sp. z.o.o. - Poznan, Poland; Andrzej Ruminski, Zylia sp. z.o.o. - Poznan, Poland; Piotr Szczechowiak, Zylia sp. z.o.o. - Poznan, Poland; Tomasz Zernicki, Zylia sp. z o.o. - Poznan, Poland
This paper presents the development challenges of building a Wireless Acoustic Sensor Network (WASN) using common IoT devices (BeagleBone Black). Such a system can be used for sound field recording, audio object separation, tracking, etc. In our scenario we focus on recording multiple sound sources in the case of a mobile recording studio. The major challenges are related to audio streaming from multiple sensors; therefore, this paper focuses on analyzing a set of parameters including synchronization accuracy, end-to-end latency, packet loss, and audio compression efficiency. Experimental results have shown that it is possible to achieve synchronization at the level of microseconds, as well as end-to-end latency below 10 ms using the Opus codec.
Engineering Brief 227 (Download now)


EB7 - Posters 2

Sunday, November 1, 12:00 pm — 1:30 pm (S-Foyer 1)

EB7-1 Measuring Speech Intelligibility Loss in Single-Driver Panel Loudspeakers
David Anderson, University of Rochester - Rochester, NY, USA; Mark F. Bocko, University of Rochester - Rochester, NY, USA
The impulse response of a panel loudspeaker with a single moving-coil driver contains ringing due to the resonant frequencies, but the implication of this type of response for intelligible reproduction of speech signals is the subject of some debate. The impulse responses of three examples of such loudspeakers of various sizes and materials were measured in an anechoic environment and compared to that of a conventional speaker. Reverberation effects are clear and calculation of the Speech Transmission Index (STI) confirms a loss of intelligibility; the STI values of the plate loudspeakers are 6% to 13% lower than that of the conventional speaker. Spectrograms of reproduced speech by each plate also show a considerable loss of detail.
Engineering Brief 228 (Download now)

EB7-2 Vibrational Analysis of Vintage Planar Loudspeakers
Michael Heilemann, University of Rochester - Rochester, NY, USA; David Anderson, University of Rochester - Rochester, NY, USA; Mark F. Bocko, University of Rochester - Rochester, NY, USA
Recently, there has been a strong interest in the development of flat-panel loudspeakers. The Yamaha JA4001 and the Poly-planar P20 represent two early attempts at commercializing the technology. The responses of both loudspeakers were analyzed using a laser vibrometer. The scans for each panel depict sharp peaks in the frequency response, which correspond to resonant modes. The presence of additional modes is similar to the effect of cone breakup in traditional loudspeakers. Impulse response measurements show that low-frequency modes are highly reverberant. Studying these early planar loudspeakers can provide valuable insight for the further development of such technology.
Engineering Brief 229 (Download now)

EB7-3 A Database of Loudspeaker Polar Radiation Measurements
Joseph G. Tylka, 3D3A Lab, Princeton University - Princeton, NJ, USA; Rahulram Sridhar, 3D3A Lab, Princeton University - Princeton, NJ, USA; Edgar Choueiri, Princeton University - Princeton, NJ, USA
Anechoic directivity data for a variety of loudspeakers have been measured and compiled into a freely available online database, which may be used to evaluate these loudspeakers based on their directivities. The measurements are illustrated through four types of plots (frequency response, polar, contour, and waterfall) and are also given as raw impulse responses. Two sets of directivity metrics are defined and are used to rank the loudspeakers. The first set consists of full and partial directivity indices that isolate sections of the loudspeaker’s radiation pattern (e.g., forward radiation alone) and quantify its directivity over those sections. The second set quantifies the extent to which the loudspeaker exhibits constant directivity. Measurements are taken, in an anechoic chamber, along horizontal and vertical orbits with a (nominal) radius of 1.6 m and an angular resolution of five degrees.
Engineering Brief 230 (Download now)

EB7-4 Single-Channel Sound Source Separation Using NMF with Sparseness Constraints
Shijia Geng, University of Miami - Miami, FL, USA; Colby N. Leider, University of Miami - Coral Gables, FL, USA
While challenging, sound source separation is a task that has many practical applications in audio signal processing. In this paper three sound files, each containing two sources, were separated using the non-negative matrix factorization (NMF) approach, with and without sparseness constraints. The results showed that adding sparseness constraints had no effect when separating drums and bass guitar, but gave better performance when separating piano and drums, and piano and bass guitar.
Engineering Brief 231 (Download now)
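
Multiplicative-update NMF with an L1 sparseness penalty on the activations, the general family of technique named above, can be sketched as follows. The penalty placement, update rules, and all parameters are illustrative textbook assumptions, not the authors' exact formulation.

```python
import numpy as np

def sparse_nmf(V, rank, sparsity=0.01, n_iter=200, seed=0):
    """Factor a nonnegative matrix V ~ W @ H, minimizing
    ||V - W H||_F^2 + sparsity * sum(H) via multiplicative updates.
    The L1 term on H encourages sparse (mostly-zero) activations."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-9
    H = rng.random((rank, m)) + 1e-9
    for _ in range(n_iter):
        # Sparsity enters the denominator of the H update as a constant shift
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy "magnitude spectrogram" built from two known sources with
# sparse activations, so the rank-2 model is exact up to the penalty
rng = np.random.default_rng(1)
W_true = rng.random((20, 2))
H_true = rng.random((2, 50)) * (rng.random((2, 50)) > 0.5)
V = W_true @ H_true + 1e-6
W, H = sparse_nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

For audio separation the factorization is applied to a magnitude spectrogram; each column of W acts as a spectral template and each row of H as its time-varying activation, with source signals reconstructed by masking the mixture's complex spectrogram.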

EB7-5 Noise Robust End-Point Detection Algorithm Using Human Auditory and Pronunciation Characteristics
Jae-hoon Jeong, Samsung Electronics Co. Ltd. - Suwon, Korea; Min Seok Kwon; Seungyeol Lee, Samsung Electronics Co. Ltd. - Suwon-si, Gyeonggi-do, Korea; Young Woo Lee; Haruyuki Mori; Namgook Cho; Jae Won Lee, DMC R&D Center, Samsung Electronics Co. - Suwon, Gyeonggi-do, Korea
A noise-robust end-point detection (EPD) algorithm is proposed for speech recognition in real environments. Inaccurate end-point detection causes not only reduced speech recognition performance but also user fatigue. EPD algorithms based on energy-level change or speech presence probability are vulnerable to high-energy noises. After much of the noise is reduced by an auditory filter, the syllabic rate, a characteristic of human speech pronunciation, is used to check whether a speech component is still present. The proposed algorithm shows much better performance in real environments such as TV sound noise, café noise, etc.
Engineering Brief 233 (Download now)

EB7-6 Evaluation of Separation Techniques for Musical Instrument Recordings Using Microphone Array in a Rehearsal Room
Tomasz Zernicki, Zylia sp. z o.o. - Poznan, Poland; Lukasz Januszkiewicz, Zylia Sp. z o.o. - Poznan, Poland; Marcin Chryszczanowicz, Zylia sp. z.o.o. - Poznan, Poland; Piotr Makaruk, Zylia - Poznan, Poland; Jakub Zamojski, Zylia sp. z.o.o. - Poznan, Poland
This paper compares two different approaches for fast and simple separation of recordings of multiple musical instruments into individual tracks. A uniform circular microphone array is used in a rehearsal room to record musical instruments played simultaneously. A beamforming algorithm and additional signal post-processing are used to separate the individual instrument tracks. The separated tracks are compared to tracks recorded with a dedicated, highly directive (shotgun) microphone. The objective evaluation of results is made by calculating the signal-to-interference ratio (SIR). Additionally, subjective tests were performed in which listeners assessed the quality in terms of the level of interference signals.
Engineering Brief 234 (Download now)
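
The signal-to-interference ratio used for the objective evaluation reduces, in its simplest form, to an energy ratio between the target component and the residual interference in a separated track. The sketch and its synthetic "bleed" example are illustrative assumptions, not the authors' evaluation pipeline.

```python
import numpy as np

def sir_db(target, interference):
    """Signal-to-interference ratio in dB: the energy of the target
    component relative to the interference remaining in a separated track."""
    return 10 * np.log10(np.sum(target ** 2) / np.sum(interference ** 2))

# A separated track modeled as the target instrument plus attenuated
# bleed from another instrument (amplitude 0.1 -> ~20 dB power ratio)
rng = np.random.default_rng(0)
target = rng.standard_normal(8000)
bleed = 0.1 * rng.standard_normal(8000)
sir = sir_db(target, bleed)
```

In practice the two components are estimated by projecting the separated output onto the known reference sources (as in the BSS Eval methodology) rather than being directly available, but the final metric is this same log energy ratio.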

EB7-7 Monitoring and Authoring of 3D Immersive Next Generation Audio Formats
Peter Pörs, Junger Audio GmbH - Berlin, Germany
The next generation of immersive audio formats will require changes in the audio production workflow. Monitoring the audio along with authoring and verifying dynamic metadata will become a new challenge. New procedures need to be established for managing object-based encoded content, in the same way as for personalization of services through the selection of alternative audio objects (such as commentator languages). Loudness control during production and the loudness definition for the final output formats are other topics to consider. A Monitoring & Authoring Unit must be compatible with upcoming immersive multichannel 3D audio formats and should offer a platform to host all the emerging immersive 3D audio encoding formats from different vendors.
Engineering Brief 235 (Download now)



EXHIBITION HOURS
October 30th: 10 am - 6 pm
October 31st: 10 am - 6 pm
November 1st: 10 am - 4 pm

REGISTRATION DESK
October 28th: 3 pm - 7 pm
October 29th: 8 am - 6 pm
October 30th: 8 am - 6 pm
October 31st: 8 am - 6 pm
November 1st: 8 am - 4 pm

TECHNICAL PROGRAM
October 29th: 9 am - 7 pm
October 30th: 9 am - 7 pm
October 31st: 9 am - 7 pm
November 1st: 9 am - 6 pm