In This Section
Journal of the Audio Engineering Society
The Journal of the Audio Engineering Society — the official publication of the AES — is the only peer-reviewed journal devoted exclusively to audio technology. Published 10 times each year, it is available to all AES members and subscribers.
The Journal contains state-of-the-art technical papers and engineering reports; feature articles covering timely topics; pre- and post-reports of AES conventions and other society activities; news from AES sections around the world; Standards and Education Committee work; membership news, patents, new products, and newsworthy developments in the field of audio.
2015 October - Volume 63 Number 10
When listeners localize sound in the median (vertical) plane, binaural cues are absent because the signals at the two ears are identical; median-plane localization therefore depends solely on spectral cues. In order to analyze the localization of band-limited stimuli in vertical stereophony, listening tests were conducted using seven octave bands of pink noise centered at frequencies from 125 to 8000 Hz, as well as broadband pink noise. Experimental results showed that localization is generally governed by the so-called "pitch-height" effect, with high-frequency stimuli localized significantly higher than low-frequency stimuli for all conditions. The relationship between pitch and height was found to be nonlinear. As frequency increased, subjective judgments became more erratic because of interchannel time differences.
Effect of Microphone Number and Positioning on the Average of Frequency Responses in Cinema Calibration
When measuring the response of a loudspeaker by averaging multiple points in a room, the results typically vary according to the number of microphones employed and their positions. This report focuses on the application to cinema calibration, where one critical goal is to achieve consistent perceived loudness and frequency response between dubbing stages, where content is produced, and the various theaters where it is exhibited. It is shown that averaging converges to a compromise response over the relevant listening area at a rate inversely proportional to the square root of the number of microphones employed. Experimental results confirm the predicted scaling of the deviations and quantify their magnitude in typical rooms.
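The square-root convergence described above can be illustrated with a small Monte Carlo sketch. This is not the report's model or data; it assumes a simplified scenario in which each microphone position deviates from the true area-average response by independent, zero-mean variation, and all names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: the level (dB) measured at one microphone position
# deviates from the true area-average response by independent, zero-mean
# position-to-position variation with standard deviation sigma_1.
sigma_1 = 3.0      # dB, assumed spread for a single microphone position
n_trials = 20000   # Monte Carlo trials per microphone count

for n_mics in (1, 2, 4, 8, 16):
    # Average the per-position deviations over n_mics microphones.
    avg = rng.normal(0.0, sigma_1, size=(n_trials, n_mics)).mean(axis=1)
    measured = avg.std()
    predicted = sigma_1 / np.sqrt(n_mics)  # 1/sqrt(N) scaling
    print(f"{n_mics:2d} mics: std {measured:.2f} dB "
          f"(1/sqrt(N) predicts {predicted:.2f} dB)")
```

The simulated spread of the averaged response tracks sigma_1/sqrt(N), the scaling the report predicts; the real-room deviations quantified in the paper are of course correlated across positions and frequency, which this toy model ignores.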
Exponential swept-sine signals are very often used to analyze nonlinear audio systems. A reexamination of this methodology shows that a synchronization procedure is necessary for the proper analysis of higher harmonics. An analytical expression for the spectrum of the swept-sine signal is derived and used in the deconvolution of the impulse response. Matlab code for generation of the synchronized swept-sine, deconvolution, and separation of the impulse responses is given. This report provides a discussion of some application issues and an illustrative example: the higher harmonics of the current distortion of a woofer are analyzed using both the synchronized and the non-synchronized swept-sine signals, and the results are compared.
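The paper itself supplies Matlab code; as a hedged illustration only, the synchronization idea can be sketched in Python. The assumption here is the commonly published formulation in which the sweep rate L is rounded so that the sweep completes an integer number of cycles at the start frequency f1, which is what aligns the higher-harmonic impulse responses after deconvolution; the function name and parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def synchronized_sweep(f1, f2, T_approx, fs):
    """Generate a synchronized exponential swept-sine (sketch).

    L is rounded so that f1 * L is an integer, i.e. the instantaneous
    phase passes through a whole number of cycles at f1. The actual
    duration T then differs slightly from the requested T_approx.
    """
    L = np.round(f1 * T_approx / np.log(f2 / f1)) / f1
    T = L * np.log(f2 / f1)                     # synchronized duration
    t = np.arange(0, T, 1.0 / fs)
    x = np.sin(2.0 * np.pi * f1 * L * (np.exp(t / L) - 1.0))
    return x, T

fs = 48000
f1, f2, T_approx = 20.0, 20000.0, 10.0          # illustrative values
x, T = synchronized_sweep(f1, f2, T_approx, fs)
print(f"actual sweep length: {T:.4f} s, {len(x)} samples")
```

An unsynchronized sweep would simply use L = T_approx / log(f2 / f1) without rounding; the small resulting phase offset at f1 is what smears the separated harmonic impulse responses.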
When validating systems that use headphones to synthesize virtual sound sources, a direct comparison between virtual and real sources is sometimes needed, but the method can be difficult to implement. Often, the listener must wear the headphones throughout the experiment, which affects the sound transmission from the external loudspeakers to the ears. An analysis of the physical measurements highlighted that headphones cause a measurable spectral error in the HRTF. A maximum spectral ILD distortion of 26.52 dB was found for the closed-back headphones. In a localization study, head-movement data were used to obtain judgment profiles, which showed that participants took 0.2 s longer to reach their final judgments and made 0.1 more head turns. The authors recommend care when choosing headphones for scenarios in which a listener is presented with external acoustic sources. Results for different headphone designs highlight that the use of electrostatic transducers could help maintain natural acoustical perception.
3D multichannel audio systems employ additional elevated loudspeakers in order to provide listeners with a vertical dimension to their auditory experience. Listening tests were conducted to evaluate the feasibility of a novel vertical upmixing technique called "perceptual band allocation (PBA)," which is based on a psychoacoustic principle of vertical sound localization, the "pitch-height" effect. The practical feasibility of the method was investigated using 4-channel ambience signals recorded in a reverberant concert hall using the Hamasaki-Square microphone technique. Results showed that the PBA-upmixed 3D stimuli produced 3D listener envelopment (LEV) significantly greater than or similar to that of the 9-channel 3D stimuli, depending on the sound source and the crossover frequency of PBA. They also produced significantly greater 3D LEV than the 7-channel 3D stimuli. In the preference tests, the PBA stimuli were significantly preferred over the original 9-channel stimuli.
Standards and Information Documents
AES Standards Committee News
Tom Kite; synchronization of digital audio; measurement and equalization of sound systems in rooms
Sound reinforcement professionals gathered in Montreal during the summer of 2015 to hear about the latest developments in system engineering and technology.
In audio, high-resolution sound should be natural, resembling real life, and many of the terms we use to qualify it, such as clarity, focus, transparency, and definition, are borrowed from vision. If sound is natural, objects should have clear locations (position and distance) and separate readily into perceptual streams, particularly where environmental reverberation causes multiple arrivals closely separated in time; temporal resolution of microstructure in sound is analogous to spatial resolution in vision.
AES Officers 2015/2016
Review of Sustaining Members
140th Convention, Paris, Call for Papers
2016 Sound Field Control Conference, Guildford, Call for Papers