AES E-Library Search Results

Search Results (Displaying 1-10 of 40 matches)

An Open Source Turntable for Electro-Acoustical Devices Characterization

This work introduces an Open Source turntable for the measurement of electro-acoustical devices. The idea is to provide an inexpensive and highly customizable device that can be adjusted according to specific measurement needs. In the past, developing such turntable devices required significant investment: specific mechanical and motor-control design skills were needed, leading to both costly and time-consuming processes. Recent developments in mechatronics and 3D printing make it possible to design and build a cost-effective solution.
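The measurement automation such a turntable targets can be sketched roughly as follows: rotate the device under test in fixed angular steps and capture a response at each step. Everything in the sketch below is an assumption for illustration (the serial port, the G-code command understood by a 3D-printer-style stepper controller, and the sweep parameters); none of it is taken from the e-brief.

```python
import numpy as np
import sounddevice as sd   # playback/recording on any full-duplex audio interface
import serial              # pyserial, talking to a hypothetical stepper controller

FS = 48000
SWEEP_SECONDS = 2.0

def log_sweep(f0=20.0, f1=20000.0, fs=FS, seconds=SWEEP_SECONDS):
    """Exponential sine sweep used as the excitation signal."""
    t = np.arange(int(fs * seconds)) / fs
    k = seconds / np.log(f1 / f0)
    return np.sin(2 * np.pi * f0 * k * (np.exp(t / k) - 1.0)).astype(np.float32)

# Hypothetical controller speaking G-code over USB serial (port name is an assumption).
ctrl = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)

def rotate_to(angle_deg):
    """Command an absolute rotation; how degrees map to an axis position
    depends entirely on the specific firmware and gearing."""
    ctrl.write(f"G0 A{angle_deg:.1f}\n".encode())

sweep = log_sweep()
responses = {}
for angle in range(0, 360, 5):                           # 5-degree polar resolution
    rotate_to(angle)
    rec = sd.playrec(sweep, samplerate=FS, channels=1)   # play sweep, record microphone
    sd.wait()
    responses[angle] = rec[:, 0]
```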

Recreating complex soundscapes for audio quality evaluation

A method for recreating complex soundscapes for tasks related to audio quality evaluation is presented. This approach uses an ambisonics-inspired basis for recreating dynamic noise in a system compatible with ETSI standard EG 202 396-1 for background noise reproduction. Recordings were captured with a spherical 32-microphone array and processed to match the two-dimensional four-loudspeaker array by creating four directional beams, each feeding an individual channel. As a result, a spatial background noise ambience is recreated, preserving the transient characteristics of the original recording.
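The beam-creation step described above can be illustrated with a generic delay-and-sum sketch that steers four horizontal beams toward the four loudspeaker directions. The array geometry, the chosen azimuths, and the use of plain delay-and-sum (rather than the paper's ambisonics-inspired processing) are assumptions for illustration only.

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def delay_and_sum(mic_signals, mic_positions, look_dir, fs):
    """Steer one delay-and-sum beam toward a unit look direction.

    mic_signals:   (num_mics, num_samples) array of recordings
    mic_positions: (num_mics, 3) Cartesian microphone positions in metres
    look_dir:      unit vector pointing toward the desired beam direction
    """
    num_mics, num_samples = mic_signals.shape
    # Arrival-time lead of each microphone for a plane wave from look_dir
    leads = mic_positions @ look_dir / C
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / fs)
    spectra = np.fft.rfft(mic_signals, axis=1)
    # Delay each microphone by its lead so the wave from look_dir adds coherently
    aligned = spectra * np.exp(-2j * np.pi * freqs * leads[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=num_samples)

# Four horizontal beams at 0, 90, 180 and 270 degrees, one per loudspeaker channel
azimuths = np.deg2rad([0.0, 90.0, 180.0, 270.0])
look_dirs = np.stack([np.cos(azimuths), np.sin(azimuths), np.zeros(4)], axis=1)
# loudspeaker_feeds = [delay_and_sum(recording, positions, d, fs) for d in look_dirs]
```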

Quantifying Localization Potential using Interaural Transfer Function

The ability to generate appropriate auditory localization cues is an important requisite of spatial audio rendering technology that contributes to the plausibility of virtual sounds presented to a user, especially in XR applications (VR/AR/MR). Algorithmic approaches have been proposed to quantify such technologies’ ability to reproduce interaural level difference (ILD) cues through regression and statistical methods, providing a useful standardization and automation method to estimate the localization accuracy potential of a given spatial audio rendering engine. Previous approaches are extended to include interaural time difference (ITD) cues as part of the perceptual transform through the use of the interaural transfer function (ITF). The extended algorithmic approach of quantifying localization accuracy may provide an adequate substitute for critical listening studies as an evaluation method. However, this approach has not yet been validated through comparison with localization listening studies. In conclusion, listening tests are reviewed that could increase confidence in the presented methods of algorithmically quantifying the localization accuracy potential of a spatial audio rendering engine.
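As a rough illustration of the cues involved, the sketch below derives a broadband ILD, an ITD, and an interaural transfer function from a pair of left/right ear impulse responses. It is a minimal stand-in, not the paper's regression-based perceptual transform, and the function and variable names are assumptions.

```python
import numpy as np

def interaural_cues(hrir_left, hrir_right, fs):
    """Estimate ILD, ITD and ITF for one rendered source direction.

    hrir_left / hrir_right: impulse responses (or rendered test signals)
    at the left and right ears; fs: sample rate in Hz.
    """
    n = len(hrir_left)

    # Interaural transfer function: ratio of left-ear to right-ear spectra
    H_l = np.fft.rfft(hrir_left, n=2 * n)
    H_r = np.fft.rfft(hrir_right, n=2 * n)
    itf = H_l / (H_r + 1e-12)

    # Broadband ILD in dB (energy ratio between the two ears)
    ild_db = 10.0 * np.log10(np.sum(hrir_left**2) / (np.sum(hrir_right**2) + 1e-12))

    # ITD from the lag of the interaural cross-correlation maximum;
    # a positive value means the left-ear signal arrives later.
    xcorr = np.correlate(hrir_left, hrir_right, mode="full")
    lag = np.argmax(np.abs(xcorr)) - (n - 1)
    itd_s = lag / fs

    return ild_db, itd_s, itf
```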

MPEG-H Audio production workflows for a Next Generation Audio Experience in Broadcast, Streaming and Music

MPEG-H Audio is a Next Generation Audio (NGA) system offering a new audio experience for various applications: object-based immersive sound delivers a new degree of realism and artistic freedom for immersive music applications, such as the 360 Reality Audio music service. Advanced interactivity options enable improved personalization and accessibility. Solutions exist to create object-based features from legacy material, e.g., deep-learning-based dialogue enhancement. 'Universal delivery' allows for optimal rendering of a production across all kinds of devices and various ways of distribution, such as broadcast or streaming. All these new features are achieved by adding metadata to the audio, which is defined during production and offers content providers flexible control of interaction and rendering options. This introduces new possibilities, but also imposes new requirements on the production process. This paper provides an overview of production scenarios using MPEG-H Audio along with examples of state-of-the-art NGA production workflows. Special attention is given to immersive music and broadcast applications as well as accessibility features.

Open Access

Automatic Classification of Enclosure-Types for Electrodynamic Loudspeakers

This article deals with the automated classification of loudspeaker enclosures. The acoustic load of the enclosure is reflected in the electrical impedance of the loudspeaker and is hence detectable from the point of view of the power amplifier. In order to classify the enclosures of passive one-way speakers, an artificial neural network is trained with synthetic impedance spectra based on equivalent electrical circuit models. The generalization capability is validated with measured test sets of closed, vented, band-pass, and transmission-line enclosures. The resulting classification procedure works well on a synthetic test set. However, good generalization to the measured test data requires further investigation to achieve better separation between the different vented enclosure types.
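A minimal sketch of the kind of lumped-parameter model that could generate synthetic impedance spectra for the closed-box case is given below; the Thiele/Small-style parameter names and the single sealed-box topology are assumptions, and the paper's actual equivalent-circuit models and network architecture are not reproduced.

```python
import numpy as np

RHO = 1.18      # air density, kg/m^3
C_AIR = 345.0   # speed of sound, m/s

def closed_box_impedance(f, Re, Le, Bl, Mms, Cms, Rms, Sd, Vb):
    """Electrical input impedance of a driver in a sealed enclosure.

    Lumped-parameter model, radiation impedance neglected: the box volume Vb
    adds a mechanical stiffness acting on the cone alongside the suspension.
    """
    w = 2 * np.pi * np.asarray(f, dtype=float)
    Cmb = Vb / (RHO * C_AIR**2 * Sd**2)            # box air compliance on the mechanical side
    Ctot = Cms * Cmb / (Cms + Cmb)                 # stiffnesses add, so compliances combine reciprocally
    Zm = Rms + 1j * w * Mms + 1 / (1j * w * Ctot)  # mechanical impedance of the moving system
    return Re + 1j * w * Le + (Bl**2) / Zm         # reflected to the electrical terminals
```

Randomizing the driver and box parameters and sampling |Z(f)| on a fixed frequency grid would yield one labeled training example per draw; the other enclosure types would each require their own circuit topology.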
