AES E-Library Search Results

Search Results (Displaying 9 matches)

Preliminary Investigations Into Binaural Cue Enhancement for Height Perception in Transaural Systems

In this paper we investigate the perception of height cues in motion-tracked transaural reproduction. Ten subjects are asked to localise elevated sound sources in a two-loudspeaker, head-tracked transaural reproduction setup. Height-related HRTF features are tested using generic HRTFs from a KEMAR binaural mannequin, with spectral enhancement applied by modelling and then exaggerating the HRTF cues associated with height perception. Results illustrate the applicability of HRTF cue exaggeration for height perception in transaural systems with non-individualised HRTFs.
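
As a rough illustration of the spectral-enhancement idea described in this abstract, the Python sketch below scales the fine structure (peaks and notches) of an HRTF magnitude response away from a smoothed baseline. It is not the authors' method: the function name, smoothing width and exaggeration factor are illustrative assumptions, and the input is a toy spectrum standing in for a measured KEMAR response.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def exaggerate_hrtf_cues(hrtf_mag, exaggeration=2.0, smooth_bins=32):
    """Scale the fine spectral structure of an HRTF magnitude response.

    hrtf_mag: linear-magnitude spectrum over frequency bins (illustrative input).
    exaggeration: factor applied to the dB deviation from a smoothed baseline;
                  1.0 leaves the response unchanged.
    """
    mag_db = 20.0 * np.log10(np.maximum(hrtf_mag, 1e-12))
    baseline_db = uniform_filter1d(mag_db, size=smooth_bins)  # coarse spectral envelope
    detail_db = mag_db - baseline_db                          # peaks/notches carrying elevation cues
    return 10.0 ** ((baseline_db + exaggeration * detail_db) / 20.0)

# Toy spectrum standing in for a measured KEMAR HRTF magnitude response.
toy_hrtf = 1.0 + 0.3 * np.sin(np.linspace(0.0, 40.0, 512))
enhanced = exaggerate_hrtf_cues(toy_hrtf, exaggeration=2.0)
```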


An Algorithmic Approach to the Manipulation of B-Format Impulse Responses for Sound Source Rotation

In video games the source and receiver move constantly through the environment, so the acoustic conditions of an enclosed space need to be reproduced dynamically based on the source and receiver positions. This paper presents an impulse response manipulation algorithm for rotational movement of a sound source at a fixed position in space. A B-Format impulse response manipulation algorithm has been developed that allows the source to be rotated in azimuth. The method alters the amplitude of individual discrete reflections based on their point of origin and the directivity pattern of the source. The algorithm requires an initial analysis of the impulse responses to locate individual reflections, their arrival directions and their points of origin. This analysis uses intensity analysis to calculate the direction of arrival, circular variance and local maxima to locate individual reflections, and a ray tracer to retrace the reflections. The room acoustic parameters of the manipulated impulse responses were objectively analysed and compared against reference impulse responses. Initial testing showed that the current iteration of the algorithm requires further refinement before perceptual testing and real-time implementation.
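
The intensity analysis mentioned above is a standard way of estimating direction of arrival from a B-Format signal; a minimal sketch is given below. It is not the authors' implementation: the frame length and function name are assumptions, and the reflection detection (circular variance, local maxima) and ray tracing steps are not shown.

```python
import numpy as np

def bformat_doa(w, x, y, z, frame_len=256):
    """Frame-wise direction-of-arrival estimate from B-Format channels.

    w: pressure channel; x, y, z: velocity channels (equal-length 1-D arrays).
    Returns per-frame azimuth and elevation in radians, derived from the
    time-averaged intensity vector of each frame.
    """
    n_frames = len(w) // frame_len
    azimuth, elevation = np.zeros(n_frames), np.zeros(n_frames)
    for i in range(n_frames):
        s = slice(i * frame_len, (i + 1) * frame_len)
        ix = np.mean(w[s] * x[s])   # intensity vector components
        iy = np.mean(w[s] * y[s])
        iz = np.mean(w[s] * z[s])
        azimuth[i] = np.arctan2(iy, ix)
        elevation[i] = np.arctan2(iz, np.hypot(ix, iy))
    return azimuth, elevation
```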


Audio Commons: Bringing Creative Commons Audio Content to the Creative Industries

Significant amounts of user-generated audio content, such as sound effects, musical samples and music pieces, are uploaded to online repositories and made available under open licenses. Moreover, a constantly increasing amount of multimedia content, originally released under traditional licenses, is entering the public domain as its license expires. Nevertheless, the creative industries are not yet making much use of this content in their media productions. There is still a lack of familiarity with, and understanding of, the legal context of this open content, and there are also problems related to its accessibility. A large proportion of this content remains unreachable, either because it is not published online or because it is not well organised and annotated. In this paper we present the Audio Commons Initiative, which aims to promote the use of open audio content and to develop technologies that support the ecosystem of content repositories, production tools and users. These technologies should enable the reuse of this audio material and facilitate its integration into the production workflows used by the creative industries. This is a position paper in which we describe the core ideas behind the initiative and outline how we plan to address the challenges it poses.


Safe and Sound Drive: Sound Based Gamification of User Interfaces in Cars

The Safe and Sound Drive project concerns the design of an audio-only serious game for cars that helps drivers improve their eco-driving skills and lower fuel consumption, and encourages safe and environmentally friendly approaches to driving. Methods and procedures for designing sounds for audio-only user interfaces are reviewed and discussed, and design work and preliminary results from user studies of prototypes of the audio interface are presented. Contextual inquiry interviews with three participants, who used the audio interface in a car while driving on a test track, showed that opinions about beeps and audio signals vary among subjects. Music- and podcast-based content was generally well received. Altering media content, e.g. by actively adjusting BPM, volume, spectral balance, or the music mix, could provide a working mechanism for delivering game-related cues to the driver.
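
As a purely hypothetical sketch of the cue mechanism mentioned in the last sentence, the code below maps an eco-driving score onto playback gain and spectral balance. Nothing here comes from the project itself; the score range, gain law and filter settings are invented for illustration.

```python
import numpy as np
from scipy.signal import butter, lfilter

def apply_driving_feedback(audio, sample_rate, eco_score):
    """Hypothetical cue: map an eco-driving score in [0, 1] onto playback.

    A low score slightly attenuates the media and dulls its spectral balance;
    a high score leaves it untouched. All numbers are invented for illustration.
    """
    eco_score = float(np.clip(eco_score, 0.0, 1.0))
    gain = 0.5 + 0.5 * eco_score                       # roughly -6 dB .. 0 dB
    cutoff_hz = 2000.0 + 14000.0 * eco_score           # darker sound when driving poorly
    wn = min(cutoff_hz / (sample_rate / 2.0), 0.99)    # normalised cutoff for the filter
    b, a = butter(2, wn, btype="low")
    return gain * lfilter(b, a, audio)
```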


Lateral Listener Movement on the Horizontal Plane: Sensing Motion Through Binaural Simulation

An experiment was conducted to better understand first-person motion as perceived by a listener moving between two virtual sound sources in an auditory virtual environment (AVE). It was hypothesized that audio simulations using binaural cross-fading between two separate sound source locations could produce a sensation of motion for the listener equivalent to real-world motion. To test the hypothesis, a motion apparatus was designed to move a head and torso simulator (HATS) between two matched loudspeaker locations while recording various stimulus signals (music, pink noise, and speech) within a semi-anechoic chamber. Synchronized simulations were then created and referenced to video. In two separate, double-blind MUSHRA-style listening tests (with and without visual reference), 61 trained binaural listeners evaluated the sensation of motion among real and simulated conditions. Results showed that the listeners rated the simulation as presenting the greatest sensation of motion among all test conditions.
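
The binaural cross-fading hypothesis can be pictured with a minimal sketch such as the one below, which applies an equal-power fade between renders of the two source positions. This is an assumption about one plausible realisation, not the apparatus or processing used in the study.

```python
import numpy as np

def crossfade_binaural(render_a, render_b, duration_s, sample_rate):
    """Equal-power cross-fade between two binaural renders.

    render_a, render_b: arrays of shape (samples, 2), binaural renders of the
    source at the start and end positions; both must cover the fade duration.
    """
    n = int(duration_s * sample_rate)
    t = np.linspace(0.0, 1.0, n)[:, None]   # fade position, 0 -> 1
    gain_a = np.cos(0.5 * np.pi * t)        # equal-power fade-out
    gain_b = np.sin(0.5 * np.pi * t)        # equal-power fade-in
    return gain_a * render_a[:n] + gain_b * render_b[:n]
```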


Ear Shape Modeling for 3D Audio and Acoustic Virtual Reality: The Shape-Based Average HRTF

In this paper, we present a method for modeling human ear shapes and, in particular, for obtaining a generic non-individualized head-related transfer function (HRTF) based on the arithmetic mean of human ear shapes. The average HRTF is calculated from this average human ear shape with the boundary element method (BEM). The obtained average HRTF is evaluated in subjective experiments, revealing improved localization precision over an HRTF calculated from the shape of a mannequin head. Our approach does not require any measurement of the listener's HRTFs or selection of a fitting HRTF from a predefined database, and it can therefore be used in practice in any 3D audio or acoustic virtual reality application that makes use of HRTFs, such as virtual auditory displays or virtual 3D audio rendering in 3D gaming.
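
A minimal sketch of the shape-averaging step is shown below, assuming the ear meshes have already been brought into point-to-point correspondence; the registration and the subsequent BEM simulation, which carry most of the method's difficulty, are not shown.

```python
import numpy as np

def average_ear_shape(registered_meshes):
    """Arithmetic mean of ear shapes.

    registered_meshes: array of shape (n_subjects, n_vertices, 3) containing ear
    meshes already in point-to-point correspondence. The returned mean shape
    would then be passed to a BEM solver to compute the average HRTF.
    """
    return np.asarray(registered_meshes, dtype=float).mean(axis=0)
```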


Modal Synthesis of Weapon Sounds

Sound synthesis can be an effective tool in sound design. This paper presents an interactive model that synthesizes high-quality, impact-based combat weapon and gunfire sound effects. The model is computed using a procedural audio approach and was devised by extracting the frequency peaks of the sound source. Sound variations are then created in real time using additive synthesis and amplitude envelope generation. A subtractive method was implemented to recreate the signal envelope and residual background noise. Existing work is improved through the use of procedural audio methodologies and the application of audio effects. Finally, a perceptual evaluation was undertaken by comparing the synthesis engine to some of the analyzed recorded samples. In 4 out of 7 cases, the synthesis engine generated sounds that were indistinguishable, in terms of perceived realism, from recorded samples.
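
A toy sketch of the analysis-resynthesis idea, assuming frequency peaks have already been extracted from a recording, is given below: decaying sinusoids form the additive part and a low-passed noise burst stands in for the residual. It is illustrative only and not the paper's synthesis engine; the peak values in the example are invented.

```python
import numpy as np
from scipy.signal import butter, lfilter

def synthesize_impact(peaks, sample_rate=44100, duration_s=1.0, noise_level=0.05):
    """Toy impact synthesizer: decaying sinusoids plus a low-passed noise burst.

    peaks: list of (frequency_hz, amplitude, decay_per_second) tuples, assumed
    to come from peak-picking analysis of a recorded sample.
    """
    n = int(duration_s * sample_rate)
    t = np.arange(n) / sample_rate
    out = np.zeros(n)
    for freq, amp, decay in peaks:                    # additive, modal-style part
        out += amp * np.exp(-decay * t) * np.sin(2.0 * np.pi * freq * t)
    b, a = butter(2, 0.3)                             # low-passed noise residual
    out += noise_level * lfilter(b, a, np.random.randn(n)) * np.exp(-20.0 * t)
    return out

shot = synthesize_impact([(120.0, 1.0, 18.0), (310.0, 0.6, 25.0), (870.0, 0.3, 40.0)])
```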


Feature Based Impact Sound Synthesis of Rigid Bodies Using Linear Modal Analysis for Virtual Reality Applications

This paper investigates an approach for synthesizing the sounds of rigid-body interactions using linear modal synthesis (LMS). We propose a technique based on feature extraction from a single recorded audio clip to estimate perceptually satisfactory material parameters of virtual objects for real-time sound rendering. In this study, the significant features of the recording are extracted by computing a high-level power spectrogram based on short-time Fourier transform analysis with an optimal window function. Based on these reference features, the intrinsic material parameters are computed for interactive virtual objects in graphical environments. A tetrahedral finite element method (FEM) is employed to perform the eigenvalue decomposition during the modal analysis process. Residual compensation is also implemented to reduce the perceptual differences between the synthesized and real sounds and to include the non-harmonic components in the synthesized audio, in order to achieve perceptually high-quality sound. Furthermore, the parameters computed for an object of one geometry can be transferred to different geometries and shapes of the same material, while the synthesized sound varies as the shape of the object changes. The results of the estimated parameters as well as a comparison of the real and synthesized sounds are presented. Potential applications of our methodology include the synthesis of real-time contact sound events for games and interactive virtual graphical animations, and the provision of extended authoring capabilities.
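
The eigenvalue decomposition at the heart of linear modal analysis can be sketched as below, assuming dense FEM stiffness and mass matrices have already been assembled from the tetrahedral mesh; this is a generic illustration, not the authors' pipeline, and the function name is an assumption.

```python
import numpy as np
from scipy.linalg import eigh

def modal_frequencies(stiffness, mass, n_modes=10):
    """Lowest modal frequencies from the generalized eigenproblem K v = w^2 M v.

    stiffness, mass: dense, symmetric (n x n) FEM matrices assumed to be
    assembled from a tetrahedral mesh (assembly itself is not shown here).
    """
    eigvals, eigvecs = eigh(stiffness, mass)          # ascending eigenvalues w^2
    eigvals = np.clip(eigvals, 0.0, None)             # guard against tiny negative values
    freqs_hz = np.sqrt(eigvals) / (2.0 * np.pi)
    return freqs_hz[:n_modes], eigvecs[:, :n_modes]
```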


A Synthesis Model for Mammalian Vocalization Sound Effects

In this paper, potential synthesis techniques for mammalian vocalisation sound effects are analysed. Physically-inspired synthesis models are devised based on human speech synthesis techniques and research into the biology of a mammalian vocal system. The benefits and challenges of physically-inspired synthesis models are assessed alongside a signal-based alternative which recreates the perceptual aspects of the signal through subtractive synthesis. Nonlinear aspects of mammalian vocalisation are recreated using frequency modulation techniques, and Linear Prediction is used to map mammalian vocal tract configurations to waveguide filter coefficients. It is shown through the use of subjective listening tests that such models can be effective in reproducing harsh, spectrally dense sounds such as a lion’s roar, and can result in life-like articulation.
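
A rough sketch of the linear-prediction mapping and FM roughening described above is given below. The frame is assumed to be a windowed excerpt of a recorded vocalisation; the LPC order, source waveform and modulation settings are illustrative assumptions rather than the paper's model.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_coefficients(frame, order=16):
    """All-pole (LPC) model of a vocal-tract frame via the autocorrelation method."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), -r[1:order + 1])
    return np.concatenate(([1.0], a))                 # denominator of the all-pole filter

def roar_sketch(frame, sample_rate=44100, duration_s=1.0, f0=80.0):
    """Push an FM-roughened source through an LPC filter estimated from 'frame'."""
    n = int(duration_s * sample_rate)
    t = np.arange(n) / sample_rate
    # Frequency modulation supplies the nonlinear 'roughness' of the vocalisation.
    source = np.sign(np.sin(2.0 * np.pi * f0 * t + 3.0 * np.sin(2.0 * np.pi * 25.0 * t)))
    return lfilter([1.0], lpc_coefficients(frame), source)
```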
