AES E-Library

User-guided Rendering of Audio Objects Using an Interactive Genetic Algorithm

One of the advantages of object-based audio broadcast over traditional channel-based delivery is that it allows for the rendering of personalized content when delivered to the listeners. The methods by which personalization is achieved often require an in-depth understanding of the problem domain. This paper describes the design and evaluation of an interactive audio renderer, which is used to optimize an audio mix based on the feedback of the listener. A panel of 14 trained participants was recruited to try the system. When using the proposed system in a simple music mixing task, participants were able to create a range of mixes of audio objects comparable to those made using a conventional fader-based system. This suggests that the system is not an obstacle to the creation of desired content and does not impose noticeable limits on what content can be created. Evaluation using the System Usability Scale showed a low level of physical and mental burden, and so it is predicted that the system would be suitable for a variety of applications where physical interaction is to be kept low, such as an interface for users with vision and/or mobility impairments.
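
As context for the approach summarized above, the sketch below shows one generic way an interactive genetic algorithm can be driven by listener feedback: each candidate mix is a vector of per-object gains, and the listener's ratings act as the fitness function. All names, parameter values, and genetic operators here are illustrative assumptions, not the system evaluated in the paper.

# Illustrative sketch of an interactive genetic algorithm (IGA) for setting
# the gains of audio objects in a mix. NOT the authors' implementation:
# population size, mutation rate, operators, and the rating interface are
# assumptions made for this example.
import random
from typing import List

N_OBJECTS = 4        # number of audio objects in the mix (assumed)
POP_SIZE = 8         # candidate mixes rated per generation (assumed)
MUTATION_STD = 0.1   # standard deviation of gain perturbation (assumed)

def random_mix() -> List[float]:
    """A candidate mix: one linear gain in [0, 1] per audio object."""
    return [random.random() for _ in range(N_OBJECTS)]

def mutate(mix: List[float]) -> List[float]:
    """Perturb each gain slightly, clamping to [0, 1]."""
    return [min(1.0, max(0.0, g + random.gauss(0.0, MUTATION_STD))) for g in mix]

def crossover(a: List[float], b: List[float]) -> List[float]:
    """Uniform crossover: each object's gain is taken from one parent."""
    return [random.choice(pair) for pair in zip(a, b)]

def rate_mix(mix: List[float]) -> float:
    """Interactive step: a real system would render the mix, play it to the
    listener, and collect their preference. Stubbed here with a prompt."""
    print("Object gains:", ["%.2f" % g for g in mix])
    return float(input("Rate this mix from 0 to 10: "))

def evolve(generations: int = 5) -> List[float]:
    """Evolve a population of mixes using listener ratings as fitness."""
    population = [random_mix() for _ in range(POP_SIZE)]
    best = population[0]
    for _ in range(generations):
        scored = sorted(population, key=rate_mix, reverse=True)
        best, parents = scored[0], scored[: POP_SIZE // 2]
        # Elitism plus offspring bred from the top-rated half of the population.
        population = [best] + [
            mutate(crossover(*random.sample(parents, 2)))
            for _ in range(POP_SIZE - 1)
        ]
    return best

if __name__ == "__main__":
    print("Final mix gains:", evolve())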

Open Access

Authors:
Affiliation:
JAES Volume 67 Issue 7/8 pp. 522-530; July 2019
Publication Date: July 2019
Permalink: https://www.aes.org/e-lib/browse.cfm?elib=20490


E-Library Location:

DOI:

