Events

56th Conference Papers

PAPER SESSION: GENERATIVE MUSIC SYSTEMS
 
barelyMusician: An Adaptive Music Engine for Video Games
—Alper Gungormusler, Natasa Paterson-Paulberg, Mads Haahr, Trinity College Dublin, Dublin, Ireland
Aural feedback plays a crucial part in interactive entertainment, particularly in video games, in delivering the desired experience to the audience. It is, however, not yet fully explored in the industry, specifically in terms of the interactivity of musical elements. In this paper we present barelyMusician, an extensible adaptive music engine that offers a new set of features for interactive music generation and performance in highly interactive applications. barelyMusician is a comprehensive music composition tool, capable of generating and manipulating audio samples and musical parameters in real time in order to create smooth transitions between musical patterns that portray the varying emotional states and moods that may arise during gameplay. The paper presents the underlying approach, features, and user interface, as well as a preliminary evaluation through demonstrators.
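As a rough illustration of the kind of smooth transition the abstract describes, the sketch below interpolates a few musical parameters between two mood presets. The preset names, parameter choices, and blending function are assumptions for illustration only, not the barelyMusician API.

```python
# Illustrative sketch (not the barelyMusician API): smoothly interpolating
# musical parameters between two hypothetical mood presets.

from dataclasses import dataclass

@dataclass
class MoodPreset:
    tempo_bpm: float       # beats per minute
    note_density: float    # notes per beat
    loudness: float        # linear gain, 0..1

CALM = MoodPreset(tempo_bpm=80.0, note_density=0.5, loudness=0.4)
TENSE = MoodPreset(tempo_bpm=140.0, note_density=2.0, loudness=0.9)

def blend(a: MoodPreset, b: MoodPreset, t: float) -> MoodPreset:
    """Linear interpolation between presets; t=0 gives a, t=1 gives b."""
    t = max(0.0, min(1.0, t))
    def mix(x, y):
        return x + (y - x) * t
    return MoodPreset(mix(a.tempo_bpm, b.tempo_bpm),
                      mix(a.note_density, b.note_density),
                      mix(a.loudness, b.loudness))

# Example: sample the transition at each beat of a four-beat crossfade.
for beat in range(5):
    print(blend(CALM, TENSE, beat / 4))
```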
 
Dynamic Game Soundtrack Generation in Response to a Continuously Varying Emotional Trajectory
—Duncan Williams*, Alexis Kirke*, Joel Eaton*, Eduardo Miranda*, Ian Daly**, James Hallowell**, Etienne Roesch**, Faustina Hwang**, Slawomir Nasuto**
*Plymouth University, Plymouth, UK
**University of Reading, Reading, UK
Dynamic soundtrack creation presents various practical and aesthetic challenges to composers working with games. This paper presents an implementation of a system that addresses some of these challenges with an affectively driven music generation algorithm based on a second-order Markov model. The system can respond in real time to emotional trajectories derived from two dimensions of affect in the circumplex model (arousal and valence), which are mapped to five musical parameters. A transition matrix is employed to vary the generated output in continuous response to the affective state intended by the gameplay.
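For readers unfamiliar with the general technique, the following is a minimal sketch of a second-order Markov melody generator whose transition weights are biased by an arousal value. The pitch set, weighting scheme, and arousal mapping here are assumptions for illustration; they are not the authors' trained transition matrix or their five-parameter mapping.

```python
# Minimal sketch of a second-order Markov melody generator modulated by
# arousal (hand-crafted weights; not the authors' model).

import random

PITCHES = [60, 62, 64, 65, 67]  # MIDI note numbers (assumed pitch set)

def transition_weights(prev2, prev1, arousal):
    """Weight candidate pitches from the two previous notes: favour small
    steps at low arousal and larger leaps at high arousal (0..1)."""
    weights = []
    for p in PITCHES:
        leap = abs(p - prev1)
        contour = abs((p - prev1) - (prev1 - prev2))  # second-order term
        weights.append(1.0 + arousal * leap - (1.0 - arousal) * 0.2 * contour)
    return [max(w, 0.01) for w in weights]

def generate(length=16, arousal=0.5, seed=(60, 62)):
    prev2, prev1 = seed
    melody = list(seed)
    for _ in range(length - 2):
        w = transition_weights(prev2, prev1, arousal)
        nxt = random.choices(PITCHES, weights=w, k=1)[0]
        melody.append(nxt)
        prev2, prev1 = prev1, nxt
    return melody

print(generate(arousal=0.2))  # low arousal: mostly stepwise motion
print(generate(arousal=0.9))  # high arousal: wider leaps
```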
 
Veemix: Integration of Musical User-Generated Content in Games
—Karen Collins*, Alexander Hodge*, Ruth Dockwray**, Bill Kapralos***
*University of Waterloo, Waterloo, ON, Canada
**University of Chester, Chester, UK
***University of Ontario Institute of Technology, Oshawa, ON, Canada
Musical user-generated content (UGC) in games is usually disconnected from gameplay, running in the background rather than being integrated into the game. As a result, players may lose their emotional connection to the game’s narrative or action, disrupting immersion and reducing enjoyment. Here we describe a system for integrating user playlists, what we term musical UGC, into games. The system, Veemix, allows for the simultaneous integration of music into games using keyword tags and social ranking, while also collecting and storing semantic data about the music that can be used for music information retrieval purposes. We outline two iterations of the system in the form of a Unity plugin for iOS that uses streamed and on-device music.
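The general mechanism of selecting a user track by keyword tags and social ranking could look something like the hypothetical sketch below. The data model, field names, and tie-breaking rule are illustrative assumptions, not the Veemix plugin's API.

```python
# Hypothetical sketch of keyword-tag track selection with social ranking
# (data model and field names are illustrative, not the Veemix API).

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Track:
    title: str
    tags: set = field(default_factory=set)   # user-supplied keyword tags
    rank: float = 0.0                         # aggregate social ranking

LIBRARY = [
    Track("Night Drive", {"tense", "chase"}, rank=4.2),
    Track("Sunrise", {"calm", "menu"}, rank=3.8),
    Track("Boss Rush", {"tense", "boss"}, rank=4.9),
]

def pick_track(event_tags: set) -> Optional[Track]:
    """Return the best match: most overlapping tags, then highest rank."""
    scored = [(len(t.tags & event_tags), t.rank, t) for t in LIBRARY]
    scored = [s for s in scored if s[0] > 0]
    return max(scored, key=lambda s: (s[0], s[1]))[2] if scored else None

print(pick_track({"tense"}).title)  # -> "Boss Rush"
```
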
PAPER SESSION: GAME MUSIC SYSTEMS
The Generative Music of SIM Cell
—Leonard Paul, School of Video Game Audio, Vancouver, BC, Canada
The entire musical score for the educational game Sim Cell was produced using a generative music system built in the open-source visual programming language Pure Data (Pd). When combined with synthesis, generative music allows for the creation of rich and highly adaptive compositions that require only a small amount of storage space. Synthesis also allows the audio to be rendered at a flexible resolution, permitting alternative rendering methods in the future. The details of the system are explored and described to aid others wishing to use generative music systems in their own titles.
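The game's system is a Pure Data patch; the short Python sketch below only illustrates the storage argument the abstract makes, namely that a generative system stores rules (a scale and a probability) rather than minutes of rendered audio. The scale, probability, and bar length are arbitrary assumptions.

```python
# Sketch of the storage-light idea behind generative scoring: store rules,
# not audio. (Illustrative only; the actual Sim Cell system is a Pd patch.)

import random

SCALE = [0, 3, 5, 7, 10]      # minor-pentatonic intervals (semitones)
STEP_PROBABILITY = 0.7        # chance that any sixteenth-note step sounds

def generate_bar(root=48, steps=16):
    """Return a MIDI note (or None for a rest) for each step of one bar."""
    return [root + random.choice(SCALE)
            if random.random() < STEP_PROBABILITY else None
            for _ in range(steps)]

for _ in range(2):
    print(generate_bar())
```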
 
Extreme Ninjas Use Windows, Not Doors: Addressing Video Game Fidelity Through Ludo-Narrative Music in the Stealth Genre
—Richard Stevens, Dave Raybould, Danny McDermott, Leeds Beckett University, Leeds, UK
A significant factor in the aesthetics of video games is the need to compensate for sensory information that would be present in the physical world but is absent, or of poor fidelity, in the game. Although dialogue, sound, and music do play a ludic role by providing information to compensate for this, in general there remains an over-reliance on visual UI (user interface) elements that must fight for attention within an already overwhelmed sensory channel. Through a methodical analysis of the functions of audio in the stealth genre, this paper identifies the limitations of current binary threshold approaches to audio feedback and puts forward music as a potential vehicle for providing richer data to the player. Music is accepted as a continuous audio presence and can provide information that helps to prevent player failure, unlike sound effects or dialogue, which often serve simply as a notification of failure.
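The contrast between a binary threshold and a continuous musical mapping can be sketched as follows. The parameter names and mappings are assumptions chosen to illustrate the distinction, not the authors' implementation.

```python
# Illustrative sketch (not the authors' system): a binary threshold alert
# versus a continuous mapping of enemy awareness onto layered music.

def binary_feedback(awareness: float) -> str:
    """Typical threshold approach: silence until detection, then a stinger."""
    return "ALERT_STINGER" if awareness >= 1.0 else "SILENCE"

def musical_feedback(awareness: float) -> dict:
    """Continuous approach: awareness (0..1) scales musical parameters,
    warning the player before failure rather than announcing it."""
    a = max(0.0, min(1.0, awareness))
    return {
        "percussion_gain": a,        # drums fade in as guards grow suspicious
        "tempo_bpm": 90 + 60 * a,    # pulse quickens toward detection
        "dissonance": a ** 2,        # harmony darkens near the threshold
    }

for awareness in (0.0, 0.4, 0.8, 1.0):
    print(awareness, binary_feedback(awareness), musical_feedback(awareness))
```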
 
Implementation and Evaluation of Dynamic Level of Audio Detail
—Gabriel Durr, Lys Peixoto, Marcelo Souza, Raisa Tanoue, Joshua D. Reiss, Queen Mary University of London, London, UK
Sound synthesis creates a desired sound using software or algorithms, analyzing and implementing the digital signal processing involved in producing the sound rather than recording it. However, synthesis techniques are often too computationally complex for use in many scenarios. This project aims to implement and assess sound synthesis models with a dynamic Level of Audio Detail (LOAD). The manipulations consist of modifying existing models to achieve a more or less complex implementation while still retaining the perceptual characteristics of the sound. The models implemented consist of sine waves, noise sources, and filters that reproduce the desired sound, which can then be enhanced or reduced to provide dynamic LOAD. These different levels were then analyzed in the time-frequency domain, and computational time and floating-point operations were assessed as a function of the LOAD.
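The general idea of scaling synthesis complexity can be sketched with an additive sine-plus-noise model whose number of partials follows a detail parameter, so the per-sample operation count drops with the detail level. The specific model and scaling rule are assumptions, not the authors' evaluated models.

```python
# Sketch of the dynamic-LOAD idea (illustrative, not the paper's models):
# an additive sine-plus-noise tone whose partial count scales with detail,
# trading fidelity for fewer operations per sample.

import math
import random

def synthesize(f0=220.0, duration=0.1, sr=44100, detail=1.0, max_partials=16):
    """Render a tone; detail in (0, 1] sets how many partials are summed."""
    n_partials = max(1, int(round(detail * max_partials)))
    samples = []
    for n in range(int(duration * sr)):
        t = n / sr
        s = sum(math.sin(2 * math.pi * f0 * (k + 1) * t) / (k + 1)
                for k in range(n_partials))
        s += 0.01 * (random.random() - 0.5)   # small noise component
        samples.append(s)
    return samples, n_partials

for detail in (1.0, 0.5, 0.125):
    _, used = synthesize(detail=detail)
    print(f"detail={detail}: {used} partials per sample")
```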
 
 
PAPER SESSION: SPATIAL AUDIO IN GAMES
Challenges of the Headphone Mix in Games
—Aristotel Digenis, FreeStyleGames (Activision), Leamington Spa, UK
Accurate spatial audio representation is vital in video games. A significant number of gamers experience audio through headphones, and this number is likely to grow. The majority of games do not offer headphone-specific mixes, although the recent popularity of surround headphones and headphone virtualization features on AV receivers has gone some way toward addressing this. With all the spatial information (including vertical cues) available to games, game audio engines are well placed to provide a better binaural mix. However, such a binaural mix can conflict with the player’s use of external audio equipment, which may not be aware that the signal it receives is already virtualized. This article aims to raise awareness of this risk.
 
Efficient Compact Representation of Head Related Transfer Functions for Portable Game Audio
—Joseph Sinker, Jamie Angus, University of Salford, Salford, UK
These days many games are played using portable devices and headphones, on which spatial binaural audio can be conveniently presented. One way of converting from conventional loudspeaker formats to binaural format is through the use of Head Related Transfer Functions (HRTFs), but head-tracking is also necessary to obtain a satisfactory externalization of the simulated sound field. Typically a large HRTF dataset is required in order to provide enough measurements for a continuous auditory space to be achieved through simple linear interpolation. This paper describes the use of alternative representations of an HRTF dataset using orthogonal basis functions and further parametric techniques often associated with speech processing. This allows both convenient and unambiguous interpolation and a significant reduction in the number of stored measurements required to generate a continuous auditory space. It is possible that these techniques may also be useful in developing efficient schemes of custom HRTF capture.
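One common orthogonal-basis approach is principal component analysis of the measured responses, so each direction is stored as a few basis weights and interpolation acts on those weights. The sketch below shows that idea with random placeholder data; the dataset size, number of retained components, and the choice of PCA are assumptions, and the paper's speech-style parametric techniques are not shown.

```python
# Sketch of an orthogonal-basis HRTF representation via PCA (SVD).
# The random "HRTF" data is a stand-in for a real measured dataset.

import numpy as np

rng = np.random.default_rng(0)
hrtfs = rng.standard_normal((72, 128))   # 72 directions x 128-tap responses

# Decompose the dataset into a small number of orthogonal basis functions.
mean = hrtfs.mean(axis=0)
u, s, vt = np.linalg.svd(hrtfs - mean, full_matrices=False)
k = 8                                    # retained basis functions
basis = vt[:k]                           # (k, 128) orthogonal components
weights = (hrtfs - mean) @ basis.T       # (72, k) per-direction weights

def reconstruct(w):
    """Rebuild a filter from its k basis weights."""
    return mean + w @ basis

# Interpolation now operates on k weights instead of full measurements.
interp = 0.5 * (weights[0] + weights[1])
print(reconstruct(interp).shape)         # (128,) interpolated impulse response
```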
 
Fear and Localization: Emotional Fine-Tuning Utilizing Multiple Source Directions
—Samuel Hughes, Gavin Kearney, University of York, York, UK
It has been suggested in the literature that the degree to which an auditory presentation can create an emotional response is influenced not only by manipulation of the temporal and spectral features of the sound but also by where the sound is played from in terms of its spatial location. In this paper we focus specifically on fear and how the emotional response changes depending on where a sound is played from within a 3D environment. It is shown through subjective analysis that it is possible to enhance the perception of fear based on the direction of the source presentation, although such presentations must be appropriately contextualized. The study is intended to inform the implementation of emotional fine-tuning in video games and interactive media.
 
PAPER SESSION: EDUCATION AND TRAINING
Designing Next-Gen Academic Curricula for Game-Centric Procedural Audio and Music
—Rob Hamilton, Stanford University, CCRMA, Stanford, CA, USA
The use of procedural technologies for the generation and control of real-time game music and audio systems has recently become both more feasible and more prevalent. Increased industry exploration and adoption of real-time audio engines such as libpd, coupled with the maturity of abstract audio languages such as FAUST, is driving new interactive musical possibilities. As such, a distinct need is emerging for educators to codify next-generation techniques and tools into coherent curricula in early support of future generations of sound designers and composers. This paper details a multi-tiered set of technologies and workflows appropriate for the introduction and exploration of beginner, intermediate, and advanced procedural audio and music techniques. Specific systems and workflows for rapid game-audio prototyping, real-time generative audio and music systems, and performance optimization through low-level code generation are discussed.
 