Demonstrations


Late-breaking demo session

Late-breaking demos will be accepted on the basis of abstract proposals, to be submitted via email to [email protected] by June 16th, 2017. As with regular papers, your abstract should contain at most 120 words. Accepted abstracts will later be published on the conference website. Demo presenters will have ample opportunity to show their research via technical demonstrations, posters, or both.

The late-breaking demo contributions that we received are listed below. Short illustrative code sketches for several of the described methods follow the listing.

Authors: Brecht De Man
Title: The Mix Evaluation Browser
Abstract: Mixing multitrack music is a complex, multidimensional, and ultimately very subjective process. Concerned with understanding its principles, the analysis of human mixes is an increasingly popular field of study. However, research in this area is markedly challenged by a lack of relevant, high-quality data. Built on previous work addressing this issue, the Mix Evaluation Browser is an open tool in the form of a JavaScript application through which several mixes of a set of songs can be accessed. It visualises both numerical and text evaluations of the mixes, with tags denoting the instrument, the type of processing, and negative or positive intent. The available audio, hosted on the Open Multitrack Testbed, can be auditioned and downloaded from within the interface.

Authors: Alexander Adami, Jürgen Herre
Title: Semantic Decomposition of Applause-Like Signals and Applications
Abstract: Applause sounds result from the superposition of many individual people clapping their hands. Applause can be considered a mixture of individually perceivable transient foreground events and a more noise-like background. Due to the high number of densely packed transient events, applause and comparable sounds (such as rain drops or many galloping horses) form a special signal class that often requires dedicated processing to cope with its impulsiveness. This demonstration presents a method for semantically decomposing applause sounds into a foreground component corresponding to individually perceivable transient events and a residual, more noise-like background component. A selection of applications of this generic decomposition is discussed, including the measurement of applause characteristics, blind upmixing of applause signals, and applause restoration / perceptual coding enhancement. Sound examples illustrate the capabilities of the scheme.

Authors: Anna M. Kruspe, Jakob Abeßer
Title: Automatic lyrics alignment and retrieval from singing audio
Abstract: Automatic alignment between vocal recordings and lyrics can be applied in karaoke systems and for indexing large datasets based on lyrical content and specific keywords. However, existing techniques from speech recognition cannot simply be transferred to singing, which exhibits a larger pitch range, higher variation of pronunciation and phoneme durations, and additional background music. The proposed system aligns an audio recording with given song lyrics using Dynamic Time Warping (DTW) between a phoneme posteriorgram extracted from the audio recording and a binary phoneme template extracted from the lyrics. In a lyrics retrieval scenario, a matching score obtained from the DTW alignment is used to identify the best-matching lyrics within a given selection.

Authors: Nicholas Jillings
Title: Automatic channel routing using musical instrument linked data
Abstract: Audio production encompasses more than just mixing a series of input channels. Most sessions involve tagging tracks, applying audio effects, and configuring routing patterns to build sub-mixes. Grouping tracks gives the engineer more control over a set of instruments and allows the group to be processed simultaneously using audio effects. Which tracks should be grouped together is not always clear, as this involves subjective decisions by the engineer in response to a number of external cues, such as the instrument or the musical content. This study introduces a novel way to automatically route a set of tracks through groups and subgroups in the mix. It uses openly available linked databases to infer the relationships between instrument objects in a DAW session, utilising graph theory and hierarchical clustering to obtain the groups. This can be used in any intelligent production environment to configure the session's routing parameters.

Authors: Christof Weiß
Title: Computational Analysis of Harmonic Structures - A Preliminary Study of Richard Wagner's "Ring des Nibelungen"
Abstract: We present preliminary results of a research project dealing with audio-based methods for coarse-scale harmony analysis. We apply these methods to Richard Wagner's tetralogy "Der Ring des Nibelungen," comprising up to 15 hours of music. Many studies treat leitmotifs as the musical surface of the "Ring," whereas little is known about its large-scale tonal conception. To investigate such structures, we extract chroma features smoothed over windows of twelve measures in length, using manual measure annotations. Via template matching, we compute likelihoods for the twelve diatonic scales, which we visualize over the circle of fifths. We visually compare these analyses for three different interpretations using a specific color scheme. The resulting cross-version plots provide an overview of the "Ring's" tonal conception.
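
As a rough, generic illustration of the foreground/background split described in the applause demo (Adami and Herre), and not the authors' semantic decomposition method, the Python sketch below separates transient, clap-like events from a noise-like background by median filtering an STFT magnitude; the filter lengths and the Wiener-style soft mask are illustrative choices.

    # Generic foreground/background split for applause-like signals.
    # NOTE: illustrative median-filter approach, not the method presented
    # in the demonstration above.
    import numpy as np
    from scipy.ndimage import median_filter
    from scipy.signal import istft, stft

    def split_applause(x, fs, n_fft=1024):
        f, t, X = stft(x, fs, nperseg=n_fft)
        mag = np.abs(X)
        # Background estimate: smoothing along time suppresses isolated claps.
        bg = median_filter(mag, size=(1, 17))
        # Foreground estimate: smoothing along frequency keeps broadband transients.
        fg = median_filter(mag, size=(17, 1))
        mask = fg**2 / (fg**2 + bg**2 + 1e-12)   # Wiener-style soft mask
        _, foreground = istft(X * mask, fs, nperseg=n_fft)
        _, background = istft(X * (1.0 - mask), fs, nperseg=n_fft)
        return foreground, background

    # Toy input: low-level noise with a few clicky "claps".
    fs = 16000
    x = 0.05 * np.random.randn(fs)
    x[::4000] += 1.0
    foreground, background = split_applause(x, fs)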
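
The lyrics-alignment demo (Kruspe and Abeßer) rests on DTW between a phoneme posteriorgram and a binary phoneme template. Below is a minimal sketch of that alignment step, assuming a posteriorgram of shape (frames, phoneme classes) and the phoneme index sequence of the lyrics are already available; the local cost, step pattern, and toy data are assumptions for illustration.

    # Minimal DTW alignment between a phoneme posteriorgram and a
    # lyrics-derived phoneme sequence (illustrative; see note above).
    import numpy as np

    def align_dtw(posteriorgram, phoneme_sequence):
        """Return a monotonic (frame, phoneme index) path and its total cost."""
        n_frames = posteriorgram.shape[0]
        n_states = len(phoneme_sequence)
        # Local cost: low where the frame's posterior supports the template phoneme.
        local = 1.0 - posteriorgram[:, phoneme_sequence]   # (n_frames, n_states)

        acc = np.full((n_frames, n_states), np.inf)
        acc[0, 0] = local[0, 0]
        for t in range(1, n_frames):
            for s in range(n_states):
                # Allowed steps: stay on the same phoneme or advance by one.
                best_prev = acc[t - 1, s]
                if s > 0:
                    best_prev = min(best_prev, acc[t - 1, s - 1])
                acc[t, s] = local[t, s] + best_prev

        # Backtrack from the last frame and last phoneme.
        path, s = [], n_states - 1
        for t in range(n_frames - 1, -1, -1):
            path.append((t, s))
            if t > 0 and s > 0 and acc[t - 1, s - 1] <= acc[t - 1, s]:
                s -= 1
        return path[::-1], acc[-1, -1]

    # Toy example: 6 frames, 3 phoneme classes, lyrics template [0, 1, 2].
    rng = np.random.default_rng(0)
    posteriorgram = rng.dirichlet(np.ones(3), size=6)
    path, cost = align_dtw(posteriorgram, [0, 1, 2])
    print(path, round(float(cost), 3))

In the retrieval scenario described in the abstract, a score derived from this accumulated cost would be compared across candidate lyrics to pick the best match.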
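
The channel-routing demo (Jillings) groups tracks using relationships inferred from linked open data together with hierarchical clustering. The sketch below shows only the clustering step, with a hand-crafted distance matrix standing in for the semantic distances derived from linked data; the track names, distances, and clustering threshold are purely illustrative.

    # Hierarchical clustering of tracks into routing groups (illustrative; the
    # distance matrix stands in for distances inferred from linked data).
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    tracks = ["kick", "snare", "hi-hat", "bass", "lead vocal", "backing vocal"]
    # Symmetric distances: small values mean semantically related instruments.
    D = np.array([
        [0.0, 0.2, 0.3, 0.6, 0.9, 0.9],
        [0.2, 0.0, 0.2, 0.6, 0.9, 0.9],
        [0.3, 0.2, 0.0, 0.7, 0.9, 0.9],
        [0.6, 0.6, 0.7, 0.0, 0.8, 0.8],
        [0.9, 0.9, 0.9, 0.8, 0.0, 0.1],
        [0.9, 0.9, 0.9, 0.8, 0.1, 0.0],
    ])
    Z = linkage(squareform(D), method="average")
    groups = fcluster(Z, t=0.5, criterion="distance")
    for track, group in zip(tracks, groups):
        print(f"bus {group}: {track}")

Cutting the resulting dendrogram at more than one threshold is one way to obtain the nested groups and subgroups mentioned in the abstract.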
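
The Wagner study (Weiß) computes likelihoods for the twelve diatonic scales from smoothed chroma features via template matching. The following sketch shows one plausible version of that matching step, assuming a 12-dimensional chroma vector is already available; the binary scale templates, cosine similarity, and toy data are assumptions rather than the author's exact formulation.

    # Rank the twelve diatonic scales against a chroma vector via template
    # matching (illustrative; see note above).
    import numpy as np

    # Binary template of the C major diatonic scale (pitch classes C D E F G A B).
    C_MAJOR = np.array([1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1], dtype=float)

    def diatonic_scale_scores(chroma):
        """Cosine similarity between a 12-dim chroma vector and each of the
        twelve diatonic scale templates (one per transposition)."""
        templates = np.stack([np.roll(C_MAJOR, k) for k in range(12)])
        chroma = chroma / (np.linalg.norm(chroma) + 1e-12)
        templates = templates / np.linalg.norm(templates, axis=1, keepdims=True)
        return templates @ chroma            # shape (12,), one score per scale

    # Toy chroma vector emphasising the pitch classes of G major.
    chroma = np.array([0.5, 0.0, 0.9, 0.0, 0.7, 0.3, 0.6, 1.0, 0.0, 0.8, 0.0, 0.7])
    scores = diatonic_scale_scores(chroma)
    # Reorder the scale scores along the circle of fifths for visualisation.
    fifths_order = [(7 * k) % 12 for k in range(12)]
    print(np.round(scores[fifths_order], 2))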


Poster sessions

Authors of papers allocated to a poster session are encouraged to demonstrate their work (if applicable). Demos associated with posters will be available during the poster session only.
