Detailed timetable for the tutorial day
Please refer to the following table for the tutorial day.
Please note that the programme is subject to change without notice!
For the main conference programme visit this page.
Sunday, January 26: Tutorial day

| Time | Presenter(s) | Topic |
| --- | --- | --- |
| 10:00-10:30 | | REGISTRATION and coffee |
| 10:30-11:00 | Chris Cannam and Mark Plumbley | Tutorial session 1: Reusable Software: Introduction and overview |
| 11:30-11:50 | | COFFEE BREAK and discussion of the forthcoming exercise |
| 11:50-12:30 | | Tutorial session 1: Tutorial exercise |
| 12:30-13:00 | Chris Cannam | Tutorial session 1: Reusable Software: Review and wrap-up |
| 14:00-14:30 | Michael Terrell | Tutorial session 2: Semantic audio and music production: Modelling music production sound features |
| 15:30-16:30 | Michael Terrell | Tutorial session 2: Semantic audio and music production: Mapping sound features to mixing controls; putting it all together with a mixing system demonstration |
Tutorials will be held at the MAT Lab, Engineering Building of Queen Mary University of London, Mile End Campus (please look for the glass door entrance of the Engineering Building on Mile End Road). Please see the venues page for travel information and the campus map for more information about how to find the school.
Tutorial 1: "Reusable software and reproducibility in music research"
by Chris Cannam, Mark Plumbley, SoundSoftware.ac.uk
The need to develop and reuse software to process data is almost universal in audio and music research. Many methods are developed in tandem with software implementations, and some of them are too complex or too fundamentally software-based to be reproduced readily from a published paper alone. For this reason, it is helpful for sustainable research to have software and data published along with papers. In practice, non-publication of code and data is still the norm and research software is commonly lost in the years following publication of the associated methods.
The tutorial will rapidly cover the use of version control software, code hosting facilities, sharing and review of code within a team, aspects of testing and provenance, and software licensing for publication. The focus will be on using widely available free tools such as Python, Mercurial and BitBucket. This tutorial will be of immediate practical interest to researchers within the community, and will also be highly relevant to research supervisors and research group leaders with an interest in policy and guidance.
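To illustrate the "aspects of testing" mentioned above, a minimal sketch of a unit test for a small piece of research code might look like the following. The `rms()` function and its expected values are illustrative assumptions for this example, not material from the tutorial itself:

```python
# A tiny audio feature function plus pytest-style tests.
# Run the tests with:  pytest this_file.py
import math

def rms(samples):
    """Root-mean-square level of a sequence of audio samples."""
    if not samples:
        raise ValueError("rms() requires at least one sample")
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def test_constant_signal():
    # A constant signal's RMS equals its absolute amplitude.
    assert math.isclose(rms([0.5] * 100), 0.5)

def test_square_wave():
    # An alternating +1/-1 signal has an RMS of exactly 1.
    assert math.isclose(rms([1.0, -1.0] * 50), 1.0)
```

Even small tests like these document a function's intended behaviour, which helps when the code is picked up again years after the associated paper is published.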
Tutorial 2: "Semantic Audio and Music Production"
by Michael Terrell, Lasse Vetter, Mix Elephant, London, UK
The range of applications for semantic audio technology has grown rapidly in recent years, particularly in the fields of music production and consumption. Metadata is used to tag and categorise content (e.g. by genre or emotion), to select music and thereby generate playlists automatically, and more recently to perform music production tasks, i.e. to mix multitrack music projects. What distinguishes music production applications, and mixing in particular, is that we are not simply extracting metadata from the audio, but processing the audio so that the metadata features of the mix meet predefined objectives. There are two key aspects to this work: i) the development of metadata “features” that describe the objectives of music production tasks, and ii) a means of manipulating the control parameters of the mixing device to realise these objectives. This tutorial will provide an overview of the state of the art in this field, which is more commonly referred to as “automatic mixing”. It will discuss current features and models used to describe music production tasks, and will give an overview of psychophysical methods that can be used to generate new features. Furthermore, via practical examples, it will provide an introduction to numerical optimisation, a critical tool when mapping features to mixing controls.
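The feature-to-control mapping described above can be sketched in miniature. In this hypothetical example, the feature is the RMS balance between a gain-scaled vocal and a backing track, and a simple bisection search finds the fader gain that hits a target balance; the feature, signals, and search routine are simplified assumptions for illustration, not the presenters' actual system:

```python
# Sketch: numerically solving for a mixing control (a gain) so that a
# feature of the mix (RMS balance between two tracks) meets a target.
import math

def rms(samples):
    """Root-mean-square level of a sequence of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def balance_feature(vocal, backing, gain):
    """Feature: RMS ratio of the gain-scaled vocal to the backing track."""
    return rms([gain * s for s in vocal]) / rms(backing)

def solve_gain(vocal, backing, target, lo=0.0, hi=4.0, iters=60):
    """Bisection search: the feature increases monotonically with gain,
    so we can home in on the gain that achieves the target balance."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if balance_feature(vocal, backing, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Toy "tracks": a quiet vocal and a louder backing signal (1 s at 8 kHz).
vocal = [0.25 * math.sin(2 * math.pi * 220 * n / 8000) for n in range(8000)]
backing = [0.50 * math.sin(2 * math.pi * 110 * n / 8000) for n in range(8000)]

# Objective: make the vocal as loud as the backing (balance target = 1.0).
gain = solve_gain(vocal, backing, target=1.0)
```

Real automatic-mixing systems optimise many interacting controls against perceptual features at once, but the core loop is the same: evaluate the feature on the processed audio, compare it to the objective, and adjust the control parameters.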