Main Conference Programme
The three (+1) day conference programme (27-29 January, 2014) includes oral and poster sessions held at the Barbican Centre, in a convenient central London location, as well as social events and a technical tour of the BBC in London (participant numbers will be limited and subject to registration on a first-come, first-served basis). An additional tutorial day (26 January, 2014) will be held at Queen Mary University of London and will be free to attend for all delegates.
Keynote and Invited speakers
Meinard Müller (International Audio Laboratories Erlangen, Germany)
Gaël Richard (TELECOM ParisTech and CNRS, France)
Gerhard Widmer (Department of Computational Perception, Johannes Kepler University, Linz, Austria)
Tuomas Eerola (Department of Music, Durham University, UK)
Yves Raimond (BBC R&D, London, UK)
Xavier Serra (Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain)
Jay LeBoeuf (Strategic Technology Director, iZotope Inc.)
An up-to-date list of the invited talks, and further details of the main conference programme, will be published here as they become available.
Two special sessions are currently being organised on "Semantic Audio Organization and Retrieval – Integrating User and Audio Information" chaired by Jan Larsen and Mark Plumbley, and "Intelligent Audio Production" chaired by Josh Reiss and Bryan Pardo. Special session papers are fully peer-reviewed, but the submission deadline will be after the regular paper deadline.
Semantic Audio Organization and Retrieval – Integrating User and Audio Information
Higher-level semantic representations are essential for the organization, search, retrieval, and discovery of audio and music. To mitigate semantic ambiguity and ensure an actionable representation, an integrated framework must be formulated and implemented. The framework should ideally consider all available information concerning the content and context of audio objects, as well as information about users' context, demographics, and usage activity, and content descriptions such as tags or scores. The objective of the special session is to address current modelling, retrieval and interfacing trends with experts in the field. If you're interested in submitting a paper to this session, please contact Jan Larsen, and please see the invitation.
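One way to picture such an integrated framework is a retrieval score that fuses audio-content similarity with user-supplied context. The sketch below is purely illustrative and not drawn from the session itself; the function names, feature vectors, and weighting are hypothetical assumptions.

```python
import math

def cosine(a, b):
    # Cosine similarity between two audio feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def jaccard(tags_a, tags_b):
    # Overlap between two sets of user tags.
    union = tags_a | tags_b
    return len(tags_a & tags_b) / len(union) if union else 0.0

def combined_score(features_a, features_b, tags_a, tags_b, w_audio=0.6):
    """Weighted fusion of content (audio features) and context (tags).

    The 0.6/0.4 weighting is an arbitrary illustrative choice."""
    return (w_audio * cosine(features_a, features_b)
            + (1 - w_audio) * jaccard(tags_a, tags_b))

# Toy example: two tracks with feature vectors and listener tags.
score = combined_score([0.9, 0.1, 0.3], [0.8, 0.2, 0.4],
                       {"jazz", "piano"}, {"jazz", "live"})
```

In a real system the audio features would come from signal analysis and the tags from user activity logs; the point of the integration is that neither source alone disambiguates the semantics as well as the fusion does.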
Special Session on Intelligent Audio Production
The aim of this session is to bring together, for the first time, the disparate community working in this field. The session will consider ways in which semantic information can be used to create intelligent systems that are capable of performing audio production tasks which are typically done manually by a sound engineer. Particular attention is paid to the psychoacoustics and knowledge engineering needed to devise such systems. Different signal processing and machine learning approaches will be considered, as well as intelligent user interface designs that enable their use. The state of the art and future directions for intelligent audio production will also be discussed.
An additional tutorial day (26 January, 2014) will be held at Queen Mary University of London, covering effective research practices (e.g. the use of version control and unit testing in audio research), intelligent audio production and automatic music mixing, and selected topics TBC (e.g. Semantic Web technologies for audio and/or sparse representations). Confirmed tutorials:
Tutorial 1: "Reusable software and reproducibility in music research"
by Chris Cannam, Luis Figueira and Mark Plumbley
The need to develop and reuse software to process data is almost universal in audio and music research. Many methods are developed in tandem with software implementations, and some of them are too complex or too fundamentally software-based to be reproduced readily from a published paper alone. For this reason, it is helpful for sustainable research to have software and data published along with papers. In practice, non-publication of code and data is still the norm and research software is commonly lost in the years following publication of the associated methods.
The tutorial will rapidly cover the use of version control software, code hosting facilities, aspects of testing and provenance, and software licensing for publication. During the session, a live coding example of a music analysis algorithm will be developed using a test-driven methodology and commonly available tools such as Python, Mercurial and BitBucket. Participants will be invited to code along. This tutorial will be of immediate practical interest to researchers within the community, and will also be highly relevant to research supervisors and research group leaders with an interest in policy and guidance.
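As a flavour of the test-driven approach the tutorial describes, here is a minimal, hypothetical sketch (the session's actual live example may differ): the tests are written first, and a simple music analysis function is implemented until they pass.

```python
def zero_crossing_rate(samples):
    """Fraction of consecutive sample pairs whose signs differ.

    A crude but common baseline audio feature, used here only to
    illustrate the test-first workflow."""
    if len(samples) < 2:
        return 0.0
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a >= 0) != (b >= 0))
    return crossings / (len(samples) - 1)

# Tests written before (and driving) the implementation above.
def test_silence_has_no_crossings():
    assert zero_crossing_rate([0.0] * 100) == 0.0

def test_alternating_signal_crosses_every_pair():
    assert zero_crossing_rate([1.0, -1.0] * 50) == 1.0

test_silence_has_no_crossings()
test_alternating_signal_crosses_every_pair()
```

Committing the tests alongside the code in a version control system such as Mercurial is what makes the analysis reproducible by others after publication.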
Tutorial 2: "Intelligent Systems for Sound Engineering"
by Josh Reiss
This Tutorial will provide an introduction and overview of intelligent technologies and concepts for production of audio content. It will focus on adaptive and autonomous approaches, informed by semantics, that can be used to create intelligent systems capable of performing audio production and sound engineering tasks. Fundamental concepts, especially in the emerging fields of cross-adaptive audio effects and multitrack signal processing, will be described, along with demonstrations of technologies based on these concepts.
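A cross-adaptive effect is one where the processing applied to one track is driven by the signal on another. The sketch below is an assumed minimal example of this idea, not material from the tutorial: a "ducking" rule attenuates a music track whenever a voice track is active, with illustrative threshold and gain values.

```python
import math

def rms(block):
    # Root-mean-square level of one block of samples.
    return math.sqrt(sum(x * x for x in block) / len(block))

def duck(music_blocks, voice_blocks, threshold=0.1, ducked_gain=0.3):
    """Cross-adaptive ducker: the music gain depends on the voice level.

    Threshold and gain values are illustrative assumptions."""
    out = []
    for music, voice in zip(music_blocks, voice_blocks):
        gain = ducked_gain if rms(voice) > threshold else 1.0
        out.append([s * gain for s in music])
    return out

# Toy usage: voice is silent in the first block, active in the second,
# so only the second music block is attenuated.
music = [[0.5, -0.5, 0.5, -0.5], [0.5, -0.5, 0.5, -0.5]]
voice = [[0.0, 0.0, 0.0, 0.0], [0.8, -0.8, 0.8, -0.8]]
processed = duck(music, voice)
```

An intelligent production system would replace the fixed threshold and gain with parameters derived from psychoacoustic models or learned from engineers' mixing decisions, which is precisely the kind of system the tutorial surveys.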
More details about the main conference programme and tutorials will be published here soon.