Main Conference Programme
The three (+1) day conference programme (26-29 January 2014) includes oral and poster sessions held at the Barbican Centre in a convenient central London location, social events, and a technical tour of the BBC in London (participant numbers will be limited and places allocated on a first-come, first-served basis, subject to registration). A tutorial day (26 January 2014) will be held at Queen Mary University of London and will be free to attend for all delegates.
Keynote and Invited Speakers
The programme highlights 3 keynote and 4 invited talks from world-leading researchers in the field of Semantic Audio, as well as speakers from industry. Abstracts of the talks and short biographies of the speakers are available via the links below.
An up-to-date list of the invited talks and further details of the main conference programme will be published here as they become available.
Technical Tour
A technical tour will take place at the new BBC Broadcasting House on Wednesday 29 January (approximately 3pm-5pm). The tour is free for conference participants, but you will need to register upon arrival.
Broadcasting House is the BBC's new state-of-the-art multimedia broadcasting centre in the heart of London. This world-class facility houses several radio and television networks, including BBC Radio 1 and the BBC World Service, accommodates some 6,000 staff and serves a worldwide audience of 241 million people. It is the iconic new home for the BBC's network and global services in television, radio, news and online. New Broadcasting House heralds a simpler, more integrated digital service for audiences, and a simpler, more creative environment for staff.
The tour will provide an introduction to several live media production and post-production facilities, and give an overview of the production and broadcast workflows within the BBC. Participation is free, but places are limited to 40 delegates, available on a first-come, first-served basis. Groups will be taken from the conference venue; if you need to travel on your own, please find travel information to BBC Broadcasting House on the venues page.
Two special sessions are organised: "Semantic Audio Organization and Retrieval – Integrating User and Audio Information", chaired by Jan Larsen and Mark Plumbley, and "Intelligent Audio Production", chaired by Josh Reiss and Bryan Pardo. Special session papers are fully peer-reviewed and form part of the conference proceedings.
Semantic Audio Organization and Retrieval – Integrating User and Audio Information
Higher-level semantic representations are essential for the organization, search, retrieval, and discovery of audio and music. In order to mitigate semantic ambiguity and ensure an actionable representation, the formulation and implementation of an integrated framework is required. The framework should ideally consider all available information concerning the content and context of audio objects, as well as information about users' context, demographics, usage activity, and content descriptions such as tags or scores. The objective of the special session is to address current modelling, retrieval and interfacing trends with experts in the field.
Special Session on Intelligent Audio Production
The aim of this session is to bring together, for the first time, the disparate community working in this field. The session will consider ways in which semantic information can be used to create intelligent systems that are capable of performing audio production tasks which are typically done manually by a sound engineer. Particular attention is paid to the psychoacoustics and knowledge engineering needed to devise such systems. Different signal processing and machine learning approaches will be considered, as well as intelligent user interface designs that enable their use. The state of the art and future directions for intelligent audio production will also be discussed.
An additional tutorial day (26 January 2014) will be held at Queen Mary University of London, covering effective research practices (e.g. the use of version control and unit testing in audio research), intelligent audio production and automatic music mixing, and selected topics TBC (e.g. Semantic Web technologies for audio and/or sparse representation). Confirmed tutorials:
Tutorial 1: "Reusable software and reproducibility in music research"
by Chris Cannam, Luis Figueira and Mark Plumbley
The need to develop and reuse software to process data is almost universal in audio and music research. Many methods are developed in tandem with software implementations, and some of them are too complex or too fundamentally software-based to be reproduced readily from a published paper alone. For this reason, it is helpful for sustainable research to have software and data published along with papers. In practice, non-publication of code and data is still the norm and research software is commonly lost in the years following publication of the associated methods.
The tutorial will cover the use of version control software, code hosting facilities, aspects of testing and provenance, and software licensing for publication. During the session, a live coding example of a music analysis algorithm will be delivered using a test-driven development methodology and commonly available tools such as Python, Mercurial and BitBucket. Participants will be invited to code along. This tutorial will be of immediate practical interest to researchers within the community, and will also be highly relevant to research supervisors and research group leaders with an interest in policy and guidance.
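To give a flavour of the test-driven style the tutorial describes, a minimal sketch in Python is shown below: a test is written first for a simple music-analysis feature, then just enough code is implemented to make it pass. The feature chosen here (zero-crossing rate) is purely illustrative and is not necessarily the example used in the tutorial itself.

```python
# Illustrative sketch only: test-driven development of a small
# music-analysis function (zero-crossing rate), in the spirit of the
# tutorial's live coding session.
import unittest


def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs whose signs differ.

    A crude noisiness/brightness feature often used in audio analysis.
    """
    if len(samples) < 2:
        return 0.0
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return crossings / (len(samples) - 1)


class TestZeroCrossingRate(unittest.TestCase):
    # In test-driven development these tests are written first,
    # fail against an empty implementation, and drive the code above.
    def test_alternating_signal_crosses_at_every_step(self):
        self.assertEqual(zero_crossing_rate([1, -1, 1, -1]), 1.0)

    def test_constant_signal_never_crosses(self):
        self.assertEqual(zero_crossing_rate([1, 1, 1, 1]), 0.0)

    def test_short_input_is_defined(self):
        self.assertEqual(zero_crossing_rate([5]), 0.0)


if __name__ == "__main__":
    unittest.main()
```

In practice each test/implementation step would be committed to version control (e.g. Mercurial) as it passes, which is exactly the workflow the tutorial walks through.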
Tutorial 2: "Intelligent Systems for Sound Engineering"
by Josh Reiss
This Tutorial will provide an introduction and overview of intelligent technologies and concepts for production of audio content. It will focus on adaptive and autonomous approaches, informed by semantics, that can be used to create intelligent systems capable of performing audio production and sound engineering tasks. Fundamental concepts, especially in the emerging fields of cross-adaptive audio effects and multitrack signal processing, will be described, along with demonstrations of technologies based on these concepts.
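As a hedged illustration of the cross-adaptive idea the tutorial surveys, the sketch below shows one of the simplest such effects: a "ducker", where the level of one track (e.g. a voice) controls the gain applied to another (e.g. music). The function names, parameters and frame-based structure are our own assumptions for illustration, not material from the tutorial.

```python
# Illustrative sketch of a cross-adaptive effect: one track's level
# controls another track's gain (simple ducking). Not from the tutorial;
# names and parameters are hypothetical.
import math


def rms(frame):
    """Root-mean-square level of one frame of samples."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))


def duck(music, voice, frame_size=4, threshold=0.1, reduced_gain=0.3):
    """Attenuate `music` wherever `voice` is active.

    Both inputs are equal-length lists of samples, processed frame by
    frame. The gain here is a hard switch; a real system would smooth
    it over time to avoid audible clicks.
    """
    out = []
    for start in range(0, len(music), frame_size):
        m = music[start:start + frame_size]
        v = voice[start:start + frame_size]
        gain = reduced_gain if rms(v) > threshold else 1.0
        out.extend(s * gain for s in m)
    return out
```

The "intelligent" systems discussed in the session generalise this pattern: semantic and psychoacoustic knowledge decides *how* one track's analysis should drive another track's processing, across many tracks at once.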
The conference will provide a great opportunity to network and meet other researchers as well as delegates from industry. Our social programme will provide the right atmosphere to relax and facilitate discussion, and includes (optional) visits to London pubs.
The Gala Dinner will take place on Tuesday evening (28 January) from approximately 18.30-22.30. Venue TBC.
More details about the main conference programme and tutorials will be published here soon.