In This Section
- Networks - High-performance streaming audio-over-IP interoperability; AES67-xxxx DRAFT REVISION proposed for comment
- Universal jack for 6,35 mm plugs; AES-14id-2010 proposed for reaffirmation
- Measurement of digital audio equipment; AES17 draft revision proposed for comment
- Spatial acoustic data file format; AES69-2015 published
AES Standards News Blog
AES31-2-2012, AES standard on network and file transfer of audio – Audio-file transfer and exchange – File format for transferring digital audio data between systems of different type and manufacture, has been published.
The Broadcast Wave Format is a file format for audio data. It can be used for the seamless exchange of audio material between (i) different broadcast environments and (ii) equipment based on different computer platforms.
As well as the audio data, a BWF file contains the minimum information, or metadata, considered necessary for all broadcast applications. The Broadcast Wave Format is based on the Microsoft WAVE audio file format; this specification adds a "Broadcast Audio Extension" chunk to the basic WAVE format.
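The extension chunk described above is an ordinary RIFF chunk alongside the usual fmt and data chunks. The sketch below builds a simplified "bext" chunk body in Python; the field widths of the leading fields follow the published BWF layout, but later fields (UMID, loudness values, reserved bytes) are omitted, so this is an illustration of the mechanism, not a spec-complete writer.

```python
import struct

def build_bext_chunk(description, originator, origination_date, origination_time):
    """Build a simplified 'bext' (Broadcast Audio Extension) chunk.

    Illustrative sketch only: the real chunk carries further fields
    (UMID, loudness metadata, reserved bytes) that are omitted here.
    """
    body = b"".join([
        description.encode("ascii").ljust(256, b"\x00"),      # Description
        originator.encode("ascii").ljust(32, b"\x00"),        # Originator
        b"".ljust(32, b"\x00"),                               # OriginatorReference
        origination_date.encode("ascii").ljust(10, b"\x00"),  # yyyy-mm-dd
        origination_time.encode("ascii").ljust(8, b"\x00"),   # hh:mm:ss
        struct.pack("<IIH", 0, 0, 1),                         # TimeReference lo/hi, Version
    ])
    # A RIFF chunk is: 4-byte ID, little-endian 32-bit size, then the body.
    return b"bext" + struct.pack("<I", len(body)) + body

chunk = build_bext_chunk("News item", "Studio A", "2013-01-24", "12:00:00")
```

Because unknown chunks are skipped by ordinary WAVE readers, a file carrying this chunk still plays in software that knows nothing about BWF.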
This revision of AES31-2-2006 incorporates the content of Amendment 1 (2008), which specifies an optional Extended Broadcast Wave Format (BWF-E) file format, designed as a compatible extension of the Broadcast Wave Format (BWF) for audio files larger than a conventional WAVE file can accommodate. It extends the maximum size capabilities of the RIFF/WAVE format by increasing its address space to 64 bits where necessary. BWF-E is also designed to be mutually compatible with the EBU T3306 "RF64" extended format.
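The 64-bit escape mechanism in the RF64 format can be sketched as follows: when the file fits within 32-bit sizes, a normal RIFF/WAVE header is written; otherwise the form ID becomes "RF64", the 32-bit size field is pinned to 0xFFFFFFFF, and a "ds64" chunk carries the true 64-bit sizes. This is a minimal sketch of that idea, not a complete header writer.

```python
import struct

MAX_U32 = 0xFFFFFFFF

def riff_header(riff_size, data_size, sample_count):
    """Sketch of the RF64 size-escape mechanism.

    Small files keep an ordinary RIFF/WAVE header. For larger files the
    form ID becomes 'RF64', the 32-bit size field is set to 0xFFFFFFFF,
    and a 'ds64' chunk holds the real 64-bit sizes.
    """
    if riff_size <= MAX_U32 and data_size <= MAX_U32:
        return b"RIFF" + struct.pack("<I", riff_size) + b"WAVE"
    # ds64 body: riffSize u64, dataSize u64, sampleCount u64, table length u32
    ds64_body = struct.pack("<QQQI", riff_size, data_size, sample_count, 0)
    ds64 = b"ds64" + struct.pack("<I", len(ds64_body)) + ds64_body
    return b"RF64" + struct.pack("<I", MAX_U32) + b"WAVE" + ds64
```

Readers that see "RF64" know to ignore the pinned 32-bit fields and take every size from the ds64 chunk instead.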
This revision additionally packages a set of machine-readable loudness metadata into the BWF file, compatible with EBU version 2 broadcast wave files.
Posted: Thursday, January 24, 2013
AES55-2012, AES standard for digital audio engineering - Carriage of MPEG Surround in an AES3 bitstream, has been published.
MPEG-D is an ISO/IEC standard describing MPEG Surround, which extends mono or stereo audio towards multiple channels. The mono or stereo audio channels represent a downmix of the original multi-channel audio, generated by the MPEG Surround encoder. In addition, the encoder generates spatial side information (the MPEG Surround data). An MPEG Surround decoder can combine this side information with the downmix to reconstruct a multi-channel audio signal. In this way, backward compatibility with mono and stereo systems is achieved.
More recently, MPEG-D has been revised to include MPEG SAOC (Spatial Audio Object Coding) that uses the same method to convey the related side information over PCM.
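The downmix-plus-side-information idea above can be illustrated with a deliberately tiny toy. This is not MPEG Surround: real encoders work per time/frequency tile with far richer spatial parameters. Here the "side information" is just one left/right level ratio for the whole signal, and the function names are invented for the sketch.

```python
def encode(left, right):
    """Toy downmix-plus-side-information encoder (not MPEG Surround)."""
    downmix = [(l + r) / 2.0 for l, r in zip(left, right)]
    energy_l = sum(x * x for x in left)
    energy_r = sum(x * x for x in right)
    ratio = energy_l / (energy_l + energy_r + 1e-12)  # the "side information"
    return downmix, ratio

def decode(downmix, ratio):
    """Rebuild a stereo approximation from the downmix and the side info."""
    left = [2.0 * ratio * m for m in downmix]
    right = [2.0 * (1.0 - ratio) * m for m in downmix]
    return left, right
```

The backward-compatibility property falls out directly: a legacy decoder simply plays the downmix and ignores the side information.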
This standard specifies how MPEG Surround or MPEG SAOC shall be carried within an AES3 bitstream where the downmix channels remain in the linear PCM domain and the MPEG Surround or MPEG SAOC data is embedded into the least-significant bits of the PCM audio data.
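Embedding a side-information bitstream in the least-significant bits of PCM samples, as described above, can be sketched in a few lines. The framing, sync words, and error protection of the actual standard are omitted; the number of LSBs used and the bit-packing order here are illustrative choices, not those of AES55.

```python
def embed_lsbs(pcm, payload_bits, n_lsbs=1):
    """Hide a bitstream in the low bits of integer PCM samples (sketch)."""
    out = []
    bits = iter(payload_bits)
    for sample in pcm:
        cleared = sample & ~((1 << n_lsbs) - 1)  # zero the low bits
        value = 0
        for i in range(n_lsbs):
            value |= next(bits, 0) << i          # pad with 0 when exhausted
        out.append(cleared | value)
    return out

def extract_lsbs(pcm, n_bits, n_lsbs=1):
    """Recover the embedded bits from the low end of each sample."""
    bits = []
    for sample in pcm:
        for i in range(n_lsbs):
            bits.append((sample >> i) & 1)
    return bits[:n_bits]
```

Because only the least-significant bits are disturbed, equipment unaware of the embedded data still decodes audibly normal PCM from the same AES3 stream.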
Posted: Thursday, January 24, 2013
AES64-2012, AES standard for audio applications of networks - Command, control, and connection management for integrated media, has been published.
This standard for networked command, control, and connection management for integrated media is an IP-based peer-to-peer network protocol, in which any device on the network may initiate or accept control, monitoring, and connection management commands. The AES64 protocol has been developed around three important concepts: structuring, joining, and indexing. Every parameter is part of a structure, and control is possible at any level of the structure, allowing control over whole sets of parameters. Parameters can be joined into groups, enabling control over many disparate parameters from a single control source. Every parameter also has an index associated with it; once discovered, this index provides a low-bandwidth alternative for parameter control.
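The three concepts above can be modeled in a small sketch: hierarchical addresses (structuring), parameter groups (joining), and a flat numeric handle per parameter (indexing). The class, addresses, and method names here are invented for illustration and bear no relation to the actual AES64 wire protocol.

```python
class ParameterTree:
    """Simplified model of AES64's structuring, joining, and indexing."""

    def __init__(self):
        self.params = {}  # hierarchical address tuple -> value
        self.index = {}   # numeric index -> address
        self.joins = {}   # address -> set of joined addresses

    def add(self, address, value=0.0):
        self.params[address] = value
        self.index[len(self.index) + 1] = address  # assigned on discovery

    def set_value(self, address, value):
        self.params[address] = value
        for peer in self.joins.get(address, ()):   # joined parameters follow
            self.params[peer] = value

    def set_by_index(self, idx, value):
        # The index is a low-bandwidth alternative to sending the
        # full hierarchical address with every command.
        self.set_value(self.index[idx], value)

    def join(self, a, b):
        self.joins.setdefault(a, set()).add(b)
        self.joins.setdefault(b, set()).add(a)

    def set_subtree(self, prefix, value):
        # Structuring: a command at any level acts on a whole set.
        for addr in self.params:
            if addr[:len(prefix)] == prefix:
                self.params[addr] = value
```

For example, joining two channel faders means a single command to one of them moves both, and addressing the "mixer" level of the hierarchy touches every parameter beneath it.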
Posted: Monday, January 14, 2013