
AES Section Meeting Reports

New York - February 10, 2009

Meeting Topic:

Moderator Name:

Speaker Name:

Meeting Location:

Summary

Traditional audio routing, both inside a single facility and from one facility to another, has been poised to make a fundamental shift for a few years now. As with the fundamental audio industry shifts that preceded it, other developments had to take place to make this shift both possible and viable. Those developments now appear to be in place, and the rollout is under way and approaching critical mass. The evening's presenters were Chris Regan, VP of Sales and Operations for APT in North America, and Howard Mullinack of Wheatstone.
Chris Regan focused on the point-to-point transmission of audio between different facilities via an IP network. IP links for audio purposes are now, in general, less expensive to install and operate than a single T-1 circuit. An IP network can be scaled as necessary, and does not require licensing. A dedicated link such as a T-1 is an option, as it is always on and can be uncontested. Another option is MPLS (Multi-Protocol Label Switching). MPLS offers bandwidth reservation and service guarantees, and assigns each network packet a label directing it along the preferred route through the network. Chris astutely pointed out that bandwidth allocation is too variable on the public internet, though it can be used as an STL (studio-to-transmitter link) backup. If a private link is the better option, but you cannot get a wired link to your location, what do you do? Go wireless! The increase in microwave link deployments has driven down the cost of such services, and a wireless link also adds an IP connection to your remote site. If your remote site is a transmitter, and you need a PDF manual to troubleshoot something, you can get it with a wireless IP link. Without it, it's a long drive back to the office, or a long phone call to someone who has that PDF.
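For readers less familiar with MPLS, here is a small conceptual sketch in Python of the label-switching idea, purely illustrative and not a real MPLS implementation; the router names, labels, and forwarding tables are made up. Each router looks up the incoming label and swaps it for an outgoing label on a pre-provisioned path, which is how the network keeps the traffic on its reserved route.

    # Conceptual sketch of MPLS-style label switching (illustrative only).
    # Each router holds a label-forwarding table: incoming label -> (outgoing label, next hop).
    # Router names and label values are hypothetical.
    FORWARDING_TABLES = {
        "edge_router_a": {101: (201, "core_router_1")},
        "core_router_1": {201: (301, "core_router_2")},
        "core_router_2": {301: (None, "edge_router_b")},  # last hop: pop the label
    }

    def follow_path(label, router):
        """Trace one packet along its pre-provisioned label-switched path."""
        hops = []
        while label is not None:
            out_label, next_hop = FORWARDING_TABLES[router][label]
            hops.append((router, label, next_hop))
            router, label = next_hop, out_label
        return hops

    # A packet assigned label 101 at edge_router_a follows the reserved route:
    for hop in follow_path(101, "edge_router_a"):
        print(hop)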
Chris also addressed how the audio is sent over an IP network. It can be delivered via unicast (point-to-point) or via multicast, which involves sending the packets to a multicast router that replicates them to their destinations. After choosing the delivery method, the question then becomes which protocol to use. TCP, being a send-and-receive handshake protocol, is not well suited for audio in these applications. The preferred methods are RTP (Real-Time Transport Protocol) and UDP (User Datagram Protocol). RTP uses timestamps and sequence numbering so that out-of-order packets can be reassembled at the receiver, and UDP does not require a handshake. The result is lower overhead along with a way of restoring packet order. Larger packets reduce network jitter and bandwidth requirements, while increasing delay and creating a greater potential for packet loss. Smaller packets increase network jitter and bandwidth requirements, while reducing delay and protecting against packet loss. Jitter can be countered by using buffers in the transmission chain. The delay is affected by the choice of audio compression, though typical delays are 10-30 milliseconds. Chris also noted that packet loss is inevitable. The quality of the IP link will go a long way in determining how many packets are dropped, and how often drops occur. A link is best tested over the course of a week before going live, to get a clear picture of when traffic peaks and what the effects will be. QoS (Quality of Service) settings can ensure that the audio packets have a higher priority on the network. Working through the Service Level Agreement (SLA) with the IP link provider helps establish what both parties expect and consider acceptable service.
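For the curious, here is a minimal Python sketch of what RTP-style packetization over UDP looks like. It is a simplified illustration, not a full RFC 3550 implementation, and the frame size, port, and receiver address are assumptions chosen for the example.

    # Minimal sketch of RTP-style audio packetization over UDP (illustrative only).
    import socket
    import struct

    SAMPLE_RATE = 48000                 # samples per second
    FRAME_MS = 5                        # 5 ms of audio per packet: less delay, more header overhead
    SAMPLES_PER_PACKET = SAMPLE_RATE * FRAME_MS // 1000
    HEADER = struct.Struct("!HI")       # sequence number (16-bit) + timestamp (32-bit)

    def packetize(pcm_frames):
        """Prepend a sequence number and timestamp to each raw PCM byte block."""
        seq, timestamp = 0, 0
        for payload in pcm_frames:
            yield HEADER.pack(seq & 0xFFFF, timestamp & 0xFFFFFFFF) + payload
            seq += 1
            timestamp += SAMPLES_PER_PACKET

    def send(pcm_frames, addr=("192.0.2.10", 5004)):              # hypothetical receiver
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP: no handshake
        for datagram in packetize(pcm_frames):
            sock.sendto(datagram, addr)

    def reorder(datagrams):
        """Receiver side: use the sequence numbers to put late packets back in order.
        A real receiver uses a bounded jitter buffer rather than collecting everything."""
        buffered = {}
        for datagram in datagrams:
            seq, _timestamp = HEADER.unpack(datagram[:HEADER.size])
            buffered[seq] = datagram[HEADER.size:]
        return [buffered[s] for s in sorted(buffered)]

At 5 ms per packet, each datagram carries 240 samples per channel; halving the frame size would halve the packetization delay but double the number of headers sent per second, which is the bandwidth/delay trade-off described above.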
Howard Mullinack focused his presentation on Wheatstone's E-Square system, a system for routing audio within a single facility. The E-Square system is built from several different modules, called squares, that each occupy 1RU and serve various purposes. Each I/O square is 16x16, and can be configured as 16 mono, 8 stereo, or any combination that adds up to 16 channels (6 stereo and 4 mono, for example). The I/O connections are via DB-25 and RJ-45 connectors. Each square has two 8x2 stereo virtual utility mixers, allowing any system sources to be mixed and that mix to be made available as an output anywhere on the hardware squares. The mix engine is housed in a separate E-Square. There is also a digital snake with 16 audio channels that can be all digital, all analog, or half-and-half. The system can be synchronized externally, and will fail over to internal sync if the external sync, or the square receiving that sync, goes offline.
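To make the channel-count arithmetic concrete, this small Python sketch (an illustration, not anything shown in the presentation) enumerates every stereo/mono mix that fills a 16-channel I/O square:

    # Any mix of stereo (2 channels each) and mono sources totalling 16 channels is valid.
    TOTAL_CHANNELS = 16

    def valid_configurations(total=TOTAL_CHANNELS):
        """Return every (stereo, mono) combination that fills a 16x16 I/O square."""
        return [(stereo, total - 2 * stereo) for stereo in range(total // 2 + 1)]

    for stereo, mono in valid_configurations():
        print(f"{stereo} stereo + {mono} mono = {2 * stereo + mono} channels")
    # The list runs from 0 stereo / 16 mono to 8 stereo / 0 mono, and includes
    # the 6 stereo + 4 mono example mentioned above.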
A PC (sorry, no Vista) can be hooked into the system, and is then called an IP Square. The Glass-E virtual console displays, on a computer screen, what a console connected to an E-Square system looks like, and it runs under virtualization software such as Parallels (no endorsement implied). Each square in the E-Square system holds a copy of the configuration of the entire system. Any E-Square can fail, and any other E-Square can take over as the master square. E-Squares on the edges of a system can operate on their own if they are disconnected from the rest of the E-Square system. The latency is about 2.5 milliseconds, and the system uses RTP/UDP, which is also what APT recommends. The E-Square system requires Gigabit Ethernet, and though it could share an Ethernet network with your typical IP traffic, doing so is not advised.
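Some rough, back-of-the-envelope arithmetic (assuming 48 kHz / 24-bit uncompressed audio and ignoring packet and header overhead, neither of which was specified in the presentation) suggests why Gigabit Ethernet, ideally on its own network, is the sensible requirement:

    # Rough capacity estimate for uncompressed audio over Ethernet (assumptions noted above).
    SAMPLE_RATE = 48_000            # Hz
    BITS_PER_SAMPLE = 24
    CHANNEL_BPS = SAMPLE_RATE * BITS_PER_SAMPLE            # about 1.15 Mbit/s per mono channel

    FAST_ETHERNET = 100_000_000     # 100 Mbit/s
    GIGABIT = 1_000_000_000         # 1 Gbit/s

    print(CHANNEL_BPS / 1e6, "Mbit/s per channel")                  # 1.152
    print(FAST_ETHERNET // CHANNEL_BPS, "channels on 100 Mbit/s")   # 86
    print(GIGABIT // CHANNEL_BPS, "channels on 1 Gbit/s")           # 868

    # The quoted 2.5 ms of latency corresponds to roughly this much audio:
    print(int(SAMPLE_RATE * 0.0025), "samples at 48 kHz")           # 120

Even a modest facility-wide routing system quickly outgrows a 100 Mbit/s link once headers, control data, and ordinary office traffic are added, which is one reason keeping the audio on its own Gigabit network is advised.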
Will the day arrive when a single system or manufacturer routes audio over IP both within a facility and between remote locations? That cannot be accurately predicted, but it can be accurately stated that audio over IP is poised to make a mainstream leap both within the facility and in point-to-point applications.

Report by Jonathan S. Abrams.
