
AES Section Meeting Reports

Toronto - November 26, 2012

Meeting Topic: Review of papers, exhibits and workshops from the 132nd and 133rd AES Conventions

Moderator Name: Sy Potma

Speaker Name: John Vanderkooy, Robert Breen, Alan Clayton, Jeff Bamford, Ron Lynch

Other business or activities at the meeting:

Meeting Location:

Summary

This evening's meeting was held in two parts separated by a social break. The first half was the annual review of convention papers with Dr. John Vanderkooy; the second consisted of exhibit and workshop review presentations by Bob Breen, Alan Clayton, Jeff Bamford, and Ron Lynch.

Rob introduced the evening's moderator, Sy Potma, who began by reading John Vanderkooy's biography. Towards the end of the reading, noting John's current interests in active acoustic absorbers and the acoustics of the trumpet, Sy asked whether anything would be discussed about trumpets. John replied, "No, not tonight!"

Dr. Vanderkooy noted that as many as 100 or more papers are produced at each convention. What he presents is a reflection of what interests him, and he hoped the audience would feel likewise. He offered reviews from both the 132nd and 133rd conventions, though he mentioned he did not attend the 132nd convention in Budapest.

His presentation was accompanied visually by slides of the PDFs of the various papers he reviewed. One oddity he noted about the slides for the Budapest convention was that some of the paper numbers appeared only as four dots, perhaps due to "some strange Hungarian font", though he wasn't sure about that.

Before getting to the convention papers, the first item he reviewed came from an October 2011 Melbourne AES section meeting, entitled "Evaluation of Vibrating Sound Transducers with Glass Membrane Based on Measurements and Numerical Simulations". John initially thought it was interesting but eventually called it "a crock!", adding "there's enough to talk about" in reference to the papers he was about to review.

John's presentation lasted about an hour. From the Budapest convention, the papers entitled "Towards An Unbiased Standard In Testing Laptop PC Audio Quality" (No. 8682) and "Some New Evidence That Teenagers and College Students May Prefer Accurate Sound Reproduction" (No. 8683) generated some laughs when he first displayed them.

A similar reaction occurred with the paper entitled "From Short-To-Long Range Signal Tunnelling", which John called "ridiculous" and not an audio paper! It describes attempts to send signals faster than the speed of light by using tunnelling. "It's garbage!" he said. "You can't do it because what happens is, if you actually look at the dispersion relationships, while you can certainly cause super-luminal things in certain frequency ranges, as soon as you want to make a signal envelope, you'll find it's always delayed and always lower than the speed of light... They'll find out the hard way."

A paper John was really keen on, however, was entitled "How The Ear Really Works". It was a summary of neuro-physicist Jim Fulton's research, presented by Dr. Rodney Staples to the Melbourne Section in 2011 (before the Budapest convention). John highlighted this with a series of pictorial slides. "He has a model of the ear that is really quite remarkable." A lot of things that didn't make sense before do now. "It's really a step forward." He noted the website neuronresearch.net/hearing, where more information can be found. The research is self-published, bypassing the standard journals.

When introducing papers from the 133rd convention, John noted that while he was presenting what he liked, he stressed, "you really have to look at the whole convention. If you're into something, you can find it there." He also added that when papers cover similar topics, the AES tries to group them together into a session, conference, or panel discussion so that everyone is represented.

Continuing with some background history about topics presented at conventions, he stated "we had loudspeakers, we recorded sound, and made tapes and that's it! Well: we do so much more now - we still do those things - but we do a lot more."

One paper that invoked some audience discussion was entitled "The Influence of 2D & 3D Video Playback On The Perceived Quality of Spatial Audio Rendering for Headphones". It discussed perceived audio quality with 2D and 3D video playback. The question John focused on was: "if you had 3D video representation, would surround (3D) audio sound better or not?" Jumping to the paper's conclusion, he reported that the better you make the video, the worse the audio sounds if you don't change it. John called this interesting and remarkable.

An audience member then cited a 3D conference at the Toronto International Film Festival where Wim Wenders discussed his then-new film shot in 3D. On the question of the audio, it ultimately had to be remixed because the original surround mix had been made against traditional 2D playback and was "not right". Another comment was that the audio mix has to match the 3D motion; John added that he thinks it has to 'jive'.

When John mentioned his grandson doesn't like 3D movies, someone said that some people - around 5% - get headaches from 3D movies, mentioning inner-ocular disturbances.

Before continuing, John, as Journal editor, wanted to remind the audience about 'engineering briefs'. These are short papers through which anybody can give a talk: for example, if one has found something interesting in a studio, send it in and talk about it; as long as it isn't too commercial or 'silly', it will be accepted. His reasoning: in the past, people sometimes had 'wonderful', off-the-wall ideas that were interesting to see, and John simply wanted to bring that back. The briefs are put on the website for all members to view; there are currently between fifty and sixty of them, and the number is growing.

When John briefly mentioned a paper discussing what happens when you employ silicon carbide diodes and MOSFETs, an audience member asked about the conclusions of the paper: whether the devices were superior or offered greater safety. John replied that there are problems associated with them because they're different, but they're generally regarded as being better. A silicon carbide diode will probably have a higher turn-on voltage, which is a bit of a negative, but they have terrific turn-off times. They do things quite differently from silicon. He thinks they're worth looking at.

One paper John presented was his own engineering brief, co-authored with his colleague Kevin Krauel, entitled "Another View of Distortion Perception". Describing the process, he explained that they designed a circuit in which the distortion level can be adjusted independently of the input level. The results were quite interesting: distortion below 1% was quite inaudible, even with sine waves at frequencies of 400-500 Hz, where the ear is most sensitive. With music, distortion was inaudible below 10%! John believed that with more practice people could improve their detection by a factor of two, to around 4-5%. He found it remarkable how poor the ear is at hearing distortion.
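As a rough digital illustration of that idea (a minimal sketch only, not the analog circuit described in the brief; the function name and parameters are hypothetical), the test tone below has its harmonic-distortion ratio set independently of its output level:

```python
import numpy as np

def tone_with_thd(freq_hz, thd_percent, level_dbfs, duration_s=1.0, fs=48000):
    """Hypothetical test-signal generator: a sine plus a second harmonic whose
    amplitude ratio sets the distortion, scaled afterwards to the requested
    level, so distortion and level are independent (a digital stand-in for
    the kind of circuit described in the brief)."""
    t = np.arange(int(duration_s * fs)) / fs
    fundamental = np.sin(2 * np.pi * freq_hz * t)
    # With a single harmonic, THD is simply the harmonic/fundamental amplitude ratio.
    harmonic = (thd_percent / 100.0) * np.sin(2 * np.pi * 2 * freq_hz * t)
    signal = fundamental + harmonic
    # Level is applied after the distortion ratio is fixed.
    gain = 10.0 ** (level_dbfs / 20.0) / np.max(np.abs(signal))
    return gain * signal

# Example: a 450 Hz tone with 1% THD at -12 dBFS, near the region John mentioned
test_tone = tone_with_thd(450.0, 1.0, -12.0)
```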

After one more paper, John concluded his presentation. Sy suggested taking a break.

After the break, the exhibit and workshop reviews followed.

Sy introduced Robert Breen, the AES Vice President for the Eastern Region, US and Canada. Bob noted that his duties often preclude him from doing "all the fun audio stuff everyone gets to do"! As a result, his overview was more general. His presentation also reflected what he found interesting and was accompanied by a slide show.

A brief discussion of future conference locations ensued with the audience.

Because of the Internet, having a "first look" or unveiling at the show is almost impossible, so what's happening instead is that manufacturers are educating customers. To that end, a "Project Studio Expo" was held on the trade show floor and was a huge success.

There was also a bit of "the more things change, the more they stay the same": for example, there are five companies making Neves, as well as at least three attempts at a Fairchild! Also of interest was a reverb remover, which Bob said sounded quite effective.

As usual, there were high-profile audio engineers talking about their work, the keynote being Steve Lillywhite, who talked about working with U2. The panel discussions included the "usual suspects"!

There were also many events outside the main convention panels. Bob talked about the student party, co-sponsored by SPARS and the student delegate assembly, which was held at Coast Recorders studio; Chris Stone received a SPARS fellowship. Bob said it was the most interesting studio he'd ever seen because it was fitted out with all brand-new analog gear.

He discussed the screening of "The Wrecking Crew" documentary.

He also gave a bit of an inside look at where the Board of Governors meeting was held. The incoming president of the AES is Sean Olive.

One of the most interesting talks for Bob was one on vocal modes, given by the Complete Vocal Institute. That presentation included medical footage from a camera showing the vocal cords vibrating while someone was singing.

He talked about setting up student sections in recording schools to encourage more new members. He's available to help set this up with anyone who's interested.

Lastly, he mentioned a regional conference coming up in July: an educators' conference for teachers being held in Nashville.

Sy introduced recent executive member Alan Clayton. This was Mr. Clayton's first presentation from the convention.

Alan brings a different view since he comes from the contracting world. He attends events such as the live sound sessions and anything applicable to contracting and sound system installation.

He first discussed the networked audio track, displaying the sessions that were held. In the live sound track, he found a workshop on practical networking for live sound excellent.

One area Mr. Clayton focused on was the product design track, in particular AVB networking for product designers. He feels AVB will be a game changer, especially on the installation side of things. Giving a brief overview: Audio Video Bridging (AVB) is a method for transporting audio and video data over Ethernet, and refers to a collection of standards written by the IEEE. Previously, solutions to the inevitable dropouts of standard Ethernet involved proprietary protocols. He briefly discussed the new standards, protocols and changes. The reference time clock is now the network itself: devices on either end look to the network for synchronization.
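To make the "network as reference clock" idea concrete, here is a minimal sketch of the two-way time-transfer arithmetic that PTP-family protocols build on (AVB's IEEE 802.1AS profile uses a closely related peer-delay exchange). The function and timestamps are illustrative, and a symmetric path delay is assumed.

```python
def two_way_offset_and_delay(t1, t2, t3, t4):
    """Estimate a device's clock offset and the one-way path delay from a
    four-timestamp exchange, as in PTP-style network synchronization.

    t1: master sends Sync          (master clock)
    t2: slave receives Sync        (slave clock)
    t3: slave sends Delay_Req      (slave clock)
    t4: master receives Delay_Req  (master clock)

    Assumes the path delay is the same in both directions.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way network delay
    return offset, delay

# Example: slave clock runs 50 microseconds ahead over a 10 microsecond path
print(two_way_offset_and_delay(0.0, 60e-6, 100e-6, 60e-6))  # -> (5e-05, 1e-05)
```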

A question from John Vanderkooy referred to the constantly changing state of the Internet; he wondered how a system like this copes with that. An audience discussion ensued: someone stated that it would probably be a closed, non-public network just for audio, a system of its own. Applications that would use this include broadcast studios and schools. Standards are slow to be adopted because of the number of manufacturers involved. The discussion led nicely to Alan's next topic...

This was the AVnu Alliance. It has been likened to USB: they're trying to do the same for audio and video equipment. Alan's slide, showing about forty companies that are on board, represented only a portion of the companies signed on. What was interesting was that many car companies are on the list.

Another audience member wanted to know what the limitations and parameters were. Alan replied that it varies with the speed of the network, but one CAT-5 network can carry literally hundreds of channels; as speeds go up, different media such as fibre optics need to be considered.
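As a rough back-of-envelope illustration of that channel count (this is simple arithmetic, not Alan's figures), ignoring all packet and protocol overhead:

```python
# Back-of-envelope only: ignores packet headers, clock traffic and all other overhead.
def max_channels(link_bps, sample_rate_hz, bit_depth):
    per_channel_bps = sample_rate_hz * bit_depth
    return link_bps // per_channel_bps

print(max_channels(1_000_000_000, 48_000, 24))  # ~868 channels at 48 kHz / 24-bit on gigabit
print(max_channels(1_000_000_000, 96_000, 24))  # ~434 channels at 96 kHz / 24-bit on gigabit
```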

He also mentioned the AES X192 project, which was attempting something similar: interoperability between systems. An audience member mentioned that a RAVENNA-equipped box from Merging Technologies offered 256 channels of 192 kHz / 24-bit audio; so, nearly enough!

Alan closed by mentioning the Technical Committee for Acoustics and Sound Reinforcement's conference, planned for the summer of 2015 and tentatively to be held at McGill University.

Sy introduced Jeff Bamford.

Jeff discussed his attendance on the Dolby Labs tour, in particular the Atmos system: it allows for 128 simultaneous, lossless audio streams and accepts up to 64 loudspeakers with independent feeds. When Jeff mentioned that this is great news if you happen to be in the business of selling speakers and amplifiers, it generated many laughs from the audience.

Someone asked what happens if one speaker goes down. Jeff said there wasn't much discussion on that but there were tests on "system awareness".

Another member wanted to know, "what are the streams?" Jeff believed the streams represent the mix as created in a movie-house mix room, and the system then adapts that mix to whatever the destination space may have, i.e. six speakers or sixty. Bob Breen suggested this sounds similar to the runtime mixing used in gaming systems, where the rendering is done in real time, but on a larger scale.

Another audience member mentioned that the buzzwords he was hearing were "object-based" and "format-agnostic": creating archival materials that could be presented in any surround format. Jeff offered the analogy that it's like giving a Word document as a deliverable as opposed to a PDF.
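For readers unfamiliar with the object-based idea, here is a minimal, hypothetical sketch of runtime rendering: the same object metadata (a position) is turned into speaker gains at playback time for whatever layout the room has. This is only an inverse-distance toy, not Dolby's actual renderer.

```python
import math

def render_object_gains(obj_xy, speaker_positions, rolloff=2.0):
    """Toy object renderer: turn one object's position (its metadata) into
    per-speaker gains at playback time, for whatever layout is present.
    Inverse-distance weighting only; real renderers are far more sophisticated."""
    weights = []
    for sx, sy in speaker_positions:
        d = math.hypot(obj_xy[0] - sx, obj_xy[1] - sy) + 1e-6
        weights.append(1.0 / d ** rolloff)
    norm = math.sqrt(sum(w * w for w in weights))  # constant-power normalization
    return [w / norm for w in weights]

# The same object metadata rendered to a six-speaker room and a sixty-speaker room
six = [(math.cos(i * math.pi / 3), math.sin(i * math.pi / 3)) for i in range(6)]
sixty = [(math.cos(i * math.pi / 30), math.sin(i * math.pi / 30)) for i in range(60)]
obj = (0.7, 0.2)
print(len(render_object_gains(obj, six)), len(render_object_gains(obj, sixty)))  # 6 60
```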

Someone mentioned that if they were mixing, they would create eight sets of metadata for different systems, because someone has to make these decisions. The audience discussion continued to the point where Sy mentioned that things had to move along. Jeff felt this topic would make a great general meeting, and he could foresee it reaching the home market in three to four years.

Jeff talked about a few more sessions he attended, including "Audio Encoding for Streaming", which focused on adaptive bit-rate switching (sketched below). Others were about audio networks, the gist being 'it's easier to train an audio guy about networking than a networking guy about audio'. Bill Whitlock gave a great talk about grounding. High-quality IP streaming and sound-system intelligibility with Peter Mapp and Ben Kok were other sessions he attended. Alan Clayton mentioned a workshop run by Peter Mapp in January in Nashville, in conjunction with SynAudCon.
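For context on adaptive bit-rate switching, here is a minimal sketch of the usual client-side idea (not taken from the session itself): measure recent throughput and pick the highest encoded bit rate that safely fits. The ladder values are hypothetical.

```python
def pick_bitrate(ladder_kbps, measured_throughput_kbps, safety=0.8):
    """Choose the highest encoded bit rate that fits within a safety margin of
    the measured throughput; fall back to the lowest rung otherwise."""
    budget = measured_throughput_kbps * safety
    fitting = [rate for rate in sorted(ladder_kbps) if rate <= budget]
    return fitting[-1] if fitting else min(ladder_kbps)

ladder = [64, 96, 128, 192, 256, 320]  # hypothetical audio bit-rate ladder, kbps
for throughput in (90, 250, 2000):
    print(throughput, "kbps measured ->", pick_bitrate(ladder, throughput), "kbps stream")
```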

The final session Jeff spoke about was entitled 'B16 - Streaming Experience'.

Referring back to Atmos, Bob Breen recalled that he first heard about runtime mixing at the Toronto AES conference on surround sound held at the CBC many years back.

Sy introduced Ron Lynch.

Ron began by saying the show was enormous, with as many as 12 events happening concurrently: not just the exhibits, but also workshops and broadcasts. It was hard to choose where to go.

He thought the area of 3D audio was handled quite well. Sony offered a "fantastic" overview of how not to screw up, but no particular products. Ron then briefly discussed different companies' approaches to 3D audio. Discussing Iosono's process, an audience member stated it was very similar to a high-order Ambisonics system. He also revisited the Atmos system, giving a somewhat more in-depth explanation. Discussing Auro's approach, someone from the audience asked, "does this remind anybody of Ambisonics?!"

Ron next touched on the topic of loudness and provided an 'executive summary'. His general conclusions from all the presentations he attended were: loud commercials are bad; loud dramatic programming is bad too (if it gets too loud, it gets smashed down); and quiet dramatic programming is bad as well! He didn't quite get that last one: anything quiet gets raised, and "who needs fifty-foot crickets in their living room?!"
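As background to what those presentations were describing (not Ron's numbers), here is a minimal sketch of target-based loudness normalization, assuming the -24 LKFS broadcast target referenced by ATSC A/85 and the CALM Act; the gain here is a simple static correction, not a real BS.1770 meter or processor.

```python
# Assumed broadcast target of -24 LKFS (the ATSC A/85 value the CALM Act points to).
TARGET_LKFS = -24.0

def correction_gain_db(program_loudness_lkfs, target_lkfs=TARGET_LKFS):
    """Static gain a normalizer would apply to bring a program to the target:
    loud material is pulled down, quiet material is pushed up."""
    return target_lkfs - program_loudness_lkfs

print(correction_gain_db(-18.0))  # loud program:  -6 dB (smashed down)
print(correction_gain_db(-31.0))  # quiet passage: +7 dB (the fifty-foot crickets)
```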

He summed it all up by saying (tongue-in-cheek) "dynamic range is just a bad topic" to great laughter from the audience.

One exception he cited was with NBC. They won't touch mixes once everything is anchored correctly. Affiliates are instructed to leave the mixes alone.

With regard to the CALM Act, he said "anything with dynamics equals flat sound".

Closing his presentation, he spoke about the RADAR controller with touch screen, Studer mixing consoles, and Slate Digital's RAVEN.

Sy thanked the audience. He invited anyone attending future AES conventions or conferences who is interested in making a presentation to let the executive committee know.

Rob called all five presenters up and awarded them AES certificates and mugs.

Rob thanked everyone for attending.

Written By:
