About

Pacific Northwest AES Section Blog

Upcoming Meeting: When Timing Audio For Video Becomes Impossible aka The Impossible Will Take A Little Longer

April 25, 2018 at 7:30 pm

Location: Opus 4 Studios, Bothell, WA USA

Moderated by: Dan Mortensen

Speaker(s): Dr. Michael Matesky, Opus 4 Studios & PNW AES Committee
Grant Crawford, Costco

 Once a year, Costco holds a Manager's Conference in Seattle, WA. A segment of the conference program is a memorial to Costco employees who have passed away during the previous year. The memorial takes the form of video images of those who have passed, along with an audio track that underscores the video. The only stipulation was that the music had to be performed by Costco employees. Simple enough, eh?

This year, there were two songs to be used, Song #1, Going Home, a spiritual, and Song #2, Hallelujah, by Leonard Cohen. Both songs would play against a video track that was being created at Costco HQ in Issaquah.

Going Home was to be recorded in English, French, Spanish, and Korean. The vocal would be added as an overdub, in the home country of the language; i.e. here, France, Spain, and Korea. Grant recorded a basic piano track at Opus 4 Studios without knowing the final tempo or key, and that's what was sent around to each country. Everyone got instructions about microphone choice, processing, etc. Each country would sing the entire song through, and the production team here would pick and choose what parts to use where. Simple enough, eh? What could go wrong?

Hallelujah was to be sung by yet another singer, with a small studio band. After some searching, they found their singer here in the PNW, BUT she had NEVER been in a recording studio. The "band" was guitar, bass, synth/keyboard, and background vocals. With the exception of Grant (synth/keys), everyone was an amateur, albeit a Costco employee. What could go wrong?

Finally, as mentioned before, the songs underscore a video, which was being produced elsewhere at Costco, without the video team really hearing the music AND the audio team not seeing the video. A third party was calling the shots to both, to the extent that the production was evolving somewhat independently in two different but mostly parallel universes. The upshot of this was the audio team getting requests from Corporate that at such and such time, something had to happen, and the audio track would have to morph somehow to accommodate the request. This might be as simple as a fader move, but more commonly, it resulted in arrangement changes on the fly. Ohhh, of course the deadline is tighter than a <<pick something impossibly tight>>. What could possibly go wrong?

Come to our April meeting and find out how Dr. Mike and Grant made the impossible into a reality. Dr. Mike has still not seen the finished film (or even any of the rough cuts). 

More Information at AES PNW Section Website


Posted: Friday, April 6, 2018


 

Past Event: Roland Cloud and Gig Performer | The Continuing Evolution of the Virtual Studio

February 20, 2018 at 7:30 pm

Location: The Guitar Store, 8300 Aurora Ave N. Seattle, WA 98103 USA

Moderated by: Dan Mortensen

Speaker(s): Brandon Ryan - Roland, Dr. David Jameson - Gig Performer

Over the course of decades, the audio community focused its attention on the tape recorder, and the evolutionary result is the modern Digital Audio Workstation (DAW). Increasingly, the common denominator in music making has been Virtual Studio Technology (VST) instruments and signal processing plug-ins, a standard developed by Steinberg. See https://en.wikipedia.org/wiki/Virtual_Studio_Technology for more details.

Now, subscription services are starting to take over; many computer software licenses are maintained by a monthly payment. Roland Corporation, one of the best-known brand names in synthesis, recently started a subscription service, Roland Cloud, which provides VST instruments modeled (down to the resistor!) on a large assortment of their classic synthesizers, along with other forward-looking products.

One challenge Roland (and many other VST providers) faces is that, to use these VST instruments, the subscriber needs access to, and a working knowledge of, a DAW capable of hosting them before getting to the instant gratification of actually playing and hearing them. The "tape recorder" has taken over the studio.

Enter Gig Performer - A VST host that provides direct and immediate access to these instruments - and any VST processor or utility. When you open Gig Performer, you see your audio and MIDI interfaces on a desktop with connection points. Right click and select the VST you wish to use, connect them up with your interfaces, and you are playing. Near instant performance gratification.

Come see us at The Guitar Store on February 20, 2018 at 7:30 for a full rundown of the future of the virtual studio.

View Official Meeting Report

More Information at AES PNW Section Website


Posted: Monday, February 12, 2018


 

Past Event: To Window, or Not To Window...

January 30, 2018 at 7:00 pm

Location: Digipen Institute of Technology, Redmond, Washington USA

Moderated by: Dan Mortensen

Speaker(s): James D. (JJ) Johnston - retired AT&T/Bell Labs, Microsoft, Bob Smith - Stryker/Physio Control

 To Window or Not To Window

aye, there's the rub!

 

Presented By
James D. (JJ) Johnston - AES and IEEE Fellow
Bob Smith - AES PNW Section
and
The Pacific Northwest Section of the AES

 

 

Tuesday, January 30th, 2018, 7:30pm
Digipen Institute of Technology, Redmond Washington

 

Our January meeting will deal with the matter of windowing in FFT analysis.

Mr. Johnston will explain why windowing exists in FFT analysis, show the properties of a few common windows, and mention when a window might not be the right tool. The talk will be primarily a PowerPoint presentation, with rather a lot of graphs showing what happens when you do and don't window, why windowing is usually a good idea, and when it's actually not.

Bob Smith will add clarity by discussing the practical effects of window choices on audio measurements. He will demonstrate these concepts via several live amplifier measurements.
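The leakage behavior the talk addresses can be sketched numerically. Here is a minimal NumPy illustration (the tone frequency, probe bin, and Hann window choice are this sketch's assumptions, not details from the talk): a sine that does not land exactly on an FFT bin smears energy across the whole spectrum without a window, while a Hann window pushes the skirts far down.

```python
import numpy as np

N, fs = 1024, 48000.0
f0 = 1000.3  # deliberately NOT centered on an FFT bin, so leakage appears
t = np.arange(N) / fs
x = np.sin(2 * np.pi * f0 * t)

def far_bin_level(signal, probe_bin=400):
    """Spectrum level (dB re: peak) at a bin far away from the ~21st-bin tone."""
    spec = np.abs(np.fft.rfft(signal))
    spec_db = 20 * np.log10(spec / spec.max() + 1e-12)
    return spec_db[probe_bin]

far_rect = far_bin_level(x)                  # rectangular window (i.e. none)
far_hann = far_bin_level(x * np.hanning(N))  # Hann window: much deeper skirts
print(f"far-bin leakage -- rectangular: {far_rect:.1f} dB, Hann: {far_hann:.1f} dB")
```

The rectangular window's first sidelobe sits only about 13 dB below the peak and rolls off slowly, which is why the unwindowed far-bin level stays relatively high.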

Our Presenters

James D. (JJ) Johnston received BSEE and MSEE degrees from Carnegie-Mellon University, Pittsburgh, PA, in 1975 and 1976 respectively. He retired in 2002 after working 26 years for AT&T Bell Labs and its successor, AT&T Labs Research. He was one of the first investigators in the field of perceptual audio coding, one of the inventors and standardizers of MPEG-1/2 Audio Layer 3 and MPEG-2 AAC, as well as of the AT&T Bell Labs/AT&T Labs Research PXFM (perceptual transform coding) and PAC (perceptual audio coding) codecs and the ASPEC algorithm that provided the best audio quality in the MPEG-1 audio tests. Most recently he has been working on auditory perception of soundfields, electronic soundfield correction, ways to capture and represent soundfield cues, and ways to expand the limited sense of realism available in standard audio playback for both captured and synthetic performances. He was most recently employed by DTS Audio and is now retired. Mr. Johnston is an IEEE Fellow, an AES Fellow, a NJ Inventor of the Year, an AT&T Technical Medalist and Standards Awardee, and a co-recipient of the IEEE Donald Fink Paper Award. He has presented many times for the PNW Section, most recently on the issues surrounding "Dynamic Range." In 2006, he received the James L. Flanagan Signal Processing Award from the IEEE Signal Processing Society, and he presented the 2012 Heyser Lecture at the AES 133rd Convention: Audio, Radio, Acoustics and Signal Processing: the Way Forward.

Bob Smith has a BSEE from the University of Washington and has worked in the biomedical industry for over 45 years. He has spent the last 20+ years developing acoustic research and audio engineering disciplines for Stryker / Physio Control (formerly Medtronic / Physio Control) to improve speech intelligibility for medical device voice prompting and voice recording systems in noisy environments. He is responsible for voice prompting in 30+ languages. The department now handles acoustic measurements of components such as drivers and microphone capsules, and system measurements including Thiele-Small parameters, polar plots, waterfalls, frequency response, impulse response, several speech intelligibility methods, etc. When he's not playing acoustic/audio monkey for his corporate master, he runs an acoustic lab, SoundSmith Labs. From time to time, he can also be found recording local musical talents. Currently he is comparing several hardware and software acoustic/audio measurement systems to assess how much they vary and to what degree they converge on similar results, along with noise assessments and their effect on speech intelligibility.

View Official Meeting Report

PNW Section Website


Posted: Monday, January 8, 2018


 

Past Event: Distributed Mode and Balanced Mode transducers

December 5, 2017 at 7:30 pm

Location: Woodinville, WA USA

Moderated by: Dan Mortensen

Speaker(s): Marcelo Vercelli, Tim Whitwell, Tectonic Audio Labs

 Our December meeting will be held at Tectonic Audio Labs (TAL), located in Woodinville for a tour and presentation on bending wave technology as used in audio transducer and system designs. The basics of the technology will be explained. In addition, two primary embodiments will be described, including DMLs (distributed mode loudspeakers) and BMRs (balanced mode radiators), with applications of these devices.

In addition to exploring DML and BMR transducers, we'll take a look at how these devices are measured. Both of our presenters are highly skilled in acoustic and electronic measurements. TAL has an anechoic chamber, with an arced microphone array, and a battery of software-based measurement tools.

Demonstrations will be conducted of the DML sound reinforcement system and several BMR based products (commercial and consumer).

Tectonic has both a branded audio business and an OEM business.

The Tectonic branded business is primarily in Pro Audio, with new products for the contractor market coming in the near future. Currently the branded products have applications in House of Worship, sports venues, government chambers, theaters, production work, etc.

OEM applications include DMLs and BMRs. Exciters for DMLs are currently in production with Lufthansa business jets (interior panels) and Wayne/Dresser fuel pumps (audio from the plastic front panels). BMR applications are currently in automotive (Bentley Bentayga and GT), consumer AV products (e.g. Q-Acoustics Media 4 soundbar), consumer safety devices (e.g. smoke alarms) and other products.

 

Our Presenters

Marcelo Vercelli is the CTO of Tectonic Audio Labs and owner of Chameleon Labs LLC. Marcelo has been designing and manufacturing professional loudspeaker systems since 1988. He founded two companies focused on the professional audio market segment and has been awarded five U.S. patents in the areas of transducer technology and loudspeaker acoustics. Over the last ten years, he has focused on near-field and mid-field studio monitoring systems, having served as the Director of Engineering at Event Electronics. For the last five years he has collaborated on the development of bending wave technologies and associated audio system designs, including DMLs and BMRs, while serving as CTO for Tectonic Audio Labs. He has experience in product development, industrial and mechanical design, acoustic and electronic engineering, and manufacturing engineering.

Marcelo's specialties include analog and digital audio system development, complex active loudspeaker system design, acoustic measurement and testing, transducer design, manufacturing engineering systems and strategies, and test systems for automated transducer and electronics production lines.

Tim Whitwell, MPhys, is Vice President of Engineering at Tectonic Audio Labs. During a 20+ year career in the loudspeaker industry, Tim has designed a wide range of award-winning BMR and DML transducers and systems for the Hi-Fi, home theatre, professional audio, TV, and compact portable sectors. He specializes in DML and BMR technologies and the simulation of transducers and electro-acoustic systems. Tim is the cited inventor on a number of patents in the fields of acoustics, transducer design, and haptic feedback technology.

His specialties include acoustic design, simulation and measurement, loudspeaker system design and tuning, transducer design and optimization, magnetic finite-element analysis, DML (distributed mode loudspeaker) design, BMR (balanced mode radiator) design, laser vibrometry, and COMSOL Multiphysics FEA, specializing in acoustic-structure interaction and electromagnetics.

View Official Meeting Report

Directions and more info


Posted: Monday, December 4, 2017


 

Past Event: The Brain-Music Interface: The Encephalophone

October 25, 2017 at 7:30 pm

Location: Digipen Institute of Technology, Redmond, Washington USA

Moderated by: Dan Mortensen

Speaker(s): Thomas A.S. Deuel - MD, PhD

 Can our brains be somehow connected to and directly operate a musical instrument (with the aid of some electronics)? Our October meeting explores that premise with Dr. Thomas Deuel, of DXARTS at the University of Washington. Dr. Deuel will discuss his brain-music interface: The Encephalophone.

The Encephalophone is a hands-free musical instrument. It uses EEG "brain-wave" measurements to allow users to generate music using only thought control, without movement. It is based on using Brain-Computer Interfaces (BCIs) to harness the electrical brain signals to create music in real-time using conscious "thought" control. It has been experimentally shown to work with reasonable accuracy, and is being used in clinical trials with patients with motor disability caused by stroke, MS, ALS, or spinal cord injury to enable and empower these patients to create music in real-time without needing to move.

Dr. Deuel plans to give a talk and screen video examples of the Encephalophone at work, followed by Q&A. Come and enjoy!

Maybe, just maybe, a few folks will share a few minutes of their experience at this year's AES Convention....

Thomas A.S. Deuel, MD, PhD

Musician/Sound Artist, Neurologist, and Neuroscientist. He has been a University of Washington Acting Assistant Professor at the UW School of Music's DXARTS (Center for Digital Arts and Experimental Media). He is also Staff Physician/Neurophysiologist for Swedish Hospital, Seattle, WA. He has received several awards and grants for his work, and has given several invited talks on the Encephalophone. Some of his creative works may be found at www.deulingthumbs.com

Other Business: Student Recording Competition See: http://www.aes.org/sections/pnw/src2017/

View Official Meeting Report

PNW AES Website


Posted: Wednesday, October 11, 2017


 

Past Event: A Pictorial History of the Columbia Records 30th Street Studio

September 27, 2017 at 7:00 pm

Location: Redmond Washington, USA

Moderated by: Dan Mortensen

Speaker(s): Dan Mortensen - Friends of the 30th Street Studio, Dansound Inc Seattle

 A Pictorial History of the Columbia Records 30th Street Studio

 

Presented By
Dan Mortensen - Dansound Inc/Fo30St
And
The Pacific Northwest Section of the AES

 

 

Wednesday, September 27th, 7:30pm
DigiPen Institute of Technology - Michaelangelo Room

 

Directions to DigiPen

When our friend Frank Laico told us at Section Meetings between 2008 and 2012 about his career in recording, he described a wonderful yet somewhat inscrutable place called the 30th Street Studio. It was located in Manhattan NYC at 207 East 30th Street.

It was described as an abandoned Armenian Church that Columbia turned into a recording studio where marvelous sounding recordings were made for over 30 years, and which was recognized as so perfect-sounding immediately upon acquisition that the decree went out: Don't change anything in it, don't wash the floors or paint the walls or fix it up in any way. Leave it like it is!

It was also described as a gargantuan space (100' x 100' x 100') that had perfect reverberation.

We know that a remarkable number of extraordinary-sounding recordings came out of it that we still listen to today, but the particulars of its structure and spaces were lost to demolition, and as we asked more questions the details got fuzzier.

Dan Mortensen, co-moderator of our meetings with Frank, became more and more interested in those nagging details and has been pursuing research into it ever since his first meeting with Frank in December 2008. Dan has founded a group to memorialize the studio called Friends of the 30th Street Studio (Fo30St), and has held four meetings in New York since 2012 in which people who worked there or are interested in its memory gather to discuss it and to see the fruits of Dan's and other people's research and share memories.

The truth is that Frank was remembering through nearly 50 years of memories without much physical evidence, and as entertaining and plausible as the memories were they were not fully accurate. The spirit was entirely correct but not the complexity of reality.

Come join us and see and hear our current understanding of what the studio was over its 33 year life span and how it came to be. There will be lots of pictures, not a lot of music as this is about the studio and not its product or its people. The studio story is complicated enough.

More

Wikipedia 
Albums recorded @ 30th St. 
Memoir of Classic Recordings with Frank Laico 
Anatomy of a Session with Frank Laico 
An Evening with Frank Laico

AES PNW Student Recording Competition

We are putting together a competition for student recordings, similar to that at the conventions. There are several categories for recordings, the submission deadline will be sometime in November, and the winners will be announced at the December meeting. The details will be posted here (PNW Website) once things are finalized.

View Official Meeting Report

PNW Section Website


Posted: Wednesday, September 20, 2017


 

Past Event: 3D Sound for Video Games

June 21, 2017 at 7:00 pm

Location: Digipen, Redmond, WA USA

Moderated by: Dan Mortensen

Speaker(s): Lawrence Schwedler, Digipen

 Audio for video games has come a long way since the days of programmable sound generators in the early home consoles. Recent development of virtual and augmented reality for the consumer market has brought about rapid innovation in systems for the realtime rendering of three-dimensional, spatial audio that replicates the way sound waves interact with our ears and head, and with the environment.

This talk will include a brief history of game audio, definition and disambiguation of terminology, and an overview of some of the current methods for rendering spatial audio over headphones and loudspeakers. We will also demonstrate Dr. Edgar Choueiri's BACCH system for binaural audio over loudspeakers, as well as a VR game utilizing a commercial 3D audio plugin. 

Other Business: Our Annual Election will take place at this meeting. If you're a member, please attend to help ensure that we have a quorum.

View Official Meeting Report

PNW AES Website


Posted: Thursday, June 8, 2017


 

Past Event: The Harry Partch Instrumentarium

May 30, 2017 at 7:30 pm

Location: Meany Studio Theater, University of Washington, Seattle, WA USA

Moderated by: Dan Mortensen

Speaker(s): Charles Corey, Curator, University of Washington

 Harry Partch (1901-1974) was an iconoclastic American composer and instrument inventor with a passion for integrating musicians, actors, and dancers in large-scale works of total-theater. He was "seduced into carpentry" by his interest in just intonation and his need to have an orchestra tuned to this system. The instruments are more than just producers of tone, however — each one has an evocative name and dramatic physical presence, and each one puts unique physical demands on the performer. In Partch's book, Genesis of a Music, he writes that the performer of the Marimba Eroica should at times "convey the vision of Ben Hur in his chariot," while a musician playing his Kithara must not "bend at the waist, like an amateur California prune picker," but instead should move with grace and athleticism in a "functional dance."

During his lifetime, Partch composed numerous works for his instruments. Some pieces are straightforward concert music, while others include components of film, dance, or theater. He made great use of the human voice in his music, requiring instrumentalists and singers alike to 'intone' spoken words on precise pitches. His compositions dealt with subjects ranging from ancient Greek mythology and classic drama to his own life experiences as a hobo in 20th century America.

After Partch's death, his orchestra of over 50 unique instruments was left to his dedicated assistant, Danlee Mitchell. Danlee continued performances of Partch's works from his base at San Diego State University until 1990, when Dean Drummond, a former member of Partch's ensemble, became the curator of the instrumentarium. In 2013, Charles Corey, a member of Drummond's ensemble, was appointed curator, and has worked closely with Danlee Mitchell to continue the legacy of Harry Partch.

More info, including the Presenter's bio, directions, and parking info can be found at the Section Website. Pay particular attention to the parking instructions. The campus police are especially vigilant.

Seattle Times article and media: Strange musical creations #E433

View Official Meeting Report

PNW Section Website


Posted: Tuesday, April 25, 2017


 

Past Event: Theatrical Sound Design: Live, Recorded Playback and Generative Methods

April 12, 2017 at 7:30 pm

Moderated by: Dave Tosti-Lane

Speaker(s): Brendan Patrick Hogan, Hogansound

 Brendan Hogan's presentation will examine established methods of theatrical sound design including system design applications, surround and immersive mixing, show control methods, and soundscape design, as well as generative and reactive methods via live software synthesis. Examples of each method will be given via QLab workspaces of fully produced works, and attendees will be provided with download links to code examples. Most software applications demonstrated during the presentation are free and cross platform, or can be installed for limited-time evaluation.

Applications and key topics Brendan hopes to have time to discuss include MIDI, OSC, the Haskell and Ruby languages, AppleScript (with QLab integration), and perhaps a bit of C++ and node.js.
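To give a flavor of what OSC show control involves under the hood, here is a small sketch that hand-encodes a single-float OSC 1.0 message using only the Python standard library. The /fader/1 address is a hypothetical example, not something from the presentation; in practice a sound designer would usually use a ready-made OSC library rather than encoding packets by hand.

```python
import struct

def osc_pad(chunk: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    chunk += b"\x00"
    while len(chunk) % 4:
        chunk += b"\x00"
    return chunk

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying one float32:
    padded address, padded ',f' typetag, then a big-endian float payload."""
    return (osc_pad(address.encode("ascii"))
            + osc_pad(b",f")
            + struct.pack(">f", value))

msg = osc_message("/fader/1", 0.5)
# OSC messages travel over plain UDP, e.g.:
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", 53000))
```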

 

About the Presenter

Brendan Patrick Hogan

Brendan Hogan's work combines practices in composition, sound and media design for theater, dance and film, as well as electronics, programming and show control systems for live and installation/immersive performance. Recent projects include over 50 productions at ACT Theater in Seattle, where he spent six seasons as the Resident Sound Designer, including music for Sugar Daddies with Sir Alan Ayckbourn; original music, sound design and accompaniment for Her Name Is Isaac with The Three Yells; sound design and original music for Red (2012 Gregory Awards winner) at Seattle Repertory Theater; sound design, original music and installation musical instrument design for Chamber Cymbeline at Seattle Shakespeare Company; sound design and show control systems for The Memorandum and The Birds (2016 Gregory Awards winner) at Strawberry Theatre Workshop; and animatronics for Sprawl at Washington Ensemble Theatre.

He is a two-time winner of Best Musical Score at the Seattle 48 Hour Film Project.

Professional Memberships:

  • United Scenic Artists USA Local 829 IATSE
  • American Federation of Musicians/Musician's Association of Seattle Local 76-493

In addition to theatrical work, he performs with Miss Mamie Lavona and The Bad Things.

View Official Meeting Report

More Information at AES PNW Section Website


Posted: Wednesday, April 5, 2017


 

Past Event: Impedance... Because Resistance is Futile!

March 20, 2017 at 7:30 pm

Location: Opus 4 Studios, Bothell, WA USA

Moderated by: Steve Macatee

Speaker(s): Steve Macatee - consultant, Dennis Noson - BRC Acousticians, Colin Isler - Rane, Dana Olson - Olson Systems LLC, Mark Rogers - Greenbusch Group

impedance: the complex combination of DC resistance and frequency-dependent reactance in an AC circuit.

reactance: the non-resistive component of impedance in an AC circuit, arising from the effect of inductance or capacitance or both and causing the resulting current to be out of phase with the electromotive force causing it. 

If that second definition appears bewildering, our presenters will offer enlightenment.
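As a concrete instance of those definitions, here is a short Python sketch computing the complex impedance of a series RLC branch. The component values are arbitrary illustrations, not a model of any real driver:

```python
import cmath, math

def series_rlc_impedance(R, L, C, f):
    """Z = R + j(wL - 1/(wC)) for a series RLC branch at frequency f (Hz)."""
    w = 2 * math.pi * f
    return complex(R, w * L - 1.0 / (w * C))

# Arbitrary illustrative values: 8 ohms, 1 mH, 100 uF, evaluated at 1 kHz.
Z = series_rlc_impedance(R=8.0, L=1e-3, C=100e-6, f=1000.0)
print(f"|Z| = {abs(Z):.2f} ohms, phase = {math.degrees(cmath.phase(Z)):.1f} degrees")
```

At 1 kHz these values give a net inductive (positive) reactance, so the current lags the driving voltage, which is exactly the out-of-phase behavior the reactance definition describes.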

Bob Smith: 
(PNW Section Committee, Physio Control, BS Studios)

 

The topic of impedance comes up in many audio-related discussions and it's safe to say that unless you are an EE, there is a high degree of misunderstanding of this important topic, something worthy of more than just tribal knowledge.

Early on you're told to match impedances; i.e., the source impedance and load impedance must be equal. It seems easy enough to grasp. Then suddenly you're told that the load impedance needs to bridge the source impedance. What's with that? With video and RF, you find that impedance matching does matter... and even the cable has an impedance parameter. If your brain hasn't locked up yet, there is the matter of acoustical impedance.

It's all enough to give you impedance on the brain!

To match or not to match... That is Z question!

Our March meeting covers Impedance in both the acoustic and audio (electrical) contexts.

  • Steve Macatee will provide a brief introduction and definition to set the stage for the four main presenters.
  • Dennis Noson will extend the definition of impedance – a dynamic (vs. static) parameter of pressure waves in the air – to explore how this concept helps in understanding audio system speakers, singers' voices, and acoustic instruments. Using beautiful slides, colorful graphs, and a bit of handwaving, the dynamism of radiated sound will be revealed, whether transduced by audio gear or radiated by tubes and plates – from organ pipes and piano soundboards (not that soundboard), right up to Amar Bose's presumably ideal one-eighth of a pulsing sphere.
  • Colin Isler from Rane will cover the Impedance of Audio Interfaces.
  • Our own Dana Olson will show some plots of impedance curves — perhaps even measure one live to show the process and to hear it sweep. Then he'll talk about what these curves mean, how they are used, and what effect a 4-ohm vs. 8-ohm speaker has on amplifier performance.
  • Mark Rogers will point out the pitfalls in the wiring of distributed speakers, whether wired in series or in parallel or in 25/70 V systems. Problems can manifest themselves not only as large losses, but also as frequency response anomalies that will be different at each speaker.
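A rough numerical sketch of the parallel-wiring and wire-loss pitfalls in the last two bullets (the speaker and wire-resistance values are illustrative assumptions, not from the presentations):

```python
import math

def parallel(*zs):
    """Equivalent impedance of impedances wired in parallel."""
    return 1.0 / sum(1.0 / z for z in zs)

# Four 8-ohm speakers wired in parallel present a 2-ohm load,
# which many amplifiers cannot drive safely.
load = parallel(8.0, 8.0, 8.0, 8.0)

# With 0.5 ohm of wire resistance in a long run (illustrative value),
# the speaker sees a voltage divider: V_spk = V_amp * Z_spk / (Z_spk + R_wire),
# so each speaker down the line plays at a slightly different level.
R_wire = 0.5
loss_db = 20 * math.log10(8.0 / (8.0 + R_wire))
print(f"parallel load: {load:.1f} ohms, level loss at far speaker: {loss_db:.2f} dB")
```

Since the speaker's impedance is not a flat 8 ohms but varies with frequency, this divider loss varies with frequency too, which is the response anomaly Mark describes.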

View Official Meeting Report

PNW Section Website


Posted: Friday, March 3, 2017


 
