Meeting Topic: Lies, Damn Lies, and Specifications
Moderator Name: David Weinberg
Speaker Name: Jonathan Novick, Audio Precision
Meeting Location: Consumer Electronics Association, Washington DC
Lies, Damn Lies, and Specifications
by David J. Weinberg (Chair, AES-DC section)
Jonathan Novick is an experienced electronics engineer, an active member of the Consumer Electronics Association's (CEA) audio standards efforts, and Audio Precision's Director of Sales. He clarified that the title of his presentation (and this report) is a paraphrase of the remark popularized by Mark Twain.
The AES-DC section convened at the CEA's headquarters to hear Novick discuss his perspective on audio-equipment specifications — why specs often don't correlate with perceived performance, and what we could measure that might make specs more meaningful.
Specs are important. Every product has them. Lots of time and money are spent measuring them. People base purchase decisions on them. And yet many people will dismiss them outright.
Novick received his "wake-up call" at Alex Voishvillo's (JBL) February 2007 AES-LA presentation "Assessment of Nonlinearities in Audio — From Harmonic Distortion to Perceptual Models," in which Voishvillo demonstrated high-THD signals that sounded fine and low-THD signals that sounded quite bad. This brought home Richard Heyser's claim that "If it measures well but sounds bad, ... you're measuring the wrong thing!" A corollary: just because no standard measurement identifies a problem doesn't mean there isn't a problem that some people might hear.
Companies need to use design and performance specifications to quantify the fitness of a product to an application, and to ensure the consistency of its manufacture.
Consumers would like specifications and performance measurements that help them make a fair and objective comparison as they try to make an intelligent purchase.
It is well-known that marketing departments prefer to publish equipment specifications that flatter their equipment and give it an edge in comparison with competitors' products. This specsmanship is sometimes technical deception masked in objectivity, and can be defined as the art, skill and manner of enhancing product appeal through the use of misleading or incomplete data. A classic example is a power amp with a deceptively high output rating that was measured with a short pulse at a single frequency, on a single channel, with no corresponding distortion number.
Novick highlighted a currently marketed 'prosumer' amp that initially was claimed to deliver: 300 Wpc (both channels driven into 8Ω, 1 kHz EIA, 0.1% THD), 2100 W (bridged mono into 8Ω, 1 kHz EIA, 0.1% THD), and 3000 W max. Simple engineering analysis shows that the bridged-mono spec violates Ohm's law, and investigation reveals that the amp cannot even be configured into bridged-mono mode (the manufacturer eventually dropped the bridged-mono spec from its documentation). Meanwhile, the max-output spec doesn't restrict the measurement to any test conditions.
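The arithmetic behind that analysis can be sketched in a few lines. This is a minimal back-of-the-envelope check, assuming the amp is purely voltage-swing-limited and that the published 300 Wpc rating reflects its maximum output swing:

```python
import math

# Claimed specs
p_per_channel = 300.0        # W per channel into 8 ohms, both channels driven
r_load = 8.0                 # ohms
p_bridged_claimed = 2100.0   # claimed bridged-mono power into 8 ohms

# Maximum RMS output voltage implied by the per-channel rating:
v_channel = math.sqrt(p_per_channel * r_load)   # ~49.0 V RMS

# In bridged mono the two channels drive the load out of phase,
# so the load sees at most twice the per-channel swing:
v_bridged = 2 * v_channel

# Theoretical voltage-limited ceiling for bridged power into 8 ohms.
# (In practice it is lower still: each bridged channel effectively
# sees 4 ohms and must source twice the current.)
p_bridged_max = v_bridged ** 2 / r_load          # = 4 * 300 = 1200 W

print(f"Bridged ceiling: {p_bridged_max:.0f} W vs claimed {p_bridged_claimed:.0f} W")
```

Even under these best-case assumptions the ceiling is 1200 W, so the claimed 2100 W bridged figure cannot be reconciled with the 300 Wpc rating.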
Various independent and government organizations have attempted to generate meaningful specification standards that might foster fairer comparison among manufacturers' models. These organizations include the Institute of High Fidelity (IHF), the Japan Electronics and Information Technology Industries Association (JEITA), the CEA, Deutsches Institut für Normung e.V. (DIN), and the International Electrotechnical Commission (IEC).
They tried to adequately address such factors as ambient test conditions, device-under-test preconditioning, power-source regulation, output load conditions, and test duration, plus other test parameters such as bandwidth, type of signal detector (average, RMS, peak, etc.), and weighting factor (A, C, flat).
In the 1960s/'70s, over-zealous marketing led to such inflated stereo power specs that the Federal Trade Commission had to create a federal regulation regarding their use. More than three decades later, confusion reigns again, in the home theater market; this time the FTC is not stepping in.
Consumer fraud still exists, but that's not the whole problem. People frequently misinterpret specs, making broad assumptions about a device's performance based on a single-point measurement. Without an accurate understanding of various audio specs, it is easy to make incorrect assumptions about their relevance and draw inaccurate conclusions about how closely standard measurements reflect real-world listening; this is especially true of common distortion measurements.
The underlying questions asked by some equipment designers, reviewers and consumers are: What are specs really telling me? How were the specs created and performance measured? Can I trust published specs to mean anything relating to sonic performance?
Specifications are an attempt to generate an objective abstraction of a subjective experience. Performance is measured with source signals based on simple and complex test tones that, at best, only faintly represent the music and vocal sources that will be fed to the equipment in real-world use.
Novick's interactive presentation, with sonic examples (played back over JBL Eon 210P powered speakers loaned for the meeting courtesy of Greg Lukens, Washington Professional Systems), took a look at the world of specs including the 'art' of specsmanship. Attendees got to compare real-world performance against measured specifications for a variety of circuit problems, hear the lack of correlation, and learn why.
His first examples presented hard-clipping and crossover distortion, and we found we were much more sensitive to a low percentage of crossover distortion than to a high percentage of clipping distortion. He believes clipping distortion is less noticeable because in most recordings clipping is sporadic and masked by our hearing, whereas much musical information is present near the zero-crossing point. He showed this with the statistical distribution of signal levels in music (which has a substantial central peak around zero) vs that of a sine wave (which peaks at the positive and negative maximum levels).
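That level-distribution contrast is easy to reproduce numerically. The sketch below uses zero-centered Gaussian noise as a crude stand-in for music (that proxy is my assumption, not something from the talk; real program material is similarly zero-centered):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
bins = np.linspace(-1.0, 1.0, 21)   # 20 amplitude bins from -1 to +1

# Full-scale sine wave: its amplitude histogram (an arcsine distribution)
# piles up at the positive and negative extremes.
sine = np.sin(2 * np.pi * np.arange(n) / 1000.0)
sine_hist, _ = np.histogram(sine, bins=bins)

# Zero-centered noise as a rough proxy for music: most samples sit near
# the zero crossing, exactly where crossover distortion does its damage.
music = np.clip(rng.normal(0.0, 0.25, n), -1.0, 1.0)
music_hist, _ = np.histogram(music, bins=bins)

print("sine  edge/center counts:", sine_hist[0], sine_hist[10])
print("music edge/center counts:", music_hist[0], music_hist[10])
```

The sine histogram is heaviest at the extremes, while the music proxy concentrates near zero, which is why a small amount of crossover distortion is so audible on music yet barely registers on a sine-based measurement.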
THD+N graphs are almost universally plotted as percentage vs output level, and as Novick pointed out, these graphs can be somewhat misleading. He noted that plotting the absolute level of THD+N (not the percentage) vs output level sometimes reveals perceptible distortion below clipping that the percentage plot obscures.
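The conversion from a percentage curve to absolute level is simple. The numbers below are purely illustrative (not measurements of any real device), just to show how a flat-looking percentage can hide a rising absolute distortion level:

```python
import math

# Hypothetical THD+N-vs-output data: output in volts RMS, THD+N in percent.
output_vrms = [0.1, 0.3, 1.0, 3.0, 10.0]
thdn_pct = [0.05, 0.02, 0.01, 0.01, 0.02]

# Convert each percentage to the absolute distortion voltage it implies.
abs_dist_v = [v * pct / 100.0 for v, pct in zip(output_vrms, thdn_pct)]

for v, pct, d in zip(output_vrms, thdn_pct, abs_dist_v):
    print(f"{v:5.1f} V out, {pct:.2f}% THD+N -> {d * 1e3:.3f} mV "
          f"({20 * math.log10(d):+.1f} dBV)")
```

Between 1 V and 3 V the percentage curve is flat at 0.01%, yet the absolute distortion voltage triples over the same span; that rise is invisible on the conventional percentage plot.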
He continued with the recognition that not all distortion sounds alike, that graphs tell you a lot more than specs, and the admonition to measure where you listen — not just at device limits. Using a slew-rate-limiting circuit, Novick showed graphs of distortion vs test frequency and vs input level, demonstrating these last two points.
It is also important to measure across the operational bandwidth of the tested device, and to be mindful of the test analyzer's configuration. For example, if the analyzer is set up with an A-weighting filter, perceptible distortion products above a few kilohertz (which under certain conditions can cause sonic anomalies) would be under-represented in the measurement.
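To see how A-weighting treats content at different frequencies, the standard IEC 61672 A-weighting curve can be evaluated directly. The formula is standard; the specific attenuation figures it produces are computed here, not taken from the talk:

```python
import math

def a_weight_db(f):
    """A-weighting relative response in dB (IEC 61672 analog definition)."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.00  # offset normalizes 1 kHz to ~0 dB

for f in (100, 1000, 6300, 10000, 16000):
    print(f"{f:>6} Hz: {a_weight_db(f):+6.1f} dB")
```

The curve attenuates low frequencies heavily (about -19 dB at 100 Hz) and rolls off increasingly above roughly 6 kHz (about -6.6 dB at 16 kHz), so spectral components in those regions are weighed down in any A-weighted distortion number.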
Novick believes that the traditional tests in use over the past 50 years are only a starting point. He noted that while op-amp spec packages typically include extensive specs plus more than 40 performance graphs, audio-equipment specs are typically quite sparse. We need better tests; as a start, twin-tone intermodulation tests and burst testing should be employed regularly. He also noted that modern audio analyzers are capable of multitone tests and visual waveform analysis, which can sometimes identify sonic anomalies that traditional tests can't.
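A twin-tone intermodulation test can be sketched in a few lines: feed two closely spaced high-frequency tones through a nonlinearity and look for the difference product far below either tone. The 19 kHz/20 kHz pair is the classic CCIF-style choice; the quadratic nonlinearity standing in for a device under test is my own illustrative assumption:

```python
import numpy as np

fs, n = 48_000, 4_800                  # 0.1 s capture -> 10 Hz bin spacing
t = np.arange(n) / fs
f1, f2 = 19_000.0, 20_000.0            # CCIF-style twin-tone pair

x = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

# A mildly nonlinear "device under test": the second-order term creates
# an intermodulation product at f2 - f1 = 1 kHz, far from either tone.
y = x + 0.05 * x**2

spectrum = np.abs(np.fft.rfft(y)) * 2 / n   # single-sided amplitudes
bin_hz = fs / n                              # 10 Hz per bin
imd_1k = spectrum[int(1_000 / bin_hz)]       # difference-tone amplitude

print(f"IMD product at 1 kHz: {imd_1k:.4f} (each tone amplitude 0.5)")
```

A plain THD sweep at 1 kHz would never excite this high-frequency nonlinearity, yet the twin-tone test drops an easily measured (and audible) product right into the midband.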
There are issues with the implementation of new test specs: Who will benefit from better specs? Who will create them? Who will enforce them? And what will overcome the inertia preventing their development and application?
Government bodies and commercial trade organizations are doing some work in this area, but why isn't the AES doing more?
The bottom line: No current set of equipment specifications tells us how a device sounds. The audio community would be well served to extend its investigations in search of equipment specifications (measured performance) that more closely correlate with what we aurally perceive when playing music and movie recordings.
We thanked Novick for distorting our perception of specifications.