INSIGHTS: Interview with Tom Holman
"Audio Guru" and president of TMH
Corporation
Interviewed by Mel Lambert in August 1996
It is no idle exaggeration to state that Tomlinson Holman's
contribution to our understanding of theoretical and practical audio has had a profound
impact on creators and consumers alike. His distinguished career in audio, video and
film now spans over 30 years; his influence has been felt by virtually all sectors of the
motion-picture production and exhibition markets, in addition to home playback systems. As
a theoretician well grounded in the practical realities of designing and building
professional and consumer playback systems, Holman has the unique and rare ability to push
the state of the art and, in doing so, challenge entire industries to achieve new quality
standards. The legacy of his career is the creation of new markets, and the redefining of
existing markets, by establishing new quality standards and designing products which
perform to them.
All THX patents come directly from Tom Holman's experiments
and research. Serving for 15 years as corporate technical director for Lucasfilm, Ltd., he
spearheaded the conception, development, design, and implementation of the technical
infrastructure for George Lucas' Skywalker Ranch, including the Skywalker Sound
post-production complex. From 1987 through 1992, he also designed many aspects of the
Hollywood Bowl's sound system, including electronic enhancements.
Tom Holman has also been teaching film sound at the USC
School of Cinema since 1987, and is currently an Associate Professor. In late 1995 he
resigned his position with Lucasfilm, to set up and head a new company, TMH Corporation,
to research and develop new professional-audio and consumer-electronic systems. We caught
up with this audio communicator during a rare break from his teaching at USC and ongoing
research with TMH Corp.
Can it really be true that you've been involved
with the audio industry for over three decades?
I started working full time as a professional the moment I got out of college; I
have a degree in broadcasting and communications from the University of Illinois. I
started in EE, and shifted midway through undergraduate school to broadcasting.
Engineering, at that time, was a very narrowly focused view of the world. I wanted to work
at the kind of things I'm doing now- engineering for entertainment, for the arts- but
everyone around me in engineering school seemed to be on the straight and narrow for the
corporate path, and that was something that didn't interest me.
I get the impression that you are more of a
free-ranging spirit, and need to personally enjoy all that you become involved with. You
combine inquisitiveness with the desire to communicate your discoveries. Is that a fair
assessment?
Well, yes, I do enjoy being a university professor. The biggest problem with that job
is that I came to it late in my career. You've so much shared knowledge and belief
structures with your colleagues, and suddenly you have these students who today aren't
even educated in the scientific method. Finding ways to distill [the information] down to
the student level is quite difficult.
During my career I've been in some very different areas;
while they have all been related to audio, they've ranged from film-sound production
[sound mixer at the University of Illinois Motion Picture Production Center] to
high-fidelity equipment, and back to post production and sound systems for theaters. Even
though it is eclectic, the one thing that links it together more than anything else is
inquiry. Once it becomes a boring job, I move on.
Without your focused attention to both technology and
the creative aspects, many developments wouldn't have taken place. You seem to bring an
academic structure to your deliberations, but with an innate fascination with the
applications of advanced technology to audio?
One thing that links all this has been knowledge that already exists; I go to
technical libraries frequently. It isn't so much that you have to be a genius- because I
don't think that I am- just that you do need to know what the developments are in a
variety of fields. While designing a sound system, for example, you're dealing with a lot
of different aspects, from the mechanics of how it works, to the psycho-acoustics of how
it's perceived. Experts get to be experts in fairly narrow fields; they don't have the
time to look at what's been done in parallel fields, or ones that might be relevant.
It's great fun to cross all of these borders and have a
competent conversation with the world's leading people in psycho-acoustics or loudspeaker
design. I'm not a real expert in any of these things; I'm just knowledgeable enough to
talk to people in all those areas.
You spent some time at Advent, designing loudspeakers
and amplifiers?
I came aboard in 1973 as an engineer; the chief engineer left about six months later
and I got his job. The nice thing about Advent is that at that time I could take it as far
as I could. The company was already successful in the video market; therefore the audio
electronics and loudspeakers fell to me and others, including Andy Kotsatos, who is now
president of Boston Acoustics. Those of us who worked at Advent often reminisce about the
experience; the concentration of intelligence was amazing for a few years and crossed so
many boundaries, from sales and marketing to music and engineering- a real amalgamation
of talents.
I started in the film industry [as a mixer] and went into
hi-fi because of the miserable state of sound. I would have stayed if the soundtrack
hadn't wound up on mono Academy optical.
So the sidetrack to Advent was to learn more about
electronics and loudspeaker design?
I wanted to extend what I knew. The reason they liked me over other candidates was my
hands-on approach; I beat out MIT engineering graduates because I could build systems that
worked. At the University of Illinois I did everything: production recording, editing,
mixing for film documentaries. I stayed at school five years after graduation- that's
the time I got to read everything in the library. Today people automatically become
specialized, because the fields grow; people who are in the nitty-gritty of digital audio,
for example, are unlikely to be as expert in psycho-acoustics, which is unfortunate.
So you had a broad foundation. How did the move to
Lucasfilm come about?
Well, they looked for six months for someone to be the chief engineer; the contact
was ultimately made through Dolby Laboratories, who were using my pre-amps and power amps
at their lab. I was with Advent for four years, and left [to found] Apt Corporation, a
company making pre-amplifiers and power amps. Apt got up to about 50 employees, and had
about 30% market share in pre-amps. But the high-end had a "Pre-amp of the
Month" syndrome, which was that certain units would be blessed for a certain time and
then collapse. While ours went on for eight years and are still in service today, it was
a very tough field.
Apt was a 50/50 partnership [Holman served as director of
engineering], which made it very difficult to leave, but I decided to do it because the
opportunity at Lucasfilm was so great. Here was the first movie studio to be built from
the ground up since the Thirties. We had a music-scoring stage, mix-to-picture, sound
editorial- a typical full-function film post-production facility.
Starting in 1980, I got the opportunity to examine the
whole film-making process. Should we buy dubbers? Or should we buy multitrack digital
machines? What technology should we get into? All of this was in parallel to a very
significant project that eventually became DroidWorks; ultimately, it transitioned outside
the company to make digital audio work stations. This was George's dream in 1980, to make
films digitally, save money and increase quality.
In theory, that meant that post-production, editorial and
mixing was going to be all-digital; analog was only meant to be a stop-gap measure. When
we looked at film-dubbing consoles, the audio quality wasn't very good. Then we looked
at music-industry consoles, and found that those built in serial production in relatively
larger numbers were much better quality. So we took a Neve quad console, and reconfigured
it for film-sound mixing by designing a panner- we put over 3,000 man hours into that
console with multichannel panners and the monitor matrix you need for film sound, which
today are off-the-shelf items.
That year and a half gave me the ability to look at
everything in the production and theatrical chain; to clean up parts of it, to adopt
others and, in the case of the theater loudspeaker, to start over. At that time Dolby
Stereo had made such an obvious advance in film-sound quality that the loudspeakers and
room acoustics of movie theaters needed attention. THX was the result of attempting to
improve the overall playback quality of film sound, given the Dolby advances.
Did you continue to work with Dolby on the evolution of
its 4-track 35 mm systems and 6-track, 70 mm formats?
To a certain extent. What we did on "Return of the Jedi" was change the
magnetic oxide. When we looked at the magnetic oxide they were striping on 70 mm prints in
1983, it was equivalent to Scotch 102 from the early Fifties! The problem was there was
only one vendor, but a second vendor was interested and had bought up the old MGM striping
plant. We went with them because they were striping a much better oxide. The interesting
thing was that the difference you got in headroom was 11 or 12 dB- a major improvement.
"Jedi" was pretty good sounding on 70 mm, but
there were still a few orchestral passages where you could hear IM. What was happening was
that the accumulated distortion over all those [dubbing] generations had suddenly become
audible; it wasn't audible on the [print] master but became obvious on the release print-
it was just one generation too many. By the next year we pushed 3M into making much
better mag-film stock for the post-production generations. When we got to
"Indiana Jones and the Temple of Doom," there was an incredible passage for IM
where the bass drum is banging with choral music above it- the [new formulation] passed
the test, and it did not intermodulate audibly.
The biggest problem in the chain was the theater
loudspeaker. The [Altec] A4 was developed in 1947, and was very good for its era. But by
the Seventies it was way out of date. It was a chicken-and-egg situation, with
Hollywood using it because it had 80% market share in theaters. I set out to design a more
compact, better-sounding system.
A large amount of what makes [Lucasfilm's THX System] work
was researched in the library by patching together various developments- spotting the
right ones, discarding the wrong ones, and stitching it all together into a comprehensive
system. THX's stock in trade was to get the room acoustics right, and then apply a good
sound system. Then you make the translation from the dubbing stage into the theater sound
far better.
In other words, get the room sounding right, and then
put in a system that will work correctly in that environment?
Right. There were some obvious observations. I took home a copy of "Star
Wars" and listened on a wide-range system that went down really low. There were
rumbles on the soundtrack that would cut in and out from one shot to the next- the
mixers would never have left that there if they'd heard it at the dubbing stage! So the
first THX design was merely an attempt to build a system for the dubbing stage.
Was there an attempt to keep the design relatively
simple? It's a pretty basic, two-way system, with a patented active crossover design?
Yes, it's all in the two-way active crossover. As a matter of fact, I did look at
three-way designs rather extensively- the later-generation Apogee Sound design for
smaller rooms uses a three-way configuration- but a two-way was serviceable. I couldn't
figure out a way to get the three-way to work as well- I couldn't get a match in
amplitude, phase and directivity plus timing in that environment. Given the screen and its
action, the top octave response of modern compression drivers is ragged but is there. It
would be nice to have some device that worked as a piston at those frequencies- you'd
get more consistent results- but I still don't know of one.
But isn't the key aspect of the THX design that it uses
both electronic and acoustic functions at the crossover point? Using the propagation
properties of the LF drivers and the 90-by-40-degree MF/HF horn to dramatically smooth the
transition between the two components' frequency ranges?
It's not really a new idea; Don Keele wrote about it in an AES pre-print, and it was
used in the standard JBL studio monitor with a bi-radial horn. JBL built that horn to
match the directivity of the 15-inch driver at the crossover. The directivity index increases the
amount of the direct sound to the user- which helps us localize sounds- compared to
the reflected sound. What people don't realize about film-sound systems is that we are our
own worst enemies in terms of intelligibility! We put up all this interfering music and
effects in front of the dialog, which gets buried. If you don't deliver it with a
sufficient direct-to-reflected ratio, and an adequately low background-noise level, you
are going to lose it in the mix.
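As a back-of-the-envelope illustration of the point about direct-to-reflected ratio (not from the interview itself), the standard Sabine-based critical-distance formula shows how a higher directivity factor Q pushes out the distance at which reverberant sound overtakes direct sound; the room figures below are hypothetical:

```python
import math

def critical_distance_m(q, volume_m3, rt60_s):
    # Sabine-based estimate of the distance at which direct and
    # reverberant levels are equal: d_c = 0.057 * sqrt(Q * V / RT60).
    return 0.057 * math.sqrt(q * volume_m3 / rt60_s)

# Hypothetical 2000 m^3 auditorium with a 1.0 s reverberation time:
omni = critical_distance_m(1, 2000, 1.0)    # ~2.5 m for an omnidirectional source
horn = critical_distance_m(10, 2000, 1.0)   # ~8.1 m for a Q=10 horn
print(round(omni, 1), round(horn, 1))
```

Roughly tripling the critical distance with a directional horn means far more of the audience sits where dialog arrives predominantly as direct sound.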
It's not only a directivity match, but the phase alignment
at the crossover point. To be in-phase at the crossover, Linkwitz-Riley filters depend on
the acoustical response and not just the electrical response. The system's electrical
response is buttered up in order to make it come out such that acoustically it is
fourth-order, which means tailoring the crossover to the exact drivers. And that's what
hardly anyone does.
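The fourth-order acoustic target Holman describes is conventionally a Linkwitz-Riley alignment. A minimal sketch of its idealized magnitude behavior follows; these are textbook formulas, not TMH's actual filter design, and the 800 Hz crossover frequency is an assumed example:

```python
import math

def lr4_lowpass(f, fc):
    # Magnitude of a 4th-order Linkwitz-Riley low-pass: the square of a
    # 2nd-order Butterworth, so it sits exactly -6 dB at the crossover.
    r = (f / fc) ** 4
    return 1.0 / (1.0 + r)

def lr4_highpass(f, fc):
    # Complementary high-pass section.
    r = (f / fc) ** 4
    return r / (1.0 + r)

def to_db(x):
    return 20.0 * math.log10(x)

fc = 800.0  # hypothetical crossover frequency, Hz
lp, hp = lr4_lowpass(fc, fc), lr4_highpass(fc, fc)
print(round(to_db(lp), 2))   # each branch is -6.02 dB at fc...
print(lp + hp)               # ...yet the in-phase sum is exactly 1 (flat)
```

Each branch is 6 dB down at crossover, but because the two branches are in phase their sum stays flat, which is why the electrical filters must be tailored to the specific drivers until the acoustic response hits this target.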
You have also emphasized the importance of using a hard
defined center channel while mixing for film and video; and ensuring that three identical,
front-channel loudspeaker systems are used for domestic playback.
Of course that's not a new concept; it started in 1932. The biggest problem is that
even though center-channel loudspeakers are selling in large numbers for the home now,
people think it's the dialog channel- that it has different requirements than the
others. The truth is that it should be perfectly matched to the other channels, because
sounds move. What most consumers don't understand is that the music elements are made
three-channel, and ought to be reproduced via a three-channel system.
You also make a good case for using more than the
traditional four LCRS channels for playback.
And of course that's what was found in the Thirties, where three front-oriented
channels were used for orchestral music playing in concert halls; interestingly they
really had a surround channel. If you look at the drawings, they used microphones placed
close to the orchestra, and picked up relatively little hall sound. (Today,
if we were doing a two-channel recording there would be mikes capturing the hall sounds.)
What they did was to play it back in another hall. They now
had the direct sound of the orchestra picked up from the original hall and created the
reverberation in the receiving hall. This was so much smarter in 1932 than what we do
today to trick the consumer into thinking they need added reflections in their playback
equipment using DSP algorithms, which is foolish.
You're no longer associated with Lucasfilm. Tell me
about your new focus these days. How did TMH Corporation come about?
I was an employee of Lucasfilm until 1987, and then a consultant until a couple of
years ago when my company became a contractor. The employment and consulting was exclusive
to Lucasfilm until about a year and a half ago. But it was a pretty tough life commuting
every week from Marin to Southern California; being down here [at the USC Campus] three to
four days a week and up there for many years got to be quite a drag.
What was behind the foundation of TMH? To give you a
vehicle to explore new professional and consumer electronic designs?
I felt it was constraining to have to justify everything to George's executives and
so forth. Now I am working in a number of fields, including multi-channel audio. I'm
making professional products that probably wouldn't offer big enough markets to interest
companies like Lucasfilm- one thing we're making is a multi-channel panner. And there
are other products I plan to make in that series which will be relatively simple and
inexpensive. People have already asked me for a six-channel fader.
Are you going to be handling manufacturing of these
products?
While we won't physically manufacture them, we will certainly QC and pack them. The
THX crossover is made from a kit of parts that are QC'd by Lucasfilm; an outside firm
stuffs the boards, wave-solders the components, and then Lucasfilm tests them. TMH would
also do that. If I'm going to put my name on it I'm going to have to test it myself.
You have two other partners in TMH Corporation: Fritz
Koenig and Ross Hering. What is their particular expertise?
Fritz and I have been working together since 1980. He ran Apt in my absence and then
came to work at Lucasfilm in the Theater Alignment Program [TAP], where he became the
database expert. We were using a big UNIX database to nail down the parameters of theaters.
Ross was the third employee of TAP, and later ran the laser disc program at Lucasfilm; he
has good contacts in the studios and is well respected for his managerial skills.
One of your major projects is the Microtheater
monitoring system. How did that come about?
The Microtheater desktop-based dubbing project is a monitor system that allows true
film mixing in a DAW environment. C41, as it is called, will let you make competent temp
dubs, pre-mixes, print masters, etc. In this way, facilities can do a lot of things that
traditionally require dubbing stages; time required on the stage can be spent instead in
editing rooms. We have some tricks that get rid of the room-acoustics problems of small
rooms, and for scaling the sound from one environment to another.
Today in Hollywood it's largely perceived that digital
audio work stations are for editorial work and not for mixing. But, as these DAWs become
more sophisticated, with control surfaces and more automation, it's clear that they are
capable of doing more jobs- especially when they have a multi-channel panner. So the
direction we're headed in is for the sound designer/editor/mixer to produce more finished
goods in the editing room, and to be able to do that [simultaneously] in multiple rooms.
Because of the nature of the system in smaller rooms, you
have to employ a phantom center. You asked me earlier about a dedicated center channel.
I've come full circle; now I have a phantom center. If you're doing 4-channel work, C41
uses left and right satellites, a subwoofer and one surround; two surrounds if you're
handling five-channel dubs. It's currently mostly analog electronics, although there is
some digital in it. We intend to incorporate more DSP as time goes by.
One of the jobs it's doing is lengthening films for
television, by extending ambiance and dialog, for instance. A [Studer Editech] Dyaxis
workstation is used in that case to edit the transitions and perform scene extensions;
getting everything to match across the scene extensions is quite a trick. C41 is the
monitoring system. We also have a system working with a [Digidesign] Pro Tools five-channel
system with multi-channel panning; from what we have heard this week, [the results] are
translating well to the dubbing stages.
The C41 Microtheater monitoring system will be leased; it
comes complete with everything you need, and TMH will handle final tuning and adjustment
of the playback environment. Obviously, we can't fix background noise and other big
problems- like the editor next door with a louder sound system than you- but we do fix
the standing waves and so forth in small-room acoustics. We have very tight control over
its installation because we're supplying the hardware. There are various application areas
for C41: post production, DDD Compact Disc and CD-ROM mastering, for example.
One of my problems might be convincing accountants and so
forth that these far-out ideas I have are practical. We have a way around that. The
National Science Foundation has just made USC The Engineering Research Center for
Multimedia- we beat out 140 other schools. It's about $12 million of Federal Government
money, which has opened the flood gates on donations from industry totaling about $35
million for the first 5 years through USC. So we can do the more far-out research in
cooperation with the USC engineering faculty.
You and John Eargle recently organized IAMM '96,
"International Alliance for Multi-Channel Music," a two-day colloquium that
looked at the basics of multichannel audio and hardware requirements for studio and consumer
users. At the time, I quoted you as saying that "The challenge for the music and
recording industries is to define the best audio-only uses for such emerging technology
[as DVD], and to prepare the infrastructure required to service and expand the existing
market for multi-channel music."
There was some time pressure to get something to happen, because the theory was that
people in smoke-filled back rooms were going to make a decision on the audio-format
Digital Versatile Disc. We wanted to be certain that all opinions were aired and given an
overview, including the use of 5.1-channel formats with split surrounds, and more.
I may be seen as tilting at windmills when I ask for more
than 5.1 channels on a new Audio-DVD format, but really I'm not. What I'm trying to do
there is to produce some kind of future-proofing of this disc medium. There are three
things that determine the bit rate on an audio disc: sample rate, word length and number
of channels.
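Those three factors multiply directly into the payload rate, as a quick sketch shows; the format figures below are illustrative examples, not a DVD specification:

```python
def pcm_bit_rate(sample_rate_hz, bits_per_sample, channels):
    # Uncompressed PCM payload rate in bits per second.
    return sample_rate_hz * bits_per_sample * channels

# CD-style two-channel audio:
cd = pcm_bit_rate(44_100, 16, 2)        # 1,411,200 bit/s
# A hypothetical six-channel release at 48 kHz / 20-bit:
six_ch = pcm_bit_rate(48_000, 20, 6)    # 5,760,000 bit/s
print(cd, six_ch)
```

Because the three terms simply multiply, every doubling spent on sample rate or word length is a doubling unavailable for additional channels, which is the trade-off Holman is arguing about.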
While we could go on all day about sample rate and word
length, the truth of it is that they will make a very small difference to most people
compared to what 16-bit and 44.1 kHz can achieve if done properly. What will make a bigger
difference than anything else is the number of audio channels; how many we have and how
they're disposed. As I have pointed out in various articles, three or five up-front
channels and split surrounds can achieve wonderful results for music, while overhead
channels- where available- can add enhanced realism.
From a psycho-acoustic point of view- that your
perception is better in some parts of space than others- you should put more channels in
the front hemisphere than in back. (Not that you don't want them in the back; it's just
more important in the front sector because you have better resolution in that sector.)
There's a lot of evidence which says that more channels would be nice- dating back to
the work done at Bell Labs in the Thirties on up through [the late] Michael Gerzon's work.
But vinyl, FM broadcasting and the Compact Disc only offer two channels, so we have become
used to two-channel stereophony. But now there are maybe 20 million home theaters around
the world that can play back at least four-channel [Dolby ProLogic] soundtracks; and we
now have a greater potential of using such 5.1-capable systems with audio-DVD.
How would you categorize the improvement in replay
quality moving from two-channel stereo to "5.1" formats?
It is plainly audible to the casual observer. Most people who get a surround-sound
system tend to exaggerate the surrounds- we are in the "ping-pong" era. Yet
when it's done right, the [surrounds] are probably unnoticeable a good deal of the time.
As you add more channels, everyone can hear a difference
from one to two; no question. Under the right comparisons, everyone can hear the
transition from two to five but I don't know where we reach the point of diminishing
returns. It's clearly above five channels. In terms of sample rate and word length,
increases here only satisfy a vanishingly small part of the audience. Going from 48 kHz to
96 kHz is so infinitesimal that it's simply not worth the bits, compared to adding more
channels.
And what about enhanced bit resolution, from 16- to
maybe 20- or 24-bit?
Well, the difficulty with that is that we need extra bit capacity for the recording
and mastering medium, compared to the release medium. In the early days, we had a
two-channel, 16-bit Soundstream recorder and a two-channel, 16-bit release; no problem.
The first time you use a 16-bit multitrack recorder you've got a problem [because] you
can't have a 16-bit result if you add channels together. The professional always has to
have greater word length than the release format. So it's kind of crazy to jump the
release format to 24 bits because you'd have to increase the professional requirement to
something much greater than that.
Recent papers have shown that 16 bits are inadequate
because the noise level is audible in the frequency range around 2 kHz, where we are most
sensitive. When the replay level for zero dB FS is 120 dB SPL, okay. But how many people
play their systems aligned so that 0 dB FS = 120 dB SPL? So if you turn it down from there
and you have 100 dB- those are pretty loud levels. Not levels you use in studios but
levels civilians use. Then that 16-bit noise is already below minimum audible threshold.
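The arithmetic behind that argument can be sketched with the textbook PCM dynamic-range formula; this is a simplification that ignores dither and noise shaping, and the alignment levels follow the interview's example:

```python
def dynamic_range_db(bits):
    # Textbook figure for PCM: full-scale sine to quantization-noise
    # power, 6.02 * N + 1.76 dB.
    return 6.02 * bits + 1.76

def noise_floor_spl(alignment_spl, bits):
    # Acoustic level of the quantization noise when 0 dB FS is
    # reproduced at alignment_spl.
    return alignment_spl - dynamic_range_db(bits)

print(round(dynamic_range_db(16), 1))      # 98.1 dB for 16-bit PCM
print(round(noise_floor_spl(120, 16), 1))  # 21.9 dB SPL at a 120 dB alignment
print(round(noise_floor_spl(100, 16), 1))  # 1.9 dB SPL when turned down 20 dB
```

At a domestic 100 dB alignment the quantization noise lands around 2 dB SPL, at or below the threshold of hearing, which is the substance of Holman's objection to the "16 bits are inadequate" papers.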
I think there will be a mass market in five-channel
material simply because you've got millions of people asking: "How do I light up
these other loudspeakers?" And not being satisfied necessarily with the results
they're getting now.
What would you conclude from March's IAMM '96
conference? Was it timely?
Oh, absolutely. The interest level was higher than we thought. We really struck pay
dirt in the idea of it not being an engineering conference. We had people from many
countries and from parallel industries, but they were all talking about multi-channel
formats. George Massenburg [engineer/producer and president of GML Laboratories] was one
of our advisors, so he got comp'ed. But he sent us a check afterwards because he said it
was one of the best events he'd ever been to! Probably the most important session was the
business discussion, including representatives from [record] labels. The session was
scheduled to go 90 minutes, take a lunch break and then another 90 minutes- in the end,
they went for six hours straight and had their lunch brought in!
Do you plan to organize another one?
Yes, we are currently formalizing that. We think it will bracket the AES Conference;
we're not trying to compete with AES in any way, but make it easy for foreign guests. One
of the most important things we did was the multi-channel demos with multiple replay formats.
John [Eargle] and I also co-chair the AES Subcommittee looking at new multichannel DVD,
amongst other considerations. We had another big meeting at the recent AES Convention [in
Copenhagen], at which the European contingent raised a lot of [reactions] to the proposal
that had come out of the L.A. group. The fact that DVD is behind schedule is actually very
helpful, since the pressure is not quite as much as it was. It will be several years
before we'll see an audio-only disc.
How will today's audio graduates fare in the future
world? What sort of skills are they going to need? Are we training them in the right
direction?
They're much better trained than they were in my day, because there wasn't any
training in my day. I left engineering in my junior year, so I spent nearly six years
doing the kind of training that today you might be able to get in a concentrated course
at, for example, The University of Miami, because it's focused. Our film graduates [from
USC's School of Cinema-Television] are very different because they are going to be
filmmakers.
It is a very complex environment, and we do things because
we think we can make a contribution. We think that we have expertise in the area to do it,
and so there are many factors involved. I want to do it because I enjoy it, because I find
it interesting, and because I will find out something I didn't know before.
Tom Holman's Nineties Manifestation: TMH Corporation
Tom Holman's latest focus is TMH Corporation, a start-up technology firm that
creates, manufactures, and licenses products for the entertainment industry and related
consumer markets. "A core asset of the new company," Holman offers, "is a
comprehensive system-design philosophy that links artists' original conception with its
final presentation. Initial products are targeted for producers of entertainment software,
whose needs for sophisticated and refined production tools are ever increasing."
TMH Corp. will examine market opportunities created by the
converging cinema, computer and communications industries. "We are actively
developing multi-channel sound products for a variety of professional and consumer
applications. High-definition video systems are also under investigation. In addition, TMH
consults for digital production studios, exhibition, consumer electronics companies, and
high-tech companies wishing to apply digital technology to entertainment."
Offices are located in Marin County, CA, and at the University of
Southern California, Los Angeles. TMH was the first partner of USC's Egg Company 2 (EC 2),
a multimedia laboratory recently opened by the University's Annenberg Center for
Communication. EC 2 is designed to incubate digital business development and to assist
high-tech entrepreneurs with taking their products out of the lab and into the
marketplace.
Company founders include Friederich (Fritz) Koenig, who
first worked with Holman in 1980 as GM of Apt Corporation; the duo were instrumental in
Lucasfilm's Theater Alignment Program. S. Ross Hering is the former director of business
development for Lucasfilm's THX Division, and worked closely with Holman while expanding
TAP and in developing the THX Laser Disc Program.
©2023 Media&Marketing. All Rights Reserved. Last revised:
02.20.23