Mel Lambert Reports


Audio Futures: Is Metadata the Key to Successful Integration?

Las Vegas, Monday April 7, 2003: Analog was (relatively) easy. Select an appropriate connector (pin #2 hot?) matched to a suitable operating level, and we're off to the races. Digital audio isn't so straightforward. These days, with conflicting data-compression and multichannel delivery schemes, the signal's bit/word-clock frequency tells us little about the sampling rate or bit depth. Fold in the possible use of single-bit Direct Stream Digital (DSD) as an alternative to traditional PCM schemes, and we cannot even be sure of the encoding domain.
   The solution, of course, is staring us in the face. Metadata - literally "information about information" - can identify not only the key bit-stream parameters, but also provide alphanumeric file names, file locations, production details and so on. Dolby has developed a system-wide metadata scheme that enables Dolby E and Dolby Digital encoders to control downstream decoders and extract the appropriate discrete digital signals for production and/or transmission. (Both are data-rate reduction technologies that use metadata carried within the bit stream to describe the encoded audio and to convey information that precisely controls downstream encoders and decoders. Normally, encoded audio and metadata are carried together on two regular digital audio channels; metadata can also be carried as a serial data stream between equipment.)
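The Dolby scheme amounts to a small set of parameters riding alongside the audio. As a hedged illustration - the parameter names (acmod, lfeon, dialnorm) follow Dolby Digital's published metadata descriptions, but this is a sketch, not any real encoder or decoder API - here is how the dialogue-normalization value steers a downstream decoder's gain:

```python
# Illustrative sketch of Dolby-style program metadata and the gain a
# downstream decoder derives from it. Not a real Dolby API; the helper
# function and dict below are invented for illustration.

def dialnorm_attenuation_db(dialnorm: int) -> int:
    """Return the attenuation (in dB) a decoder applies for a dialnorm value.

    Dolby Digital carries dialnorm as 1..31, where 31 means the program's
    dialogue already sits at the -31 dBFS reference and needs no attenuation.
    """
    if not 1 <= dialnorm <= 31:
        raise ValueError("dialnorm must be in the range 1..31")
    return 31 - dialnorm

program_metadata = {
    "acmod": "3/2",                  # audio coding mode: L, C, R, Ls, Rs
    "lfeon": True,                   # LFE channel present, i.e. 5.1
    "dialnorm": 27,                  # average dialogue level of -27 dBFS
    "drc_profile": "film standard",  # dynamic range compression preset
}

# A decoder receiving this stream pulls the program down by 4 dB, so that
# material from different sources matches in dialogue loudness.
print(dialnorm_attenuation_db(program_metadata["dialnorm"]))  # -> 4
```

The point is that the decoder never guesses: every level and downmix decision is driven by values the encoder wrote into the bit stream.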
   More accurately, metadata can be defined as structured data about digital (and companion non-digital) resources that might be used to support a wide range of broadcast operations, including resource description and discovery, management of information resources (including rights management) and long-term preservation. In the context of digital resources there exists a wide variety of metadata formats. The Dublin Core Metadata Initiative (DCMI) is an open forum engaged in developing standards for managing directories of companion metadata during acquisition and storage, supporting a broad range of broadcast and production scenarios. One of the specific purposes of Dublin Core, which defines a number of metadata elements for simple resource discovery, is to support cross-domain resource discovery - in other words, to serve as an intermediary between existing community- or application-specific formats.
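A Dublin Core description is simple enough to sketch. The element names (title, creator, date, format) and the namespace URI below are the standard DC elements; everything else - the asset values and the helper function - is invented for illustration, using nothing beyond Python's standard library:

```python
# Minimal sketch: serialize a Dublin Core description of a broadcast audio
# asset as XML. The dc: namespace is the real DC elements namespace; the
# field values describe a hypothetical tape.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def dublin_core_record(fields: dict) -> str:
    """Wrap simple element/value pairs in a Dublin Core XML record."""
    root = ET.Element("metadata")
    for element, value in fields.items():
        child = ET.SubElement(root, f"{{{DC_NS}}}{element}")
        child.text = value
    return ET.tostring(root, encoding="unicode")

record = dublin_core_record({
    "title": "NAB2003 stand-up, tape 14",
    "creator": "Field crew B",
    "date": "2003-04-07",
    "format": "audio/wav; 48 kHz; 24-bit",
})
print(record)
```

Because the element set is deliberately small and generic, a record like this can be read by any system that understands Dublin Core, regardless of which application created it - which is precisely the cross-domain role described above.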
   MPEG-7's Multimedia Content Description Interface standard for audio-visual resources offers several descriptive elements, ranging from low-level signal features (such as colors, shapes and sound characteristics) to high-level structural information. And Material Exchange Format (MXF) defines a number of data structures for networked and archived media. (SMPTE continues to evaluate MXF as a potential metadata standard.)
   In contrast to early uses of metadata to describe databases, these days the term has broadened to include any descriptive information about audio, video and textual elements. But the broadcast industry still faces a major challenge to develop and incorporate standard descriptors and data formats. (Libraries, for example, have developed the MARC formats to encode metadata defined in cataloguing rules and descriptive standards such as the International Standard Bibliographic Description/ISBD. Other industries have defined metadata standards based on Standard Generalized Markup Language/SGML or the more familiar Extensible Markup Language/XML.)
   As will be readily appreciated, metadata can be used to help administer and manage resources by tracking information about their acquisition and current location. And the creation and maintenance of metadata is a vital factor in the long-term preservation of digital resources, helping to preserve the context and authenticity of source and intermediate elements.
   Something tells me that an intimate knowledge of metadata formats, their in-sequence handling and subsequent end-resource implementation is going to be essential for audio professionals involved in the acquisition, production and transmission of broadcast materials. Efficient utilization of metadata will have a profound influence on our industry.
>>Dolby Metadata [PDF] >>Dublin Core Metadata Initiative  >>MPEG-7 Multimedia Content

Assignable Digital Consoles:
Go Deep or Go Long?

Las Vegas, Tuesday April 8, 2003: Although the broadcast and post production industries have fully embraced digital mixing technologies, some crucial operational considerations remain. With assignability one of the key advantages that digital topologies can offer over analog variants - instead of the signal to be modified having to flow through the physical control element, we can lay out the control surface to suit the operator's requirements - just how the designer decides to map the knobs and buttons becomes critically important.
   Setting aside for one moment the logical sequence of controls from the top to the bottom of a channel strip (we normally expect to see controls we adjust infrequently, such as line/mic gain and phantom power, farthest from the operator, and frequently adjusted elements - like the channel fader - right in front of us), how much simultaneous control should we be offered?
   After all, it would be possible to operate an entire multichannel on-air or production console from just one fully assignable fader or rotary control. But that would be absurd. Instead, we need to consider the minimum set of controls to which we might reasonably need simultaneous access, and those that can be safely placed on hidden layers and brought to the surface when needed. (Using, of course, an intuitive labeling scheme and one-button layer access, so that nothing is too far away from the operator's direct control.) Place too few controls on the surface and we run the risk of spending too much of our time delving for the right fader, switch or knob; too many controls and we defeat the purpose of assignability.
   One good example of this "Less is More ... Sometimes" philosophy is evidenced in the new C100 Broadcast Console from SSL that has been designed specifically for use within critical on-air production environments, such as news and sports. Its small footprint and assignable control surface enables fast handling of source and destination routing and mixing to stereo or 5.1-channel DTV surround sound. Operators are presented with immediate access to all channel controls through a Master Channel, with options to define additional "soft" controls according to the input source. (An example: the provision of dedicated access to mic gain for a live source, or stereo balance trim on a VTR return.)
   While the C100's compact control surface removes much of the complexity of a large-format mixing console, its designers have not forced an unusable degree of assignability. The basic layout can range from as few as 24 on-surface faders and associated assignable controls to as many as 96; 48 faders might represent a "typical" layout. The C100 can be configured for 32, 64, 96 or 128 input channels, routing to dedicated mix-minus, program, group, utility and auxiliary outputs.
   An innovative Control Linking feature enables a range of configuration functions to be linked to a specific input or output, such as fader-start GPIs for remote audio and video transports. In addition, channels may be defined as mono, stereo or 5.1, with single fader control, with one-button "unfolding" of an LCRS1S2 component mix for on-the-fly level trim. And an Audio-follow-Video (AFV) function allows the console's audio levels and transitions to be initiated from a vision mixer.
   So, customers that remain unfamiliar with the virtues of assignability can select a surface that better reflects their requirement for many on-surface controls, while operators that are comfortable with the dramatic advantages of assignability - the ability to remain nailed in the sweet spot and bring controls to the central position, rather than moving off-axis to reach the relevant function, being one overwhelming advantage - can opt for a more compact layout.
   And it remains a familiar tale from broadcasters. For early purchases, a cautious approach dictates a console surface that offers more rather than fewer controls; subsequent installations often favor a more radical approach that can take advantage of smaller, more compact topologies. (And which, as a bonus, intrude less on control-room acoustics - an important parameter when mixing in surround sound, where symmetry can raise a number of design compromises.)
>>SSL's new C100 Digital Broadcast Console.

Audio Integration: Just because
you can ... should you?

Las Vegas, Wednesday April 9, 2003: I give fair warning: what I am about to offer will be considered by many to be pure heresy. But please hear me out before making a final decision. I consider Apple's Final Cut Pro to be one of the most creative applications I have ever used. In terms of offering more creative features in a fully integrated, user-friendly package, it's definitely a winner. But, in terms of expanded audio features, I wonder if the recently announced Final Cut Pro 4.0 is heading in the right direction?
   I'd be the first to admit that, usually, bigger is definitely better. However, I am concerned that by offering automated multichannel mixing, plus a plethora of music-related functionality, we risk missing the wood for the trees. Cutting to the chase: even though it might seem appropriate to provide videographers with such production power, is this necessarily the right way to go?
   Without doubt, there are individuals who immediately appreciate the interrelation between sound and picture; how creative editing of the images can be augmented by the creative use of sound textures. And how equalization, dynamics control and ambiences can be used to modify the emotional impact of a carefully crafted soundtrack. But is this appreciation to be found in the majority of video professionals, who usually come from a picture- or graphics-oriented background? Because, just as Adobe Photoshop or After Effects in the hands of a color-blind operator can produce some exceedingly bizarre results, it concerns me that a large palette of sound-modification tools can hinder rather than assist anybody new to the world of multichannel sound.
   After all, there are historical reasons why, during the post production process, the images went in one direction and the audio down another. And not just because the information was carried on different media, interlocked with one another via edge numbers and sprockets, or timecode. I would contend that there is a sensitivity required to edit visuals that is completely divergent from the appreciation necessary to design sound for a video or film project.
   So, in addition to providing a full spectrum of sound-manipulation tools, as is the case with Final Cut Pro 4, I would look for the ability to offload the basic edit sequence via an open-platform exchange format so that an experienced sound designer and/or audio mixer can bring a dedicated skill set to the project. Sound and picture can then be recombined within Final Cut Pro prior to an authoring process that results in the final release print or video program. (And Apple's support of the Advanced Authoring Format/AAF consortium would suggest that the company is cognizant of the collaborative nature of multimedia projects, so that images, text, visuals and sound can be easily exchanged between disparate programs.)
   Now we have a choice of two roads between acquisition and delivery. Truly talented videographers can use Final Cut Pro to prepare a fully-fashioned result with sound to match the images. Individuals whose skills and experience favor image over sonics can use Final Cut Pro to cut and blend the picture while sound is prepared using a variety of current-generation digital audio workstations. (Including, let's whisper it, Apple's recently acquired Emagic Logic series.) Maybe in the not too distant future, given sufficient exposure to Apple Final Cut Pro 4's audio mixing, processing and sound generation capabilities (and we are told that a variety of plug-ins are under development), we might see the emergence of sound-centric individuals that will give up, let's say, Digidesign's Pro Tools for an application that is guaranteed to offer 100% compatibility with their video brethren. And one that enables concurrent film and video production from source to final result. Ah, yes, we live in truly interesting times.
>>Apple Computer's new Final Cut Pro 4.0.

Collaboration Versus Competition: Can We Just Learn to Get Along with Each Other?

Las Vegas, Thursday April 10, 2003: Returning to my theme from the opening day of the 2003 NAB Convention, if I have learned one important thing from this year's gathering, it is that the so-called digital revolution coursing through our industry is going to be a salutary experience. The pros are obvious: enhanced audio quality with unlimited data file transfer and signal processing capabilities, plus dramatic operational flexibility through full assignability of control surfaces. The cons: getting our heads around the dazzling options offered by integrated data transfer and storage. Because, lest we miss the strikingly obvious, manipulating digitized audio involves not only in-progress bit streams but also the massive amounts of stored data that our new digital-radio and DTV infrastructures will be generating.
   In contrast to analog technologies, where we were reluctant to record anything simply because what we got back would be a degraded version of what we committed to the storage media, digitized data is immune from such indignities. But it's not all good news, simply because - to use a much-quoted analogy - digitized data files lack the simple equivalent of a tape box, the label of which provides a human-readable indication of the material contained within. How are we to determine what exactly a digitized file is meant to represent if we lack any knowledge of its origin or intended purpose? With increased sharing of data files via SANs and related topologies, is it any wonder that implementations of a fully integrated digital asset management scheme continue to cause a great many sleepless nights?
   Of course, this is precisely the role to be played by metadata. But who or what is going to generate the necessary standardized descriptors that we need to identify unambiguously the precise nature of the constituent data files? Cutting to the chase, I will admit that the most productive two hours I spent at this year's convention were in the Professional MPEG Forum/AAF Association's "Interoperability Center" in the South Hall. Here were gathered representatives from a dozen or so software developers and hardware manufacturers demonstrating practical realizations of Material Exchange Format (MXF) and the exciting capabilities of Advanced Authoring Format. MXF offers real promise as the digital equivalent of our familiar tape box - structural metadata and descriptive metadata that handle identification of component audio, video and multimedia files - while AAF extends the basic MXF structure with the inclusion of work-in-progress project data.
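The tape-box analogy can be made concrete. What follows is a loose sketch of the idea, not MXF's actual data structures (all class and field names here are invented): the essence file travels wrapped with structural metadata (what the bits are) and descriptive metadata (what the program is).

```python
# Hypothetical model of an MXF-style wrapper: essence plus two kinds of
# metadata. Class and field names are illustrative, not from the MXF spec.
from dataclasses import dataclass

@dataclass
class StructuralMetadata:
    essence_type: str    # e.g. "audio/pcm"
    sample_rate_hz: int
    bit_depth: int
    channel_count: int

@dataclass
class DescriptiveMetadata:
    title: str
    production: str
    notes: str = ""

@dataclass
class WrappedItem:
    essence_file: str
    structural: StructuralMetadata
    descriptive: DescriptiveMetadata

item = WrappedItem(
    essence_file="reel14_mix.wav",
    structural=StructuralMetadata("audio/pcm", 48000, 24, 6),
    descriptive=DescriptiveMetadata(
        title="NAB coverage, final 5.1 mix",
        production="NAB2003",
    ),
)

# A receiving system can answer "what is this file?" without opening the
# essence itself - the digital equivalent of reading the tape-box label.
print(item.structural.channel_count, item.descriptive.title)
```

AAF's contribution, in these terms, is to carry the work-in-progress project data - edits, fades, automation - alongside the wrapped essence, so the next application can pick up where the last one left off.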
   From what I saw, MXF is reaching critical mass in terms of its offering a simple yet feature-rich scheme for interchange of audio-visual material with associated metadata for editing, server-to-server transfers over LAN or WAN systems, archiving, content distribution and asset management.
   Jazzed by my newly extended knowledge, I returned to the exhibit floor seeking out real world examples of AAF to enable projects to be exchanged between digital audio workstations, a capability that would enable a sound designer to use Application "A" to develop textured soundtrack elements, and then return them to the supervising sound editor, who might favor the creative functionality of Application "B." (As the AAF Association states in its various white papers, "AAF simplifies project management, saves time and preserves valuable metadata that in the past was typically lost when transferring program material between applications.")
   Unfortunately, in terms of practical reality, it would seem that AAF is still not yet ready for prime time. I came away from this fact-finding mission mildly disappointed. While market leader Digidesign is planning to offer AAF import/export for the Pro Tools 6.1 digital audio workstation via its DigiTranslator option, few other DAW manufacturers appear to be as focused in their pursuit of a universal Digital Rosetta Stone. (For the record, a new DV Toolkit utility for Pro Tools LE also enables AAF exchange with other applications, including Avid Xpress DV.)
   So, good news for Pro Tools users working in a collaborative broadcast or post environment, but the competition needs to come to the party. With good reason we should be cautious of monopolies. Yet something tells me that the next half-year will see a high level of catch-up - and it did not go unnoticed that earlier this week Apple Computer confirmed its ongoing commitment to AAF and all of its implications.
>>Professional MPEG Forum  >>AAF Association

The contents of this news feature is exclusive to and the sole property of Media&Marketing 2017. All rights reserved.
This material cannot be reproduced in whole or in part without written permission of Media&Marketing.


2017 Media&Marketing. All Rights Reserved. Last revised: 01/20/2009