Thursday, May 31, 2007

An In-Depth Review of Avid Liquid 7.1 (Really informative & a lot of good tips are provided)



What would it mean to your HDV editing workflow if you could insert multiple video formats into a Timeline? I found one answer to this question when I spent several weeks working with Avid's Liquid 7.1.

The question is a natural follow-on to much of what has been covered in the last few months in HDV@Work. When we looked in depth at interlaced HD and how it is processed by progressive displays, the issue at heart was converting one HD format to another. And in the last few issues, we examined media conversion explicitly when I dealt with the use of legacy video. Nevertheless, we never really addressed how various media could be brought together.

If one is working with Final Cut Pro, the solution is to convert the different video formats to the format you will use for editing the majority of your source material. With Liquid, the solution is far simpler -- simply edit with little, or no, regard to format. Right now, I have two HD sequences on my PC. One sequence was created as 720p30 HDV while the other sequence was created as 1080i60 HDV.

The 720p30 sequence contains: 720p30, 720p24 (yes, 24p), 1080i60, 16:9 DV, 4:3 DV, and 4:3 MPEG-1 shot with a cellphone in Malaysia. (I included the latter because it showed Muslim women and their kids enjoying a Christmas exhibit at a shopping mall in Kuala Lumpur at a time when some in America considered such exhibits “offensive” to non-Christians.) The screenshots below show how I customized Liquid's toolbars to meet my editing needs.


A 720p sequence in Avid's Liquid 7.1

The 1080i60 sequence begins with multiple Sony Z1 clips followed by the entire 720p sequence. I simply dragged the 720p sequence from Liquid’s Sequence Rack (bin) into the 1080i sequence.

My goal is to record the first sequence to HD1 (720p) and the second to HD2 (1080i), plus create widescreen DVDs and widescreen DV tapes from both sequences. Everything is ready for export.

Of course, we are way ahead of ourselves in this odyssey. We need to go back to NAB, where I saw an excellent demo of Liquid. I wound up watching the demo as a way to relax after learning Avid and Apple were not yet supporting 720p24 or 1080 24F. The demo quickly took my mind off 24fps. It was clear that Liquid had undergone a beneficial revision of its human interface.

Over almost a decade, I have reviewed what is now Liquid four times. My first review opened with this paragraph. “It's only fitting the FAST 601 is the first MPEG-2 system to undergo an in-depth review in Video Systems, as it is the first MPEG-2 system brought to market. Being first to market is always a mixed blessing. If the market is waiting eagerly for the new technology, being first is great. But when the market doesn't understand the new technology and so hasn't yet come to desire it, being first is problematic.”

So problematic, that by my third review I wrote, “Edition is the name Pinnacle has given to a version of purple -- which is the DV version of an NLE developed, several years ago, for blue. While blue was a work in progress from FAST of Germany, the NLE application developed for blue was shipped on FAST’s 601. (The 601 was a 50Mbps MPEG-2 NLE system that had the misfortune to be released at the height of DV-mania.)” Pinnacle had bought FAST and got 601, purple (the NLE software for the 601 and blue), and blue itself. Pinnacle named them Liquid. (Avid now owns Pinnacle Systems.)

It is ironic that years after FAST developed an MPEG-2 NLE, many of the other NLE suppliers are working feverishly to do the same. To date, this has meant simply hacking support for specific flavors of HDV. Had these companies developed an MPEG-2 engine that worked like a hardware codec -- as did FAST -- all the flavors of HDV would inherently have been supported.

Jumping back a few years, my second review covered Sony’s ill-fated ES-3 NLE. I began the review by saying, “The entire ES-3 is built for Sony by FAST of Germany. The software is identical to that on the FAST Silver (a.k.a. 601). Except for the InnoVision-designed CG, there are no pulldown menus. One either clicks on FAST-designed command icons (many of which you'll never have seen before) or right-clicks on an object to get a list of commands relevant to the object. Sounds neat, but often it is not. For example, after logging clips, I wanted to perform a batch-digitize. I could find no icon that 'looked like' it might do the job. Help told me to 'choose the command Batch Digitize from the Content area shortcut menu.' Duh! So I started right-clicking on every object and surface until I got the correct pop-up menu.”

Avid now refers to this GUI as Classic, and it is still available. For those purchasing Liquid today, the new GUI features a set of menus and is vastly easier to learn.

In my fourth review of Liquid, I described a powerful enhancement. I said, “Liquid Edition PRO significantly enhances the Edition rendering engine to support realtime preview.” As video frames were decompressed, the CPU rendered simple effects. Edition PRO shipped with a PCI card that contained analog I/O plus a GPU chip to co-render effects.

Pinnacle’s documentation (with the word Pinnacle replaced by the word Avid) now covers seven classes of effects: classic clip effects and classic transitions (none of which are realtime, but which can be background-rendered), CPU clip effects, CPU transition effects, GPU clip effects, GPU transition effects, and special effects. Special effects can use the CPU and/or GPU. (The GPU is no longer on a PCI card as the software uses your PC’s graphics processing unit.) The latter five effect classes have the potential to provide realtime previews, but also can be rendered in the background prior to export to tape.

What makes Liquid’s effects set difficult to comprehend is that the same filter can be available in multiple classes. Unfortunately, depending on its class, a filter may have a slightly different name and feature set, may use a different algorithm (and therefore offer a different quality level), and, if used, may preclude certain downstream workflows. For example, the powerful realtime Color Correction function does not support keyframes. If you need keyframes, you will need to use the far less capable CC.

In the next few installments, I will share what I learned about working with Liquid 7.1. I will also cover topics that emerged precisely because of Liquid’s many strengths.

For those who decide to use Liquid, I will provide the specifics of my HDV workflow. And, of course, I’ll let you know how the HDV and DV exports looked -- and how well DVD creation worked.

In the last installment, I asked “What would it mean to your HDV editing workflow if you could insert multiple video formats into a Timeline?” We then began to examine this question by looking at Avid’s Liquid 7.1. I remarked that I had created two HDV sequences using Liquid.

One sequence was created as HDV 720p30 and the other sequence was created as HDV 1080i60. The 720p30 sequence contained 720p30, 720p24, 1080i60, 16:9 DV, 4:3 DV, and QCIF MPEG-1. The 1080i60 sequence began with multiple Sony Z1 clips followed by the entire 720p sequence. I had simply dragged the 720p sequence from Liquid’s Sequence Rack (Bin) into the 1080i sequence.

In this installment, we will look at how I acquired the various source material used in these sequences. I captured, via FireWire, the 720p30, 480p60, 1080i60, 4:3 DV, and 16:9 DV. The QCIF MPEG-1 was imported from a file transferred from my cellphone, and the 720p24 was located on a Focus DR-HD100 hard drive.

Let’s look at how I captured these different formats. If you're working with Final Cut Pro, choose a Capture Preset for each video format you want to capture. If you have 720p24 video, you are currently out of luck because Apple has not yet provided a native HDV preset for that.

Liquid operates very differently. You begin by selecting the Rack you want to hold the captured clips and then select either the EZ Capture Tool or the Logging Tool. As you might guess, the latter tool has the capability to capture a marked video segment, log a segment, or batch-capture the logged segments.

Tip 1: Because the first-generation HDV 720p camcorder does not have timecode embedded in the M2V data stream, the tapes should not be used with the Logging Tool.

Unless one’s PC is storage-limited, there is really no need for the Logging Tool, as the EZ Capture Tool will do all that is necessary to capture an entire tape. For this reason, I used the EZ Capture Tool for all my FireWire input, including 1394 capture from the Focus DR-HD100.

When you bring up the EZ Capture Tool you can use the VTR control to locate the point(s) at which you want the capture to begin. Note I did not select a Capture Preset. Why not? From my point of view, the real question is why do I need to do so with other NLEs? One should not need to do so, given FireWire’s inherent functionality. When a device is connected, software interrogates the device. The device responds with its brand and model number. From this information the software can alter its communication protocol, if necessary, to match the device.

Tip 2: You do need to select the device brand when using the Logging Tool in Liquid. You also have the option of “Live 1394 Capture,” which allows you to add Time-of-Day Timecode to the recording.

The software also checks whether the device is set to send MPEG-2. (HDV devices have a switch that is set to either DV or MPEG-2.) So there is no need for a human to tell the software this basic information. But what about the type of DV or HDV?

Both DV and HDV bit streams carry detailed information about the nature of data carried by the streams. This information tells the software the frame size, aspect ratio, timecode base, frame rate, and interlaced/progressive status. Using this information, the software creates a header for the data that will be moved to a disk file during capture. Thus, once again, there is no need for a human to tell the computer what the computer can learn for itself.
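In code, this “learn it from the stream” approach looks something like the sketch below. It is a hypothetical illustration of auto-sensing, not Liquid’s actual implementation: the capture software reads the stream’s own descriptors and builds a matching file header, with nothing supplied by the user.

```python
from dataclasses import dataclass

@dataclass
class StreamInfo:
    """Properties a capture tool can read directly from a DV/HDV bit stream."""
    width: int
    height: int
    display_aspect: str   # "4:3" or "16:9"
    frame_rate: float
    interlaced: bool
    timecode_base: float

def header_for(info: StreamInfo) -> dict:
    """Build the capture-file header from the sensed stream properties."""
    return {
        "frame_size": f"{info.width}x{info.height}",
        "aspect": info.display_aspect,
        "rate": info.frame_rate,
        "scan": "interlaced" if info.interlaced else "progressive",
        "tc_base": info.timecode_base,
    }

# Values sensed from an incoming JVC-style 720p30 HDV stream -- the
# user never selects a capture preset.
hdv720p30 = StreamInfo(1280, 720, "16:9", 29.97, False, 59.94)
header = header_for(hdv720p30)
```

The point of the sketch is the direction of information flow: every field in the header comes from the stream itself, which is why Liquid can capture without a preset.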

Although 720p24 is not officially supported in 7.1, the software can discover that the timecode base is 59.94fps. The Repeat Flags carried in the data stream are stored to disk along with the M2V data.

Tip 3: Although first-generation JVC HDV devices are capable of outputting 24p via FireWire, Liquid erroneously rejects this data. Of course, since 24p is officially unsupported, this could be fixed in 7.2. For now, only (second-generation) JVC ProHD equipment can be used, unofficially, to play back 24p.

Importing video is just as simple. It made no difference whether I selected a file on my PC’s hard drive or on the Focus drive. During import, the software “auto-sensed” the nature of the video. When you import from a mounted drive, such as the Focus HDD, you are given three options: Linked, Copy, and Move. The first option supports the DR-HD100’s DTE capability as all files remain on the Focus HDD. The second and third options copy or move the files to your hard disk, respectively.

Note that there is no need for video to be “converted” during import. A Liquid Timeline can hold most any type of video: DV and DVCPRO (both can be input and output via FireWire, including transfers from P2-based camcorders), DVCPRO 50 (file import and export only), XDCAM and XDCAM HD (using FireWire-based File Access Mode via Liquid’s XReceive/Xsend capability), MPEG I-frame (IMX), MPEG IBP including HDV, Windows Media, DivX, and MPEG-4, as well as SD and HD 4:2:2 uncompressed video. While SD MPEG-4 is supported, it appears HD is not. Likewise, neither DVCPRO HD nor any of the new AVC codecs are supported for either import or capture. (It would be ideal if the DVCPRO HD codec were available for FireWire export.)

Liquid Pro, which includes an analog SD breakout box for I/O, adds support for composite, Y/C, and SD component I/O. This BOB connects via a USB port so it can be used with a laptop. It is ideal for those who need to work with Beta SP — especially for the export of SD versions of HD productions.

Liquid Chrome employs a PCI card and an SDI/HD-SDI BOB that supports four embedded and AES/EBU digital audio channels. Using HD 4:2:2 uncompressed video, this option provides support for HDCAM, DVCPRO 50, and DVCPRO HD.

After capture or import, Scene Detection can be employed to divide a clip into subclips. This feature worked perfectly with both DV and HDV.

Tip 4: Be sure to request Master subclips and specify handles of at least one frame. By providing handles, Liquid’s Consolidate capability can be used when a project is complete. This function will retain only the video used by the project while removing all unused media. The result can be stored to DVD or, in the future, to Blu-ray discs.

In the next installment, we will explore how all these various formats can be used in a sequence and how a very wide range of productions can be automatically generated from a mixed-format sequence.

In the first installment, I asked “What would it mean to your HDV editing workflow if you could insert multiple video formats into a timeline?” We then began to examine this question by looking at Avid’s Liquid 7.1, with which I had created two HDV sequences. (See Part 2 here.)

One sequence was HDV 720p30 and the other sequence was HDV 1080i60. The 720p30 sequence contained 720p30, 720p24, 1080i60, 16:9 DV, 4:3 DV, and QCIF MPEG-1. The 1080i60 sequence began with multiple 1080i clips followed by the entire 720p sequence. In this installment, we will look at how I created these sequences.

For those of you who have edited DV exclusively, the obvious question is what kind of sequence can hold multiple formats. You have learned, often the hard way, that bad things happen when your sequence preset does not exactly match your DV source in every detail. Obviously, this rule cannot be applicable to an NLE that supports multiple formats in a sequence.

To understand why this rule no longer holds, I’ll ask a rhetorical question. Why do we need to define the nature of a sequence? The answer is that the NLE needs, at minimum, two items of information: image aspect ratio and pixel aspect ratio. The first is needed to define the aspect ratio of the recorder monitor drawn on our computer’s display. (Note that the player monitor’s aspect ratio is defined by the source format(s).) Pixel aspect ratio is required when conversions are made between square-pixel and non-square-pixel video.

You will note that I did not define frame rate to be one of the necessary sequence parameters — and it’s not. It is possible for a timeline to be defined in terms of time. In such a timeline, a second is a second. Liquid operates using this system. The playback engine notes a source’s frame rate and displays frames at exactly the rate necessary for one second’s worth of video to be presented in one second. Likewise, a source’s audio sampling rate determines how many samples must be output per second. This is how multiple frame and sample rates can be mixed within a sequence.

Unfortunately, Liquid does not take this capability to its logical conclusion—a timeline where timecode is fully replaced by “time.” With time notation, a gap from 5.25 minutes to 8.50 minutes would require 3.25 minutes of source. It would make no difference what frame rate were used by the source.
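The arithmetic behind a time-based timeline is easy to sketch. The example below is my own illustration, not Liquid’s playback engine: it schedules display timestamps for clips of different frame rates so that one second’s worth of source always occupies exactly one second of timeline.

```python
from fractions import Fraction

def display_times(start_sec, duration_sec, fps):
    """Timestamps (in seconds) at which each frame of a clip is shown.

    A clip occupies exactly `duration_sec` of timeline regardless of its
    frame rate: on a time-based timeline, a second is a second.
    """
    start, dur, rate = Fraction(start_sec), Fraction(duration_sec), Fraction(fps)
    frames = int(dur * rate)                      # whole frames in the clip
    return [float(start + n / rate) for n in range(frames)]

# A 24p clip and a 30p clip each fill one second of timeline,
# just with different numbers of frames inside it.
t24 = display_times(0, 1, 24)   # 24 timestamps, 1/24 s apart
t30 = display_times(1, 1, 30)   # 30 timestamps, 1/30 s apart

# The gap described above: 5.25 to 8.50 minutes requires 3.25 minutes
# of source at any frame rate.
gap_minutes = 8.50 - 5.25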

With this understanding, let’s create a new sequence in Liquid. We can give the sequence a unique name — or let Liquid auto-increment a name. We can then select a Layout Template. (I will cover this time-saving feature at the end of this Installment.)

Now, if we like, we can choose from a list of presets that currently includes this subset of DV and HDV that's relevant to Region 60: NTSC 4:3, NTSC 16:9, HDV 720/29.97p, and HDV 1080/59.94i. Obviously, if I were working with Sony HDV, I would simply select “HDV 1080/59.94i.” This would automatically define frame rate, resolution, interlace field order, and M2V codec.

When a frame of video is needed by the NLE’s display or effects engine, the appropriate codec is employed to decompress (DVC) or decode (MPEG) to uncompressed YUV data. For display on a computer monitor, the YUV data must be converted to RGB data. However, if your computer has a board that outputs analog component or HD-SDI, the data are not converted and so remain in YUV colorspace.

Depending on which NLE you use, its effects engine works in either YUV or RGB colorspace. (One exception is Premiere when using CineForm YUV effects rather than Premiere’s RGB effects.) There are two advantages to working only with YUV data. First, color errors can result from the conversion to and from YUV and RGB colorspaces. Second, video clipping can occur when YUV data are converted to RGB data.

A recorded video signal is often higher than 100IRE — yielding digital luma values greater than “235.” (Legal 8-bit luma values range from “16” at 0IRE to “235” at 100IRE.) Moreover, if you alter your camera’s Pedestal setting, recorded data may fall below “16.” When YUV data are converted to RGB data, the luma values between “16” and “235” are mapped to RGB values from “0” to “255.” Thus, values below “16” or above “235” are truncated and valuable information is lost. (With FCP, you must enable “Super White” to prevent truncation.)
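The truncation can be shown in a few lines. This sketch is my own illustration, using the standard 8-bit BT.601 luma range (16 at black, 235 at reference white); it converts a luma code to an 8-bit RGB gray level and clips anything outside the legal range.

```python
def yuv_luma_to_rgb_gray(y):
    """Map an 8-bit BT.601 luma code to an 8-bit RGB gray level.

    Legal luma runs 16 (0 IRE) to 235 (100 IRE); that span is stretched
    over the full RGB range 0-255, so any "super-white" luma above 235
    (or sub-black below 16) is clipped away -- the detail is lost.
    """
    gray = round((y - 16) * 255 / 219)
    return max(0, min(255, gray))

black      = yuv_luma_to_rgb_gray(16)    # maps to RGB 0
white      = yuv_luma_to_rgb_gray(235)   # maps to RGB 255
superwhite = yuv_luma_to_rgb_gray(250)   # clipped to RGB 255
```

Once clipped, the super-white value is indistinguishable from legal white, which is exactly why levels should be corrected while the data are still YUV.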

While Liquid’s realtime effects work in RGB colorspace, you can avoid clipping by adjusting White and/or Black levels using the Color Corrector. Liquid has several CC functions — one of which supports both Primary (realtime) and Secondary (non-realtime) correction as well as a vectorscope and waveform monitor. (This CC has White and Black level controls as well as a Soft Clip function.) Unfortunately, this corrector does not support keyframes. The alternate, Base, CC function does support keyframes — although, regrettably, only realtime Primary correction is supported and scopes are not available. Both correctors map YUV 0-255 into an RGB colorspace that employs 16 bits per component. Thus, no input signal truncation occurs when using these correctors.

Tip 5: For work that must meet broadcast standards, Liquid’s Soft Clip function, which limits video to 100IRE, should be applied to each clip. The Soft Clipping feature is found within Liquid’s Color Correction Legalizer. While you must enable the CCIR-601 function, you should not enable the “Work only in CCIR-601” mode. Now, while watching Liquid’s waveform monitor, dial in the amount of soft clipping you want. You should also enable the “75% color” function, which limits color intensity.

The three 16-bit components are scaled to 24 bits for display during correction — as well as after processing. If the processed data are fed to an RGB realtime effect, the signal below 0IRE and above 100IRE will be truncated. However, when sent to a YUV-based effect — or compressed, encoded, written as uncompressed video to a file, or output — no truncation will be performed.

Color correcting HD naturally brings up the question of how you can monitor your efforts. I have connected the DVI output from my Dell laptop to the HDMI input of my Sony 1280x720 LCD HDTV. Now either Liquid’s Player or Recorder window is displayed on the HDTV. Naturally, you will need to calibrate an HDTV to obtain an accurate image. (In fact, any monitor should be calibrated.)

Liquid’s “VGA Second Head” option outputs video in RGB colorspace. Whenever RGB data are viewed, there may be slight color differences between the display and the YUV video. However, since many folks editing HDV will be using consumer HDTVs for monitoring, I’m not convinced the colorspace error is a serious issue for them.

Tip 6: The Liquid Second Head function does not use a VGA driver’s “overlay” capability. Rather, it uses the VGA driver’s second monitor capability. (Do not enable the “Desktop across both monitors” mode.) Under Edit > Control Panel > Site > Video Outputs > VGA Second Head — define “use monitor setup” and “custom framerate = 60.” Whether you use your HDTV’s DVI/HDMI or RGB input, you’ll have to discover a setting for your PC’s graphics driver that fills, as much as possible, your monitor’s screen. This is a topic for another day.

Templates provide efficient ways of working on different projects. For example, I’ve defined 24p and 30p 720 HDV as well as 1080i HDV master sequences. Each sequence includes leader, bars, tone, and trailer. After defining these sequences, I saved them as templates. (I defined one as the Default Template.) Now, when I create a new project sequence — one that will contain the final sequences—I simply choose the appropriate template. The new sequence is immediately ready for use.

In the next installment, I will look at the exact role codec choice plays in a sequence preset. We will confront one of the most fundamental debates in working with HDV — should we edit it “natively” or by using an “intermediate” codec? Be prepared to see this technical debate — which is the subject of much marketing hand-waving and editor hand-wringing — from a different perspective.

In the fall, I will focus specifically on this issue by working with Edius 4 from Canopus/Grass Valley. Edius has the ability to edit using native HDV as well as an HD intermediate codec, HQ. Like Liquid, it supports multi-format timelines, including 720p24.

In future installments, I will cover auto-handling different aspect ratios for NTSC tape, widescreen DVD, and HD timelines; Dolby 5.1 sound (a value-added function for DVDs and a “must-have” for high-def DVDs); HDV record-to-tape (what is “Smart GOP” MPEG-2 editing and when does it reduce export time?); plus HD-to-SD DVD production.

In this installment, we will look at the exact role that codec choice plays in a sequence preset. We are going to confront one of the most fundamental debates in working with HDV — should we edit it “natively” or by using an “intermediate” codec?

HDV can be input as compressed digital data (via FireWire), uncompressed digital data (via HD-SDI), or uncompressed analog data (via analog component). PCI cards such as those sold by Blackmagic and AJA support the latter two paths. The following Table shows these three paths.


              Compressed Digital Data   Uncompressed Digital Data   Uncompressed Analog Data

Path          FireWire                  HD-SDI                      Analog Component
Process       --                        --                          A/D Converter
Form          IBP-frame & MP2           Uncompressed Digital Data   Uncompressed Digital Data
Process       Decode                    --                          --
Audio         Digital 48kHz (all paths)
Video         Digital YUV (all paths)
Process       4:2:0 (MPEG-2 only), or 4:2:0 interpolated to 4:2:2

Storage codec options:
  No Codec:   Uncompressed
  MPEG-2:     I-frame
  MPEG-2:     IBP-frame (ready for DVD)
  DVC:        DVCPRO HD (downsamples resolution)
  JPEG:       PhotoJPEG (draft only)
  CineForm:   Wavelet
  Canopus:    HQ HD
  Avid:       DNxHD

Bits          8-bit (Native)            8-bit or 10-bit, not MPEG-2 (Intermediate)
Sampling      4:2:0 (Native)            4:2:0 (MPEG-2 only) or 4:2:2 (Intermediate)
Type          Native                    Intermediate
Storage       Disk File (all paths)


Note 1: No matter the connection path, recorded HDV is 8-bit, 4:2:0 sampled data.

Note 2: In the original color chart, green indicated the highest quality; blue, high quality; orange, moderate quality; red, low quality; and yellow, a direct path.

As the chart indicates, there are many intermediate codecs available. None of them increase the quality of HDV — in fact, with the exception of uncompressed video, all require an additional compression step after HDV has been decoded. Moreover, intermediate codecs (with the exception of IBP-frame MPEG-2) are less storage-efficient than is HDV.

While some consider interpolating 4:2:0 sampling to 4:2:2 sampling an increase in quality, this conversion can also be performed by an NLE when HDV special effects are rendered. Likewise, placing 8-bit data within a 10-bit word does not increase image quality.
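The storage penalty is easy to quantify. Below is a back-of-the-envelope comparison of my own, using nominal rates: HDV’s roughly 25 Mbit/s video stream against uncompressed 8-bit 4:2:2 at HDV’s 1440x1080 raster.

```python
# Nominal HDV 1080i video bit rate is about 25 Mbit/s.
hdv_mbps = 25

# Uncompressed 8-bit 4:2:2 needs 2 bytes per pixel
# (one Y sample plus an alternating Cb or Cr sample).
width, height, fps = 1440, 1080, 30
uncompressed_mbps = width * height * 2 * 8 * fps / 1e6   # bits per second, in Mbit

# How many times more storage uncompressed video demands than HDV.
ratio = uncompressed_mbps / hdv_mbps
```

The result is roughly 746 Mbit/s, about thirty times HDV’s data rate, which is why uncompressed intermediates demand RAIDs while native HDV edits happily from a single drive.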

Using an intermediate codec, however, does offer four advantages:

• During the transcode, it is possible for bad frames and broken timecode to be repaired.
• When HDV, an interframe codec, is transcoded to I-frame MPEG-2, the intraframe format can be decoded efficiently, thereby improving performance. Apple’s AIC is an example of such an I-frame intermediate codec.
• All intraframe intermediate codecs can be decompressed with great efficiency. This enables more streams of video to be mixed together. However, because each frame requires more data, total disk throughput will be much greater than an equal number of HDV streams would require.
• Many projects require that video be processed sequentially by several applications. With the exception of PhotoJPEG, intraframe intermediate formats are more robust than HDV. Obviously, uncompressed video is the least fragile of these options. The Avid, Canopus, and CineForm codecs are all well suited to tasks that require multiple compression cycles.

When a frame of video is needed by an NLE’s display or effects engine, the appropriate codec is automatically employed to decompress an intermediate format or decode MPEG-2 to uncompressed YUV data. (If the source is already uncompressed, it is used directly.) There is a little-mentioned process involved in generating the YUV digital video. HDV’s 4:2:0 sampling is interpolated to 4:2:2 or 4:4:4. For an NLE that uses the YUV color space, this process occurs after the decompression or decode. For an NLE that uses the RGB color space, this process occurs during the conversion from YUV to RGB. Because HDV employs symmetric color sampling, interpolation is straightforward.
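Chroma interpolation of this kind is simple in principle. The minimal sketch below is my own illustration, using naive averaging rather than any NLE’s actual filter: 4:2:0 stores one chroma row per two picture rows, so upsampling to 4:2:2 means doubling the chroma rows vertically.

```python
def chroma_420_to_422(chroma_plane):
    """Vertically upsample a 4:2:0 chroma plane to 4:2:2.

    4:2:0 carries one chroma row per two picture rows; 4:2:2 carries
    one per row. Each inserted row is the average of its vertical
    neighbours -- a naive linear interpolation, for illustration only.
    """
    out = []
    for i, row in enumerate(chroma_plane):
        out.append(row)                                   # original chroma row
        nxt = chroma_plane[min(i + 1, len(chroma_plane) - 1)]
        out.append([(a + b) // 2 for a, b in zip(row, nxt)])  # interpolated row
    return out

cb = [[100, 110], [120, 130]]      # tiny 4:2:0 chroma plane (2 rows)
cb422 = chroma_420_to_422(cb)      # 4 rows: original and interpolated, alternating
```

Note that no new color information is created; the inserted rows are guesses derived from their neighbors, which is why interpolating to 4:2:2 does not raise quality.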

For display on a computer monitor, YUV is converted to RGB. However, if your computer has a board that outputs analog component or HD-SDI, the data are not converted, thus remaining in YUV colorspace.

When multiple streams of video are combined (mixed) by a special effects engine, the digital video streams, either RGB or YUV, are processed as either 4:2:2 or 4:4:4 video. The output is, of course, also 4:2:2 or 4:4:4 video. This video, either RGB or YUV, is then processed appropriately for internal and/or external display.

Depending on the number of streams, the nature of the codec(s) employed, the number and complexity of the effects applied, and both the computing power and disk throughput of your computer, the output of the display engine or effects engine may be previewed:

• only after some, or all, of the effects have been rendered
• at less than full frame rate
• at near full frame rate
• at full frame rate

These performance differences must be considered when:

A) The editor plays back the timeline
B) The timeline is exported to tape

When exporting HDV, realtime export is not possible. However, using today’s powerful computers, realtime timeline playback is sometimes achievable.

Preview render files are the key to full-frame-rate playback when realtime previews are either not possible or when preview playback performance is poor. Preview files can be generated three ways:

1. Rendered results are compressed and stored in a file. This is the optimal codec option because DVC-based codecs have moderate computation and disk requirements.

2. Rendered results are simply stored in a file. Obviously, this option requires no computation and so is very fast. Unfortunately, with HD, both disk throughput and storage requirements are huge.

3. Rendered results are encoded and stored in a file. Native MPEG-2 encoding is compute-intensive, although both disk throughput and storage requirements are minimal.

When you specify a Sequence codec, you are specifying the codec that will be used when a preview file is generated. This codec is typically also the default export codec.

Tip 7: Liquid supports options 2 and 3 for preview files: previews using the “2vuy” uncompressed 4:2:2 codec and previews using the MP@H-14 HDV codec.

So does it make a difference whether a native or an intermediate codec is employed? The answer is “it can, but need not.” To understand this answer, we must consider how an NLE utilizes preview files. Here again, there are three options.

1. A preview file is decompressed/decoded and played back at full frame rate.

2. A preview file is decompressed/decoded and processed by additional special effects.

3. A preview file is decompressed/decoded and mixed, using special effects, with additional layers of video.

NLEs such as Liquid and FCP never utilize the latter two options. Preview files are never used for anything except playback.

Native HDV preview files — having gone through two cycles of encoding and decoding — will suffer more than previews made with high-quality intermediate codecs. An uncompressed preview file — assuming your system can handle it — offers little loss in quality, as there has been only one MPEG-2 encode and decode cycle. The other high-quality intermediate codecs have undergone one MPEG-2 encode/decode cycle and one compression/decompression cycle.
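One way to make this ranking concrete is to tally the lossy generations behind each preview strategy. The sketch below is my own bookkeeping of the reasoning above, not anything an NLE reports:

```python
# Each entry lists the lossy encodes a previewed frame has been through.
# Every path begins with the camera's original HDV (MPEG-2) encode.
generations = {
    "native HDV preview":    ["camera MPEG-2", "preview re-encode to HDV"],
    "uncompressed preview":  ["camera MPEG-2"],   # preview itself is lossless
    "intermediate preview":  ["camera MPEG-2", "intermediate compress"],
}

counts = {name: len(steps) for name, steps in generations.items()}
# Fewest lossy generations wins. Where the counts tie (native vs
# intermediate), the gentler intermediate codec loses less per generation.
best = min(counts, key=counts.get)
```

The tally explains the ordering in the text: uncompressed previews stand alone at one lossy generation, while native and intermediate previews both carry two, differing only in how harsh the second generation is.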

Obviously, those NLEs that (to save render time) use preview files for further computation (options 2 and 3) definitely are sensitive to whether preview files are native or not. The use of a native codec can result in a severe loss in quality. However, if you delete the preview files whenever you need to view a Sequence at maximum quality, the NLE will generate new frames from the source(s).

Exactly the same logic applies to exporting a Sequence. FCP and Liquid never use preview files during export. Both NLEs generate every new frame from scratch. Moreover, those NLEs that default to using preview files during export can be forced to not do so.

With FCP and Liquid, when one uses realtime previews, there is no quality difference between native and intermediate editing. However, when you need to render previews, the use of an intermediate codec does offer slightly greater preview quality.

With other NLEs, performance is the primary advantage of editing using an intermediate codec. Performance is increased in two areas. First, given adequate disk throughput and an effects engine engineered for the intermediate codec, more streams can be handled in realtime. Second, preview files, which took time to render, can be reused. And, when these files are reused, quality is minimally impacted.

Now that we understand how multiple SD and HD formats are processed within a Timeline, in the next installment I’ll show you how Liquid can automatically handle different aspect-ratio video in the same Sequence.

In the previous installment, we examined one of the fundamental debates in working with HDV: should we edit it natively or by using an intermediate codec? After covering this topic, we now almost fully understand how a timeline can hold multiple formats. However, we have not considered one issue. How can multiple aspect ratios be used in the same timeline? Let’s make this an even harder question. How can multiple aspect ratios automatically be used in the same timeline?

In fact, let’s push this question to the extreme. How can multiple aspect ratios automatically be used in multiple timelines that have different aspect ratios? For example, I have two HD sequences on my PC. One sequence was created as HDV 720p30, and the other sequence was created as HDV 1080i60. The 720p30 sequence contains 720p30, 720p24 (yes, 24p), 1080i60, 16:9 DV, 4:3 DV, and 4:3 MPEG-1 shot with a cell phone. The 1080i60 sequence begins with multiple Sony Z1 clips followed by the entire 720p sequence. I simply dragged the 720p sequence from Liquid’s Sequence Rack (Bin) into the 1080i sequence.

All these, of course, are 16:9 sequences. I also have two 4:3 DV sequences – one that will become an anamorphic DVD and another that will become a letterboxed DV tape. And, while I am not fond of converting 16:9 to pan-and-scan SD video, later in this edition we’ll learn how to do this, too.

The first step, of course, is to capture all of our source materials — whether 16:9 or 4:3. When all of our clips have been captured, we will perform a magic step, one that must be done before clips are placed into a sequence. However, I’m not going to reveal the secret yet.

The next step is to create the sequences we need. For each sequence, we can choose from a list of DV and HDV presets: NTSC 4:3, NTSC 16:9, HDV 720p29.97, and HDV 1080i59.94. I could also select 720p23.98 from the Region 60 presets and then select the M2V codec to create a 720p24 HDV sequence. In either case, the resolution for the Sequence will now be defined.

When we define a new sequence, we also define the image aspect ratio and pixel aspect ratio. The pixel aspect ratio will be used when conversions are made between square and non-square pixel video.

My 720p and 1080i sequences have square-pixel and non-square-pixel aspect ratios, respectively. My NTSC 4:3 DV and NTSC 16:9 DV sequences, however, both have a resolution of 720x480 and a non-square-pixel aspect ratio. (The DV format uses a WIDE flag to indicate 16:9 anamorphic video.)
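As a rough sketch of the arithmetic involved (Python here is purely illustrative, and the DV pixel aspect values are the commonly cited approximations, not anything Liquid exposes):

```python
# A hedged sketch: display width = storage width x pixel aspect ratio (PAR).
def display_size(width, height, par):
    """Return the display (square-pixel) size of a stored frame."""
    return round(width * par), height

# 720p HDV is stored with square pixels: 1280x720 displays as-is.
assert display_size(1280, 720, 1.0) == (1280, 720)

# 1080i HDV is stored at 1440x1080 with a 4:3 PAR, displaying as 1920x1080.
assert display_size(1440, 1080, 4 / 3) == (1920, 1080)

# NTSC DV is always stored at 720x480; the WIDE flag only changes the PAR.
print(display_size(720, 480, 10 / 11))   # 4:3 DV  -> (655, 480), roughly 4:3
print(display_size(720, 480, 40 / 33))   # 16:9 DV -> (873, 480), roughly 16:9
```

This is exactly why the sequence must know its pixel aspect ratio: two clips with the same 720x480 storage resolution can represent very different display shapes.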

As each clip is placed into a sequence, we would like the NLE to automatically handle it correctly. When the clips have the same pixel aspect ratio, the issue is simply how to match resolutions. When, however, the clips have different pixel aspect ratios, the issue is more complex: the pixel aspect ratios and resolutions of both clip and sequence must be considered. However, we do not have to do the “considering,” as Liquid will do it for us.

When all your clips have been captured and graphics created, select all of them. Now, right-click and select Properties… and then click on the V-tab. You will have access to two pull-down menus as shown below.


As shown below, by leaving the selection at “Use original” we use the information from the source clip.

Now, make the selections shown below.

Once this has been done, the following happens automatically:

1. 16:9 to/from 16:9 — uniform scale based upon “X”
2. 16:9 to 4:3 — uniform scale based upon “Y” yielding an anamorphic image
3. 16:9 to 4:3 WIDE — uniform scale based upon “X” yielding a letterboxed image
4. 4:3 to 16:9 — uniform scale based upon “Y” yielding a horizontally centered image.
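The four rules above all reduce to one uniform-scale decision. A minimal sketch, working in square-pixel (display) dimensions, with a hypothetical `fit` parameter mirroring Liquid's “X”/“Y” choice:

```python
def uniform_scale(src_w, src_h, dst_w, dst_h, fit):
    """Uniformly scale a source frame into a destination frame.

    fit="X": match widths  (the result may be letterboxed or cropped vertically)
    fit="Y": match heights (the result may be pillarboxed or cropped horizontally)
    """
    s = dst_w / src_w if fit == "X" else dst_h / src_h
    return round(src_w * s), round(src_h * s)

# Rule 4: a 4:3 clip (640x480 square-pixel) scaled on "Y" into a 720p
# sequence fills the height and sits horizontally centered (pillarboxed).
print(uniform_scale(640, 480, 1280, 720, "Y"))  # (960, 720)

# Rule 3: a 16:9 clip scaled on "X" into a 4:3 frame (640x480
# square-pixel equivalent) fills the width and is letterboxed.
print(uniform_scale(1280, 720, 640, 480, "X"))  # (640, 360)
```

The function name and the square-pixel simplification are mine; Liquid performs the equivalent math internally, factoring in the pixel aspect ratios discussed earlier.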

When we want to place HD clips into a 4:3 sequence and we do not want either an anamorphic or a letterboxed image, we must take a different tack. After all your clips have been captured and graphics created, select all of them. Now, right-click and select Properties… and then click on the V-tab. Next make these selections.

When these clips are used in HD sequences, the following happens automatically:

1. 16:9 to/from 16:9 — uniform scale based upon “Y”
2. 4:3 to 16:9 — uniform scale based upon “Y” yielding a horizontally centered image.

When an HD sequence is moved into a 4:3 sequence, a “center-cut” 4:3 image is created automatically.
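The center-cut itself is simple arithmetic: crop a 4:3 window out of the middle of the 16:9 frame. A sketch (the function name is mine, not Liquid's):

```python
def center_cut_4x3(hd_w, hd_h):
    """Crop a 16:9 HD frame to a horizontally centered 4:3 window."""
    cut_w = round(hd_h * 4 / 3)     # width of the 4:3 window at full height
    x0 = (hd_w - cut_w) // 2        # left edge of the centered crop
    return x0, 0, cut_w, hd_h       # (x, y, width, height)

print(center_cut_4x3(1280, 720))    # (160, 0, 960, 720)
print(center_cut_4x3(1920, 1080))   # (240, 0, 1440, 1080)
```

Note that a center-cut discards the left and right quarters of the frame, which is why the pan-and-scan alternative below exists.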

To implement Pan-and-Scan, first define all clips as “Fit X.” These can be used in an HD sequence. Then drag the HD sequence into a 4:3 sequence. Next, drag the 2D EDITOR GPU filter onto the clip(s). For 720p, set Horizontal Size to 200%. Now you can keyframe Horizontal Position between +50 (full left) and -50 (full right) to create pans over time—as shown below.
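Keyframing Horizontal Position is ordinary linear interpolation between control points. A sketch of the idea (Liquid's actual interpolation options may differ):

```python
def pan_position(t, keyframes):
    """Linearly interpolate Horizontal Position between keyframes.

    keyframes: time-sorted list of (time_seconds, position) pairs, where
    +50 is full left and -50 is full right, per the 2D EDITOR GPU filter.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, p0), (t1, p1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)          # fraction of the segment elapsed
            return p0 + f * (p1 - p0)

# A four-second pan from full left (+50) to full right (-50):
kf = [(0.0, 50.0), (4.0, -50.0)]
print(pan_position(0.0, kf))   # 50.0
print(pan_position(2.0, kf))   # 0.0  (dead center at the midpoint)
print(pan_position(4.0, kf))   # -50.0
```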

Now that we understand how to work with multi-format timelines, there are two additional topics to consider. First, while stereo audio has been acceptable for standard-def DVDs, as the world moves to high-definition DVDs in 2007, stereo is not going to cut it. Productions must have 5.1 surround sound. Liquid provides a near-complete 5.1 audio creation capability.

Second, as we all know, exporting MPEG-2 is a very time-intensive process. Liquid provides “smart GOP splicing” technology. Can this save export time? To answer that, we need to understand how this technology works.

Both topics will be covered in upcoming issues of HDV@Work. While this series is not a review of Liquid, it does seem appropriate to make a few comments about my experience editing with Liquid. I have been reviewing NLEs since the days of SuperMac's DigitalFilm, the VideoVision Studio from Radius, and Premiere 3.0. From me, Liquid earns a “my favorite NLE, but …” rating.

The “but” stems from: (1) the lack of keyframing in the main color corrector; (2) the lack of up-to-date, complete documentation (for example, the keyframeable color corrector is not documented); (3) audio editing problems; (4) lack of true 16:9 DVD support; and (5) too many design inconsistencies/bugs. Unfortunately, fixing these kinds of issues requires the type of investment that companies avoid because it adds no sizzle that can be marketed. Nevertheless, in my opinion, the only new function Liquid requires, beyond P2 DVCPRO HD support, is integrated support for burning high-definition DVDs. (However, in a future installment I’ll show you how the latter limitation can be overcome for under $50.)

My “best” rating comes from: (1) Liquid’s "any-in/any-out" capability; (2) integration of almost every production task into a single application; (3) Liquid’s reliability when editing video; (4) display presets that work as well with a laptop as with multiple displays; and (5) an interface that is a true joy to use. The heart of this interface is an extensive, customizable set of toolbars that enables Liquid to function almost “mode-free.”

What are modes and why do I, and others, consider them an invention of the devil? First, it is important to distinguish between two kinds of modes. When you select a group of cells in a spreadsheet and enter Chart Mode, you are linking data to code that uses the data to perform an entirely different function. This is like Trim Mode in an NLE.

The type of mode with which I am concerned made its appearance with the first primitive text processors. These applications had no GUI. Everything was controlled from the keyboard. To insert characters into existing text, you first pressed the INS key to turn on Insert Mode. (If you have a PC, you likely still have this key on your keyboard.) Once you entered Insert Mode, the keyboard continued to insert text.

When you wanted to replace existing characters, you pressed the INS key again to enter Overwrite Mode. Once in Overwrite Mode, the keyboard continued to overwrite text. Thus, whenever a fast touch typist became distracted after making a correction, they could wipe out vast amounts of text before looking up and seeing the OVR indicator was on! (Just for fun, next time you are using Microsoft Word, press the INS key and/or click the OVR button in Word’s status bar.)

The first Avid was designed in the DOS days and, not surprisingly, has Extract/Splice-in and Lift/Overwrite modes. And while Apple boasts that FCP does not have a “modal timeline,” it does use a modal toolset. (Premiere likewise has a modal toolset.) Simply put, a modal interface is one where what the computer does when you press a key or click a button depends on what mode the interface is in. It's up to you to remain aware of the mode.

A non-modal interface does only what you directly command. You need only decide what you want to do and press a key or click a button to accomplish the task. You need not consider what mode is currently active. For example, the more modern Edius 4 interface has toolbar buttons for ripple and non-ripple deletes and ripple and non-ripple inserts. Interestingly, since PC keyboards offer both del and backspace keys, Liquid’s programmers missed the opportunity to support a mode-free non-ripple (del) and ripple (backspace) delete command.

Update: A luma (Y) signal ranges from 16 to 235. The Cb and Cr components range from -112 to +112, each of which is offset by 128, yielding a range of 16 to 240. This explains why Avid supplied both “235” and “240” values to me in describing its “2vuy” 4:2:2 uncompressed codec.

Wednesday, May 30, 2007

Avid Liquid Preview




( Article taken from New Zealand Video News )

“Avid Liquid 7 pre-view”


I went to see Nelson at Imaging Technology for a preview of Avid Liquid 7. Now, those of you in the South Island will have seen this launch by now or had the opportunity to go – and I hope you did. North Islanders have a few more days to make sure they get along to Imaging Technology and actually see Liquid 7 in action.

Ed: Nelson, what do you think makes this version of Liquid better than all the previous ones?

Nelson: The first thing I want to mention is that the user now has full control of file management. Liquid used to be a full network solution handling all the media management, but with more than 100,000 users around the world, we have more and more users requesting the ability to control the files, name the files and put the files wherever they want them to be.

Ed: Just like in the old days with Premiere 6.5?

Nelson: Yes. You have absolute control: you choose the formats you want to digitise into, which include AVI, MPEG and DivX; select the bitrate, which affects the quality and size of the captured file; give it a name; and basically capture into any folder and later on manipulate it in File Explorer.

Ed: Does this detract from its ability to be a network solution if you do this or choose to do this?

Nelson: No. If the user decides to go this way, it’s basically deciding to let Liquid function as a stand-alone system, and the user will then have full control of the file management. If Liquid is deployed as a network solution, the traditional way of logging the clips and letting Liquid manage the media will still be the preferred option. The Avid media management system will allow up to 22 users to work on the same project and access the media server all at the same time. Hence it is vital that the media management system takes over the naming, positioning and storing of all the media.

Ed: Fantastic. Now the other thing we must mention straight away is that, with the proliferation of high def formats around, Liquid 7 has definitely got the capability of doing something special with the new JVC HDV camera footage?

Nelson: Just this morning, Bryce from Gencom brought over his camera and we digitised some shots in HDV 720 24p, 720 30p and 720 25p, and they all work seamlessly on the Liquid timeline. In fact, you can just mix and match all of them in the timeline and decide whether you want to up-sample into a different format like 1080, down-sample into standard definition, convert PAL to NTSC, or go to 24 frames to give it a film-like look.

Ed: Liquid is actually capturing in each native format though?

Nelson: Yes, Liquid is capturing in each native format. There is no transcoding during capture or when printing back to the source; everything stays in its native format, and that is why there is no generation loss in Liquid. Basically, Liquid allows users to capture and edit all kinds of formats natively, including the JVC HDV1 and Sony HDV2 formats and the MXF files from the Sony XDCAM and Panasonic P2 cameras.

Ed: And dissolves and all sorts of transitions and everything else works seamlessly on it?

Nelson: Yes, we’ve been putting on page curls and cross dissolves, we’ve been adding basic colour corrections and filters, and they all work in real time.

Ed: So you haven’t managed to kill it yet?

Nelson: Not yet!

Ed: That’s pretty good, because I have heard from a number of people that they have had trouble editing the JVC version of HDV. So we have living proof here of an editing solution, Liquid 7, that is working seamlessly with it.

Now, we’ve also tried some footage from a P2 camera, the recently introduced HVX-200. How have we found that P2 working, Nelson?

Nelson: It’s brilliant. Just like the last time, when we launched Liquid 6 with P2, you literally slot the P2 card into the reader at the back of the computer and immediately Liquid picks it up. What we are seeing so far is that Liquid does not even need to copy any of the files onto the hard disc. We are actually editing the files straight from the P2 card and doing all the transitions and the filters. Even with 1 main video track and 2 picture-in-picture tracks running at the same time, we have no problems with that at all.

Previously in Liquid 6, for those who wanted to capture and edit MXF files from both the P2 and XDCAM, you had to pay an additional $1,995+GST. Now these broadcast features have been integrated into Liquid 7, together with all those assisting formats that allow you to work with standard definition and high definition.

Ed: But we have to say at this point that we’re doing this with the standard definition footage off the P2 card – that’s in a whole variety of standard def, but we have been finding some difficulties with the high def footage?

Nelson: Yes, we have no problem with DVCPRO-25 or DVCPRO-50 in standard def. We are having slight difficulties dealing with DVCPRO-100 and the HD format. We’re looking forward to seeing Avid work closely with Panasonic to get these drivers or the next codec written so that Liquid can work with them seamlessly. In the meantime, Liquid still supports DVCPRO-100 footage: users can import and place the clips on the timeline; it’s just that we have to render the clips first before they will play smoothly. Hence there is nothing stopping users who wish to incorporate some DVCPRO-100 or HD clips into their existing projects in Liquid.

Ed: Certainly there is a capability within Liquid to handle 1080-50i and other uncompressed high def formats, but it’s just a case of the codecs not being in there yet?

Nelson: Yes, absolutely. We’ve been dealing with the Sony 1080i, we’ve been using other formats like JVC, and it all works smoothly. Liquid sees the DVCPRO-100 or HD clips and allows them to be placed on the timeline like the rest. So what we need is just a closer relationship between the engineers from Avid and Panasonic.

Ed: So that’s something that’s coming up in the hopefully not too distant future?

Nelson: I hope so. I will get the Avid engineers to get back to me as soon as possible regarding this issue.

Ed: Excellent – but in the meantime, if you for some reason have any of this footage, Liquid’s pretty good at rendering it into something that you can use?

Nelson: Yes, just put it on the timeline in the latest version of Liquid. All this while, Liquid has had the unique capability of doing background rendering while the user is editing. In Liquid 7, you even have the flexibility to select a portion of the timeline to be rendered. For example, if you just want to have some DVCPRO-100 clips in a certain segment of your project, you can put them onto the timeline and render just that segment.

Ed: So you can convert it to something that you’re using on your timeline?

Nelson: Yes, not a problem.

Ed: That’s that feature – what else? To me, the interface looks pretty much the same, but what’s another feature of Liquid 7 that people should look for?

Nelson: There are actually a lot of features – we’re talking about real-time capturing into DivX.

Ed: What’s DivX? I’m not familiar with that.

Nelson: The DivX format is another codec: highly compressed video files, but with the ability to preserve great quality even though the video has been compressed.

Ed: What do you use them for?

Nelson: There are a lot of DVD players out there that are able to play back DivX discs. You can have an entire DVD encoded onto one CD and still preserve almost 90 percent of the quality on a normal CD-R in the DivX codec. So in Liquid, we are now able to capture files into the DivX format in real time. Additionally, we are also now able to deal with the Windows Media 9 HD formats – we can edit them and mix them up with other formats.
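A DVD-on-a-CD is really a bitrate budget. As a back-of-the-envelope sketch (the 128 kbit/s audio figure and the 700 MB capacity are assumptions, not Liquid settings):

```python
# Hedged sketch: how much average video bitrate does a runtime leave
# on a CD-R, once audio is accounted for?
def video_kbps_on_cdr(minutes, audio_kbps=128, disc_mb=700):
    """Return the average video bitrate (kbit/s) that fits on the disc."""
    disc_kbits = disc_mb * 8 * 1000   # treat MB as 10^6 bytes for simplicity
    seconds = minutes * 60
    return round(disc_kbits / seconds - audio_kbps)

# A 90-minute feature leaves roughly 909 kbit/s for DivX video --
# far below DVD's MPEG-2 bitrates, which is where the efficient
# codec earns its keep.
print(video_kbps_on_cdr(90))
```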

Ed: With this DivX, can you also make your project in your normal AVI format and then export it as DivX?

Nelson: Yes, you’re definitely able to do that. There is an increase in the output options, and the entire process is also simplified. Under the “Export to File” option, you can output into MPEG-1, MPEG-2, MPEG-4, Windows Media, QuickTime, DivX, RealMedia, Flash and, very interestingly, two other formats: the Sony PlayStation Portable and the video iPod. So, if you guys have all the gear, Liquid is there ready to export your videos.

Ed: Wow, that is certainly quite a major step forward because it now gives you the opportunity to produce a whole range of formats for your clients to view in any way they like.

Nelson: At the present moment I cannot find any other editing platform that can provide formats for the Sony PlayStation Portable. This is a very interesting feature. Same thing for the video iPod – it makes things so easy for the iPod user now. This is another big step in Liquid’s move toward being a fully open-format editing platform, allowing users to put all kinds of different formats on the timeline and export to any output format.

Ed: I guess it also means that you could render a reasonably long clip into something that you could send over the Internet as a proof?

Nelson: Absolutely. MPEG-1, RealMedia, the Flash format – they are all very good formats for streaming video over the Internet.

Ed: Now we’re going to talk about SmartSound – I understand there’s an additional improvement here?

Nelson: Yes. On top of the 5.1 surround sound that Liquid 6 was already offering, in Liquid 7 users will have access to 26 different tunes and templates that allow you to create royalty-free music clips for your projects. What you need to do is just complete your project, do a mark in and mark out, or basically key in your duration, and the SmartSound tool will generate royalty-free music based on the tune you choose, laid out to end exactly at the point where your project finishes.

Ed: Are there more things that this Liquid 7 does?

Nelson: Definitely. We were talking about time warping just now. Time warping has been upgraded to real-time warping. This means that if you’re going to slo-mo your clips, you won’t need to render them any more; there will be a real-time preview. Of course, we also have 50 new effects taken from another piece of software called Commotion.

Ed: They’re not just Hollywood effects are they?

Nelson: No, not Hollywood effects, but just to name a few: the Particle effects give users the ability to create clouds, or a fire or explosion scene, over the existing video. There are also an Image Stabilising filter, Smart Blur and Soft Focus filters, and a lot more.

Ed: And really – I mean, you don’t get a set of Ginsu knives with it, but that’s a pretty good selling point. Unfortunately, I have to say there hasn’t been any improvement in the titling programme, has there, Nelson?

Nelson: The titling programme is still the same – TitleDeko. Basically it depends on user feedback: the more feedback we get from the users, the higher it will be on the Avid priority list!

Ed: Well, I’ve certainly tried my damnedest to get them to do something about that titling programme, but hopefully next version?

Nelson: During NAB – talk to the guys at NAB.

Ed: Oh, so I have to go now do I?

Nelson: You can send me Grant.

Ed: Yeah, right.

So those are the key features from Nelson on Avid Liquid 7. This is the first time the Pinnacle name has gone and Avid is on the box, so you know that it’s an Avid – but it doesn’t look like an Avid. To find out more, make sure you attend the Show in Auckland and, for those in the South Island who didn’t get to see it, you can always go and talk to Mike Symes at AVA and he’ll be able to show you Liquid 7, I’m sure.

In a box: Contact Imaging Technology etc etc.