Chapter 9: The Final Product: Postproduction
Before starting postproduction, study any shortcuts that can make the process more creative and efficient
The producer’s job is to know as much as possible about everyone else’s job:
What they do
The tools of their trade
Their rates
The facilities in which they work
The subtleties of their art
In producing for TV and new media, the producer usually supervises the project from beginning to end, including the entire postproduction process
A more complex project might require a postproduction supervisor who acts as the producer of postproduction. They work closely with the producer to maintain the vision of the project and supervise all phases of postproduction including:
Editing
Mixing
Graphics design
Final composite
Delivery of the final master
The postproduction supervisor keeps track of:
All the footage that has been shot as well as:
All the numbered and organized tape reels or storage devices
Screening logs
Dubs
Other log sheets
Other visual images, such as:
Stock footage
Archival footage
Animation
Graphics
Artwork
Copies of any related legal release forms
All audio elements such as:
Dialogue
Background audio
Special effects
Original and/or stock music
Cue sheets
The need to legally protect your project never stops; it continues well past postproduction
You want an editor who has a keen sense of storytelling. You want them to be familiar with the best editing system for your project, someone who has kept up with its technical compatibilities and system nuances. They must be able to deliver a color-corrected, audio-balanced final product that’s technically up to specification. You want them, ideally, to have experience in cutting a show that’s similar to yours, and you want them to be the kind of person with whom you can spend long hours or days at a time
You can save money and time in postproduction when you:
Organize your tapes or storage devices and location logs
Screen and log your footage
Organize editing elements including footage, audio, and graphics
Write a paper cut for the edit session
When labeling your tapes or memory cards, design an easy system for naming each one
In a studio setting with several cameras, match the camera number with a tape number
It helps when you label each tape cassette (or disk, memory card, etc.), including:
The tape number (Tape 1, Tape 2, etc.)
The location where it was shot (Studio B, in Central Park, etc.)
The date of the shoot
The audio tracks (Track 1 is the lav, Track 2 is the boom, etc.)
The camera it was shot with (Camera 1, Camera 2, etc.) in multicamera shoots
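A consistent labeling scheme is easy to automate. The sketch below builds a label from the fields listed above; the format itself is a hypothetical convention, not an industry standard:

```python
def label_tape(tape_num, location, shoot_date, camera=None):
    """Build a consistent label for a tape, disk, or memory card.

    Fields mirror the checklist above; this exact format is a
    hypothetical convention, not a broadcast standard.
    """
    parts = [f"Tape {tape_num}", location, shoot_date]
    if camera is not None:
        parts.append(f"Camera {camera}")  # multicamera shoots only
    return " / ".join(parts)

print(label_tape(3, "Studio B", "2011-06-14", camera=2))
# Tape 3 / Studio B / 2011-06-14 / Camera 2
```

Whatever scheme you choose, the point is that every label carries the same fields in the same order, so anyone in the edit room can read it at a glance.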
Tape log:
The producer keeps track of the footage that’s been shot in a tape log
The tape log provides a fast way to find your footage
Film-to-Tape:
Any footage shot on film must first be transferred to digital video before it can be edited in a nonlinear editing (NLE) system
The film-to-tape transfer is a complicated and costly procedure in which the film is converted to video via a telecine machine, also called a film chain, that scans each frame and converts it into a video signal
During the film-to-tape transfer process, the image and production sound are transferred, and, if needed, the film can also be color-corrected
Sometimes, complications can arise from the difference in frame rates between film (24 fps) and video (30 fps), as well as from audio syncing. If you plan to shoot on film and transfer it to video, discuss the film-to-tape process with the editor, and research the resources available on the subject
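One common way to reconcile 24 fps film with 30 fps video is 2:3 pulldown, in which each group of four film frames is spread across ten interlaced video fields (five video frames). A simplified sketch, ignoring the additional 0.1% slowdown a real telecine applies to reach 29.97 fps:

```python
def pulldown_2_3(film_frames):
    """Map 24 fps film frames to interlaced video fields via 2:3
    pulldown. Each group of four film frames (A, B, C, D) yields
    2 + 3 + 2 + 3 = 10 fields, i.e. five video frames -- which is
    how 24 fps becomes 30 fps. A simplified illustration only."""
    cadence = [2, 3, 2, 3]
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * cadence[i % 4])
    return fields

fields = pulldown_2_3(["A", "B", "C", "D"])
print(len(fields))       # 10 fields
print(len(fields) // 2)  # 5 video frames from 4 film frames
```

This cadence is also why removing pulldown cleanly matters if you later want to conform back to 24 frames.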
Tape-to-Film:
Producers occasionally need a 35-mm film print of their video work
Alternative Sources: Stock and Archival Footage:
Stock footage facilities license a wide range of high-quality footage that’s been shot all over the globe by professionals who sell clearance rights to producers
This footage is high-quality and is often shot on 35-mm film and transferred to video, shot in high definition, or captured as 2K or 4K digital files
Stock Footage Search:
Go online and search for "stock footage" facilities. In most cases, you can see all their footage online - it'll be watermarked in some way so it can't be stolen. After you've picked the footage you want, you'll negotiate a fee for the rights to use it in your project
Stock Footage Fees:
Double-check the clearances on any copyrights and trademarks
In some cases, stock footage may require you to obtain releases from any talent or people on screen
Music or narration that is mixed into footage needs to be cleared
The factors that influence the license fee are:
The amount of time for which you want the rights
The territories
Any special advertising or promotional uses
The total number of runs
Use in new media formats
Archival Footage:
The archivists will research, gather, and/or clear the rights for historical footage
As with the stock footage, fees vary and are dependent on their use
Public Domain Footage:
When the copyright has elapsed on the footage, it is no longer owned by anyone and its rights are in the public domain (PD). You can use it freely, without paying for clearances or royalty fees
A cost-effective method of preparing for editing is to screen and log your footage before the edit session. From these log notes, you can construct a paper cut or an editing storyboard. It’s like a shooting script for your editor and the sound designer; it gives them a clear outline of what scenes appear in what order, and where each shot can be found. The paper cut lists time code (TC) locations and descriptions of selected edits, as well as notes about graphics and audio, and the order in which footage appears in the script
Ideally, you want to transfer your footage for screening to DVD (or an FTP site) with matching time code. This means that the TC on your original footage is exactly the same as on your screening copy. It's called visible time code, also vizcode, or VTC, and it is displayed in a small box at the bottom or top of the screen
Screening Log:
If you’re logging dialogue, either scripted or unscripted, you might type each word verbatim for an exact transcription. Or, type just the keywords and mark irrelevant sections with an ellipsis (…)
Often, the tapes are transcribed by a professional transcriber who makes a note of the TC at regular intervals, usually every 30 to 60 seconds
Your tape log details the:
Tape number
TC numbers for the in-point and the out-point of the scene
Shot’s angle (MS, etc.)
Brief description of the scene
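If you keep the log digitally, one simple record per scene is enough. The structure below mirrors the fields just listed; it is illustrative, not a standard file format:

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    """One line of a tape log, matching the fields listed above.
    An illustrative structure, not an industry-standard format."""
    tape: int
    tc_in: str   # in-point, HH:MM:SS:FF
    tc_out: str  # out-point
    angle: str   # MS, CU, WS, etc.
    description: str

entry = LogEntry(2, "01:04:10:12", "01:04:55:00", "MS",
                 "Host walks through Central Park")
print(f"Tape {entry.tape}  {entry.tc_in}-{entry.tc_out}  "
      f"{entry.angle}: {entry.description}")
```

A spreadsheet with the same columns works just as well; what matters is that every entry carries the tape number and both TC points so the editor can find the shot instantly.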
The Scope of the Log:
Your log sheet might include any of these elements:
Studio or location footage
Stock footage
Archival footage
Graphics
Animation
Audio tracks
Additional audio components
Not every producer has the ability to “visualize” what shots cut well with other shots. But you know what the primary scenes are, and their sequence in the script. Because you’ve most likely shot your footage out of order from what appears in the script, you’ll include all the reel numbers and TCs onto your paper cut, in the order in which they’ll appear in the final edited product
In writing your paper cut, you’ll find these terms helpful:
Shot:
A single uninterrupted videotaped segment which is the primary element of a scene
Scene:
A dramatic or comedic piece consisting of one or more shots
Generally, a scene takes place in one time period, involves the same characters, and is in the same setting
Sequence:
A progression of one or more scenes connected by shared emotional and narrative energy
Each editor has their own strengths and styles of cutting
An experienced editor can take disparate shots and elements and weave them together, creating a seamless flow. As creative artists, they can “paint” a mood with pacing, place a perspective on the action, and signal conflict or comedy
A technically adept editor can design special effects or transitions between scenes, color-correct the footage, and make sure your project conforms to broadcast standards
The producer’s role with the editor is highly collaborative. You want to give the editor specific targets for the project
With user-friendly, inexpensive, and constantly evolving NLE systems, an entire project can be edited on a laptop
Many producers do their own rough cut first, working out some of the more obvious problems, then bring that rough cut for the editor to fine-tune and take to the next level. But, not every producer has the technical savvy or creative eye to be a good editor. You want your project to reflect your vision, and to adhere to all broadcast standards so it can be aired or connected to other platforms, and also have the technical capacity to be dubbed with no loss of generations, or quality
You want to create an environment in which the work can get done:
When you’re in the edit room, the editor needs to concentrate, so keep phone calls and distracting conversations to a minimum
Discourage people from crowding into the space
When possible, encourage creative leeway with different shots or new ideas
Make sure they get a genuine “thank you” along with plenty of food, water, and coffee during the edit sessions
To find an editor you can:
Talk to other producers, directors, and writers about editors they’ve worked with
Call regional or local television stations who may “hire out” their editors and facilities for outside work. If not, ask if they can recommend local freelance editors and/or facilities
Check with local high schools and colleges that have editing equipment for their students. Often, their student editors can be hired for low-budget projects or can work for academic credit
The rapid evolution of post-production technology has brought editing, sound mixing, and graphics into the digital domain
Every six months, new equipment and software floods the marketplace; a system that is state-of-the-art this year is either upgraded or replaced next year
These systems work on the same basic principle as editing on film: with an NLE system, pieces of footage can be digitally "spliced" together out of order, just like film editing. Film editing has always been nonlinear - its pieces cut apart with scissors and taped back together by hand
Before nonlinear editing, video editing was linear - electronically edited in an "always moving forward" direction. The traditional way of editing video was to edit in the chronological, linear order in which shots appeared in the piece. Now, editing with digital equipment is done in a cut-and-paste mode, just as with film, except it's edited electronically rather than manually
The popular NLE systems all work on similar principles. Once you learn one system, it's only a matter of nuance to find the right buttons in the right place on another
Final Cut Pro and Avid are the systems currently used by most professionals. They offer high-quality options for finishing, are updated consistently, and support more plug-ins
All professional-quality cameras now shoot a digital signal
A few holdout producers in the news or unscripted programming still shoot in Beta - it’s a tried-and-true standard. It can be downloaded into an NLE via a component signal, or through a Digibeta with an analog board that can process the analog signal to a component digital path
On the other end of the technological spectrum are newer cameras like the Red One. It doesn’t use tape or a disk, but records images and audio onto a digital file, in this case, files up to 4K
Compression relates to digital video, and simply means that the video signal is compressed to reduce the need for extra storage, as well as transmission space and costs
Compression techniques involve removing redundant data, or data that is less critical to the viewer’s eye
The more the digital signal is compressed, the more distorted the image’s details. You can see this effect in pirated copies of DVDs when the picture dissolves or fades to black - the sharpness of the image disintegrates and the pixels become larger. You can see this same effect on your NLE at a low resolution, also called low rez
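Real video codecs are far more sophisticated, but the core idea of removing redundant data can be illustrated with a toy run-length encoder, which collapses runs of identical pixel values into (value, count) pairs:

```python
def run_length_encode(pixels):
    """Toy illustration of redundancy removal: runs of identical
    pixel values collapse into [value, count] pairs. Real video
    codecs are far more elaborate, but share the basic idea of
    storing less data where the picture doesn't change."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([p, 1])  # start a new run
    return runs

row = [255] * 6 + [0] * 2  # eight pixels: a white run, a black run
print(run_length_encode(row))  # [[255, 6], [0, 2]]
```

Eight pixel values become two pairs; detailed, "busy" areas of the frame compress far less well, which is why heavy compression degrades fine detail first.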
During the shoot, the DP or camera operator might ask if the footage needs to be shot with a TC setting that’s either a drop frame (DF) or nondrop frame (NDF)
Because video runs at 29.97 fps and not 30 fps, nondrop frame time code drifts from real time; by the end of a one-hour show, there are 3.6 extra seconds (108 frames) to account for. Broadcasters demand an exact program length, so a 60-minute program is usually delivered in DF, because it's exactly 60 minutes long and the show's timings are in real time
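The arithmetic behind that 3.6-second figure is easy to verify (Python is used here purely for illustration):

```python
FPS_NOMINAL = 30
FPS_ACTUAL = 30000 / 1001  # NTSC's true rate, about 29.97 fps

# Frame numbers a nondrop counter labels in one hour, versus frames
# that actually play in one real-time hour:
counted = FPS_NOMINAL * 3600         # 108000
played = round(FPS_ACTUAL * 3600)    # 107892
print(counted - played)              # 108 frames = 3.6 s at 30 fps

# Drop-frame TC skips 2 frame numbers each minute, except every
# tenth minute: 2 * 60 - 2 * 6 = 108 -- exactly the discrepancy.
print(2 * 60 - 2 * 6)                # 108
```

No picture frames are ever dropped; DF only skips frame *numbers* so the counter stays aligned with the clock.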
Before you begin editing, your footage must first be transferred or downloaded, into the NLE
For productions shooting on tape, footage is digitized in real time - it takes eight hours to digitize eight hours of footage - so build digitizing time and costs into your budget. Once downloaded, it's converted into a digital file that can be read by editing software
Memory cards have eliminated the need to transfer footage in real-time, but the process of transferring footage from a memory card to a computer’s hard drive can also be a time-consuming one
More on Downloading and Digitizing:
The editor of your project is not always the person who does the digitizing; often, it’s done overnight at a lower rate by a dubber on the night staff. As it is being digitized, perhaps you and/or the editor can categorize the footage with recognizable information like tape numbers, time codes, and scene descriptions, and store everything in computer folders or bins
Producers often designate only certain segments or portions of tapes to digitize, called selects, so they don't take up storage space with footage they won't use. This is an area in which a good logging program is an invaluable tool
Only a few years ago, the biggest drives available were one-gigabyte drives that sold for $10,000. Today, a 500-gig portable hard drive costs a tiny fraction of that, and a four-terabyte drive has been promised by a major manufacturer at a price most production budgets can absorb. This luxury of digital storage no longer forces the producer to load footage and edit in low resolution. You can now cut in high rez, which looks much better than low rez, get client comments, and do any revisions in high rez as well
FireWire:
Initiated by Apple Computer, FireWire is also known as IEEE 1394 and is a standard communications protocol for high-speed, short-distance data transfer
FireWire theoretically presents itself as the only "lossless" way to digitize (in the case of increasingly outmoded tape-based productions) or transfer footage directly into an NLE. It's currently considered the most efficient way to load editing components into an NLE. It allows you to transfer video to and from your hard drive without paying the higher costs of JPEG compression or buying NLE software or banks of RAID-striped hard drives
After all the footage, audio, and graphic elements have been loaded into the NLE, the editor cuts together the first rough cut - a basic edit. It forms the core of your finished piece and reflects all the basic editing decisions. Over time, and as part of the creative editing process, this rough cut changes and evolves, but it’s this first cut that shapes the project. Some editors refer to the rough cut as a radio edit or an A-roll edit. This describes the process of first laying down all the sound bites, with video, and listening to it as much as watching it. This helps make sense of the project’s narrative viewpoint and pace
The next step is to make it visually interesting by editing all the video footage. But each project is unique, and it dictates its own approach to the rough cut. In a music video, the editor first lays the music down and then cuts the footage to synchronize with the musical beats. In some programs, the narration is laid down first. Then, the footage is edited to fit the narration. If the narration, or the voice-over, hasn’t been finalized, you can record the script by using a scratch track as your cue. This preliminary scratch track of narration, read by you or someone else, helps set the timings and beats for your rough cut. It is replaced later by a professional narrator
Regardless of what your particular project calls for, your rough cut clearly shows what works and what doesn’t, what shots cut well with other shots, and the total running time (TRT) of this first pass
Throughout the editing process, the editor works closely with the audio tracks:
Separating them
Balancing out levels
Keeping track of where everything is on the computer
Most editors lay out their audio tracks like this:
Tracks 1 & 2: Narration
Tracks 3 & 4: Sound on tape/digital file
Tracks 5 & 6: Stereo music
Tracks 7 & 8: Sound effects
Tracks 9, etc.: Overlapping audio, music, or dialogue
Most projects take time to edit. The editing process usually goes through several rough versions before there’s a final product that makes everyone happy. Then, the editor makes a frame-accurate edit decision list (EDL) that provides exact notes of all the reel numbers, time codes, cuts, and transitions in the rough cut. Finally, the editor re-edits or conforms the rough cut by matching the original footage in high rez, using the EDL
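An EDL is essentially one line per edit event. The CMX3600-style layout below is a common convention, but field order varies between systems, so treat this parser as an illustrative sketch only:

```python
# A minimal sketch of a CMX3600-style EDL event line. Field layout
# varies between systems, so this is illustrative only.
def parse_edl_event(line):
    """Split one EDL event into its parts: event number, source
    reel, track (V/A), transition (C = cut), then source in/out
    and record in/out time codes."""
    fields = line.split()
    return {
        "event": fields[0],
        "reel": fields[1],
        "track": fields[2],
        "transition": fields[3],
        "src_in": fields[4],
        "src_out": fields[5],
        "rec_in": fields[6],
        "rec_out": fields[7],
    }

event = parse_edl_event(
    "001  TAPE1  V  C  01:00:10:00 01:00:15:00 00:00:00:00 00:00:05:00")
print(event["reel"], event["src_in"])  # TAPE1 01:00:10:00
```

Reading an EDL this way makes clear why the conform works: each event names the source reel and the exact TC span to pull from it, and where that span lands in the final program.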
For now, the offline-to-online process has become the norm in HD editing
As the industry has moved to shoot almost exclusively in high definition, the editing and post-production processes have met with a host of new challenges. High definition has complicated the post-production workflow, so careful planning, prior to shooting even, is very important
Step One:
The first step is to ask yourself whether you need to downconvert your footage. Smaller, shorter web-based projects can often be edited natively. But if the answer is affirmative, then you must know what downconversion format you want to use
Shooting 24 frames in HD can occasionally complicate the editing process. Some producers downconvert the 24 frames to 30-frame DVCam and then rely on a conversion program to reconvert the 30 frames back to 24 frames for the conform session. Other producers stay in 24 frames, feeding the 24 frames directly into the NLE
Because a mistake can be costly down the line, professionals recommend that projects that are shot in 1080i or 24p be edited in the NTSC video format; it’s easier and cheaper at the moment
Step Two:
Next, the downconverted footage is digitized into the NLE system
Before you download, clearly mark each reel with a name or number; ideally, limit it to four to six characters so the computer can easily read and distinguish each name
Make sure that the TC from your original field recordings is downconverted properly with an exact match
Step Three:
Although it’s easy to import animation, graphics, and computer-generated imagery (CGI) into an NLE system, taking these elements into an online session can be tricky. You can either have them created in the final HD resolution, or you can bring them into the online session, render them out to frames, and transfer these to an HD file. These are then downconverted and treated like all the other elements in your edit
Step Four:
After you’ve completed your NLE edit, the editor can export an EDL (edit decision list) with all the information needed to conform in the online session, if needed
After you’ve made your final cut of your project, send its EDL and the digital cut to the online editing facility in advance of your actual session. Come prepared for the online session with all your:
Original camera reels
Graphics files
CGI and effects reels
Any titling or credit information that may be added to your cut
Step Five:
The online editor then assembles the show, using the EDL information. Your presence in this phase of editing is critical - the editor isn’t familiar with your project, and the EDL is only an impersonal list of numbers that may not include transitions, wipes, dissolves, and other important creative details
Certain shots take on specific meanings when they are juxtaposed with other shots. This juxtaposition is editing. It can manipulate time and create drama, tension, action, and comedy. Without editing, you’d only have disconnected pieces of an idea floating in isolation, looking for a connection
Editing in today’s media world still follows classic editing guidelines. These were established by American director D. W. Griffith, and Russian directors V. I. Pudovkin and Sergei Eisenstein, early in the last century. These pioneer filmmakers realized a century ago that film possessed its own language, with rules for “speaking” that language. They set the standards for editing that are used today by virtually all editors, no matter what the format
Some styles of editing include:
Parallel editing:
Two separate yet related events appear to be happening at the same time, as the editor intercuts sequences in which the camera shifts back and forth between one event and another
Montage editing:
Short shots or sequences are cut together to represent action, and ideas, or to condense a series of events
The montage usually relies on close-ups, dissolves, frequent cuts, and even jump cuts to suggest a specific theme
The montage effect gives the viewer a lot of information in minimal screen time
Seamless editing:
The viewer is unaware of the editing because it is unobtrusive except for special dramatic shots. It supports the narrative and doesn’t distract with effects
The characters are the focus, and the cuts are motivated by the story’s events
Seamless editing supports the realism of the story and traditionally uses longer takes, match cuts rather than jump cuts, and selective audio that can act as a bridge between scenes
Quick cut editing:
This style of editing is highly effective in action and youth-targeted programming
It’s used in music videos, promos, commercials, children’s TV, UGC, and in programs on fashion, lifestyle, and youth culture
It combines fast cuts, jump cuts, montages, and special graphics effects
Cut:
A quick change from one shot with one viewpoint or location to another
On most TV shows there is a cut every five to nine seconds, and much faster in some shows
Cuts are usually made on an action, like a door slamming or a slap to the face
A cut can:
Compress time
Change the scene or point of view
Emphasize an image or an idea
Match cut:
A cut between two different camera angles of the same movement or action in which the change appears to be one smooth action
Jump cut:
Two similar angles of the same picture cut together, such as two close-up shots of the same actor
This style of editing can occasionally be edgy or make a dramatic point, but it can also signal poor editing and continuity
Cutaway:
A shot that is edited to act as a bridge between two other shots of the same action
It helps to avoid awkward jumps in time, place, or viewpoint and can shorten the passing of time
Reaction shot:
A shot in which an actor responds to something that has just occurred
Insert shot:
A close-up shot that is edited into the larger context and provides an important detail of the scene
Few shows on television, online, or on other platforms are viewed in real time
What the viewer sees is known as screen time, a period of time in which events are happening on the screen
There are several devices that an editor can use to give the viewer an impression of compressed time or time that has passed or is passing:
Compressed time:
The condensing of long periods of time is traditionally achieved by using long dissolves or fades, as well as cuts to close-ups, reaction shots, cutaways, montages, and parallel situations
Simultaneous time:
Parallel editing, or cross-cutting, shifts the viewer’s attention to two or more events that are happening at the same time
The editor can build split screens with several images on the screen at once, or can simply cut back and forth from one event to another
When the stories eventually converge, the passage of time stops
Long take:
This one uninterrupted shot lasts for a longer period of time than usual. There is no editing interruption, which gives the feeling of time passing more slowly
Slow motion (slo-mo):
A shot recorded at normal speed and then played back more slowly
This can emphasize a dramatic moment, make an action easier to see at a slower speed, or create an effect that is strange or eerie
Fast motion:
A shot recorded at normal speed that the editor speeds up
This effect can add a layer of humor to familiar action or can create the thrill of speed
Reverse motion:
By taking the action and running it backward, the editor creates a sense of comedy or magic
Reverse motion can also help to explain the action in a scene or act as a flashback in time or action
Instant replay:
Most commonly used in sports or news, a specific play from the game or news event is repeated and replayed, usually in slo-mo
Freeze-frame:
The editor finds a specific frame from the video and holds on to it or freezes it
This effect abruptly halts the action for specific narrative effects
A freeze frame can create the look of a still photo
Flashback:
A break in the story in which the viewer is taken back in time
The flashback is usually indicated by a dissolve or when the camera intentionally loses focus
Dissolve:
When one image begins to disappear gradually and another image appears and overlaps it
Dissolves can be quick (five frames, or one-sixth of a second), or they can be slow and deliberate (20 to 60 frames). Both signal a change in mood or action
Fade-outs and fade-ins:
A fade-out is when an image fades slowly out to a blank black frame, signaling either a gradual transition or an ending
A fade-in is when an image fades in from a black frame, introducing a scene
A fade out or fade in can also be effective from a white blank frame rather than a black one; like a dissolve, this editing transition also works to show time passing or to create a special “look”
Wipe:
An effect in which one shot essentially “wipes off” another shot
A wipe can be effective, or it can be a distraction; overuse of wipes can be the mark of an amateur
Split screen:
The screen is divided into boxes or parts. Each has its own shot and action that connect the story. The boxes might also show different angles of the same image, or can contrast one action with another
It works as a kind of montage, telling a story more quickly
Overlays:
Two or more images superimposed over one another, creating a variety of effects that can work as a transition from one idea to the next
Text:
Almost every show has opening titles (including the name of the show) and a limited list of the top creative people (such as the producer, writer, director, actors, etc.); these are called opening credits
Titles that appear at the end of the show are called closing credits, and they list the actors’ names and roles, or positions in the production, as well as other detailed production information
Words that slide under someone on screen and spell out a name, location, or profession are called lower thirds because they’re generally inserted in the lower-third portion of the screen
The electronic text is known generically as the chyron
The text can be digitally imported onto the picture at various speeds, rhythms, and movements, and from any angle
The graphics give the viewer an impression of the tone and pace of the show, and when combined with music, text can create a unique style for your piece
Opening and closing credits might be superimposed over a scene from the show, or on top of stills, background animation, or simple black. Some projects require subtitles for foreign languages or closed captioning for the hearing-impaired
As the producer, you’re responsible for double-checking all names, spellings, and legal or contractual information for the lower thirds and final end credits
Animation:
Simple animation can be created easily and cheaply by using software like Flash and After Effects
More complex animation is created by an animation designer who uses storyboards and narration and manages an impressive crew of people who draw, color, and edit animated sequences
Motion control camera:
Special computer-controlled cameras that shoot a variety of flat art such as old newspapers, artwork, and photos, sometimes called title cameras
They are designed to pinpoint detail and to create a sense of motion for otherwise static material with camera moves
Design elements:
Some project genres depend on the use of various design elements to add depth and information to the content
These elements include:
Logos
Maps
Diagrams
Charts and graphs
Historical photographs
Still shots
Illustrations
The look of film:
Falling loosely into the graphics realm, several postproduction processes give video the appearance of film by closely mirroring film's color levels, contrast, saturation, and grain patterns at a fraction of the cost and time of shooting on film
Color-correction:
The process of reducing or boosting color, contrast, or brightness levels can be done by using color-correcting tools such as Flame or After Effects
Retouching:
This plug-in process offers a gamut of tricks that can enhance an image, like “erasing” a boom dangling into the shot, or a wire holding up a prop
Compositing:
Two or more images are combined, layered, or superimposed in the compositing plug-in process
Rotoscoping:
Frame-by-frame manipulation of an image, either adding or removing a graphic component
In a less complex project, the video editor can mix all the audio requirements and components in the edit session
Some projects have more complicated audio elements that require an audio facility for additional work and refining
An audio facility might be a simple, room-sized studio with one or two sound editors who work on audio equipment that synchronizes TC and computers and, depending on the facility, can charge $50 to $200 an hour. It could also be an elaborate, theater-sized studio with several audio mixers and assistants, extensive equipment, and a setup that could be quite costly
Before you book time in an audio facility, discuss your project’s audio needs and their possible costs
The sound designer works with two contrasting "qualities" of sound and approaches them differently, both aesthetically and technically:
Direct sound:
Live sound
This is recorded on location and sounds real, spontaneous, and authentic, though it may not be acoustically ideal
Studio sound:
Sound recorded in the studio
This method improves the sound quality, eliminates unwanted background noise, and can then be mixed with live sound
As the producer, you want to work closely with the sound designer: supply the necessary audio elements and logs, then discuss the final cut of your piece; offer your ideas, and ask for suggestions
In the first stages of an audio mix session, you and the audio crew sit in a spotting session during which you review each area of your project that needs music and effects for dramatic or comedic tension. In this session, you’re listening for variations in sound levels, for hums and hisses, and anything else that wasn’t caught in the rough mix
Often, an audio facility is willing to negotiate a flat fee for the whole job
The sound designer can:
Mix tracks
Smooth out dialogue
Equalize levels and intensity of sound
Add and layer other elements like music and effects
Working with the sound editor is much more effective if you can:
Be prepared:
When possible, send a rough cut of the project to the sound editor before the mix session
Come to the mix with a show run-down that lists important audio-related details like transitions and music
Provide a music cue sheet that lists all the music selection titles, the composers and their performing rights society affiliation, the recording artists, the length and timing of each cue, the name and address of the copyright owner(s) for each sound recording and musical composition, and the name and address of the publisher and company controlling the recording
Be patient:
At the beginning of the mix, the sound editor needs to do several things before the actual mix can begin, including separating the audio elements, patching them into the console, adjusting the gear, and finally, carefully listening to everything
Be quiet:
Although you may have worked with these audio tracks for days in the editing room, it is the first time the sound editor has heard them. Keep your conversations, phone calls, and interruptions to a minimum
Be realistic:
Your mix may sound excellent in the audio mixing room because the speakers are professional quality and balanced, and the acoustics are ideal. But most TV shows and online projects are played on TV sets or computer monitors with mediocre speakers. Many of the subtler sound effects you could spend hours mixing may never be heard, so listen to the mix on small speakers that simulate the sound the end user will hear
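The music cue sheet described under “Be prepared” can be kept as simple structured data and exported for the mix session. A minimal sketch in Python (the column names and entries are illustrative examples, not an industry-standard layout):

```python
# Sketch: a music cue sheet as structured records exported to CSV.
# Field names and values are invented examples, not a mandated format.
import csv
import io

cues = [{
    "title": "Opening Theme",
    "composer": "J. Doe (ASCAP)",           # composer + performing rights society
    "artist": "Studio Ensemble",
    "length": "00:45",                       # length/timing of the cue
    "copyright_owner": "Example Music Co.",
    "publisher": "Example Publishing",
}]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(cues[0]))
writer.writeheader()
writer.writerows(cues)
print(buf.getvalue().splitlines()[0])  # header row of the cue sheet
```

Keeping the sheet as data (rather than a one-off document) makes it easy to regenerate whenever cues change during the mix.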
Digital sound offers unparalleled clarity. There is no loss of quality when it’s dubbed, and because digital audio requires far less storage space than video, it rarely needs compression
In situations where the mix is more complex, the picture is first locked or finalized, and then the audio tracks are exported, usually to a DAW. The tracks are either married to the video or are separate
The piece can be delivered in:
Stereo
Mono
5.1
All versions
Digital audio is easily labeled and stored, making it more efficient to keep audio in sync and slide it around when needed. Most soundtracks are now prepared on a multitrack digital storage system. The popular professional options include:
DAW (digital audio workstation):
Programs such as Pro Tools
Digital multitrack:
Machines such as the Sony DASH 3324 or 3348 (digital multitrack tape recorders)
Analog multitrack:
24-track Dolby SR or A
Dialogue:
Dialogue is the primary audio element
Words spoken between two (or more) actors or people on-screen are called dialogue
Sometimes it’s recorded with background ambient sound, although usually it is recorded in isolation from other audio
Sound effects (SFX):
On a set or on location, any background sounds that surround the dialogue are ideally recorded separately
If the sounds don’t exist in that location, the sound editor can search through prerecorded sound effects available from a sound effects library
Producers often buy libraries of sound effects and stock music that offer thousands of audio options and their royalty fees are covered in the initial cost
Automatic dialogue replacement (ADR):
After all their scenes have been shot, actors may need to rerecord lines of dialogue or add a line written after the shoot was over
In the recording studio, actors read their lines, keeping them in sync with their on-screen lip movements. Another option is to record new lines that will be mixed into the program later, either over a cutaway or in a long shot if their lips don’t match the new lines. Actors might also read a script in a different language that is later dubbed over the original track
Often, a loop group of people is brought into an ADR session to create crowd sounds that will be mixed into the dialogue. This area of ADR is called walla, which is intentionally unintelligible so audible words won’t intrude on the dialogue
Voice-over (VO) or narration:
The narrator who reads a script or commentary adds another layer to the audio
Narration can introduce a theme or link elements of a story together. It adds extra information with an air of authority and helps interpret ideas or images for the viewer
Often, an on-camera character speaks over the picture in the first person as though she is directly speaking to the viewer
A minor character can tell the story in the third person, or an unidentified narrator who is not on camera can distance the viewer from the image by adding an objective voice to the story
Narration is generally recorded in a separate audio session and mixed in later over the picture
Voice-over can be dialogue that is shot originally on-camera and later played over another picture
Foley:
If the sounds can’t be found in a sound effects library, they can be created by the Foley artist
They’re recorded separately in an audio facility, often in sync with the action, and then mixed with other sound elements
Foley is the sound of:
An actor’s movements
Hands clapping
Rustling clothing
A kiss
Quiet footsteps
A fistfight
Music:
Original:
This is music that’s been composed specifically for a project
It may include themes for the opening and closing, and/or for the body of the show; its emotional direction can highlight the action, characters, and their relationships
The composer is familiar with the creative and technical process, and either hires the musicians or creates the music alone or with a partner
A composer can use a digital protocol known as musical instrument digital interface (MIDI). It is capable of simulating a range of music from a single guitar to an entire orchestra
The final score can go straight from the computer into the mix
Stock:
This is music that has been specifically composed and recorded to be available for multiple uses
The composers use audio sampling and composition software and sophisticated equipment to create vast libraries of engaging and effective music that is both versatile and inexpensive
Stock music is a creative alternative used in every genre. It’s less expensive than hiring a composer, and the negotiated rights can be either exclusive or shared, depending on your budget and the end use. Stock music houses can be researched and located by an online search, and most offer samplings that can be downloaded from the internet
Prerecorded:
The source of this music could range from a popular song to an obscure CD, but a strong soundtrack adds an extra appeal to your project. Regardless of the source, you’ll first need to clear all music rights
Music cue sheet:
Regardless of where your music comes from, you’ll make a music cue sheet that lists every piece of music, its source, its length, and who holds the rights
Diegetic:
Music that the characters in the scene hear
Non-diegetic:
Music not heard by the characters that is added later, such as a soundtrack
Sound bridge:
Transition between one shot (or scene) and the next:
Audio elements
Dialogue
Sound effects
Music
Narration
Selective sound:
Lowering some sounds in a scene, and raising others, can focus the viewer on an aspect of the story
Overlapping dialogue:
In natural speech patterns, people tend to speak over one another and interrupt. Yet dialogue is usually recorded on separate tracks without this overlap. The sound editor can recreate this authentic-sounding effect in the mix, and can also separate dialogue tracks that are too close together. Conversations between several people, like those in two different groups, are often recorded on separate tracks so they can be woven together in the mix for a natural sound
Steps in audio mixing vary from project to project
During your video edit session, the editor separates the dialogue, music, effects, and other audio elements onto various tracks or channels
Depending on the complexity of your project, the editor can mix the elements in the edit room, or will do a preliminary mix that needs to be completed in an audio facility
During the mix, all the separate audio elements are blended together into a final mix track that is then “married” to the picture and locked in
Before the final audio mix begins, make sure that all the video and audio edits have been agreed upon by the clients and other creative team members, and won’t require any further changes. Any revisions involving audio after the picture is locked can mean costly remixes
Sound editors take varying routes in mixing, and each has a unique style of approaching the process
Depending on the complexity of the project, any or all of the following components are part of an audio mix:
Dialogue:
All dialogue is cleaned up and extra sound effects or extraneous noise are either deleted or moved to separate effects tracks
Any ADR, narration, or voice-overs are also laid onto their own tracks
Special effects:
Any special effects tracks are separated, cleaned up, and each put onto its own channel
Ideally, there is ample room tone from each location that can fill in any gaps in the audio
Music tracks:
The music is generally the last element that is mixed into the audio
All the musical tracks are separated and divided into two categories: diegetic or source music (music the characters or actors hear on screen, like a car radio) or underscore music (music that only the audience hears, such as an opening theme)
5.1 audio:
5.1 refers to a six-channel speaker setup: five speakers placed to the right, center, left, right rear, and left rear of the TV set, plus a “.1” low-frequency effects (LFE) channel sent to a subwoofer
This kind of mixing is also called AC-3 or Dolby Digital, and is prominent on Blu-rays, in theatrically released films using SDDS and DTS systems, and in some TV broadcasts
5.1 audio requires a specially equipped television set to hear it at home
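The layout above can be summarized as six named channels; a trivial sketch:

```python
# The six channels of a 5.1 mix: five full-range speakers plus the
# ".1" low-frequency effects (LFE) channel fed to a subwoofer.
CHANNELS_5_1 = ["Left", "Center", "Right", "Left rear", "Right rear", "LFE"]
print(len(CHANNELS_5_1))  # 6
```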
Most clients are very specific about what they expect as a deliverable or final product. Deliverables are generally part of your overall contract with a client, so you want to find out exactly what their expectations and specifications are. Ask for these deliverables in writing so there are no mistakes
The most common requirements for deliverables include:
Video format:
If your project is being broadcast, it is usually evaluated by a station engineer to make sure it meets broadcast standards
If it’s being dubbed, the dub house has technical specifications, too
You may be asked to provide a clean copy of the show that has no text superimposed on it
Audio format:
This might include separate mono mixes and stereo mixes, or a 5.1 mix, an M&E mix, special tracking, levels that are constant or undipped, and often one mix in English and another in a different language
Length:
The required program length can be quite specific
In most cases, PBS show lengths are six seconds less to accommodate a PBS logo
Commercial stations may require a half-hour show to be 22 minutes, while premium and cable channels are less demanding
Most nonbroadcast projects are more flexible
Dubbing:
Depending on the client’s requirements, you may be responsible for making protection copies, which are exact copies of your final master. These serve as backups in case of damages or loss in shipping
You might need to provide DVD copies of the project to the client. The number of copies and their format should be spelled out in your contract, as should any special labeling or packaging and related shipping costs
Abridged versions:
You may need to provide an edited version of your project in which any nudity, violence, or offensive language has been removed or “bleeped out.” This version can be required by airlines, certain broadcasters, and foreign distributors
Subtitling:
Written text under a picture that translates only those words being spoken on screen from one language into another
Song lyrics or sounds are seldom subtitled
Closed captioning:
Also called closed captions, this method of supplying visible text under a broadcast picture is mandated by law to be built into all American TV sets (13 inches or larger) sold after 1993
These sets are designed with a special decoding chip that displays the program’s audio as on-screen text: spoken dialogue, plus descriptions of unseen sounds like a dog bark or a knock at the door
Especially designed for the hearing impaired, closed captioning is also useful in loud public places, when learning a language, and when the dialogue isn’t clear
The text usually appears in white letters in a black box at the bottom or top of the screen
Editing:
Editing for web video has the same ultimate goal as in more traditional media - to shape a coherent story and engage the audience
You can speed up the editor’s job by providing them with quality footage and sound, a variety of shots, shots for continuity, and a reasonable shooting ratio. A typical shooting ratio for the web is 4:1, which means shooting four minutes of footage for every minute used in the final video. Choosing the right technology will make the editing process easier and faster
If your ultimate goal is to work in a larger professional house, you may want to learn Avid. Try to learn a program fully and it will make the editing process more efficient and easier if you do want to shift programs
One of the big differences between editing for web video and television is the length of the final product. Web video is typically between 3 and 15 minutes long. Webisodes and podcasts are on-demand and not time-based like traditional TV. They aren’t restricted to fitting a 30-minute time slot
When using graphics and titles, make sure you choose a consistent style and legible font for the web. In general the simplicity of sans serif typefaces works better for on-screen readability. Be wary of using too many templates or presets for your titles
Editors decide:
What is essential to the story
How to enhance the story
How long it needs to be
What to leave out
Sound Design:
Technically, the sound can be adjusted for better performance by filtering unwanted noise and leveling the volume. This volume adjustment is especially important for web video, since the audience is commonly using headphones tucked in their ears, or in contrast, using a small device in a noisy, populated area
When editing, test the audio by listening through headphones and a variety of speakers and devices. Also note that most web audio is delivered in mono, so you might want to mix down tracks into a single track and listen carefully to the results
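The mono mixdown mentioned above is conceptually just averaging the left and right channels, sample by sample. A minimal sketch (the sample values are invented):

```python
# Sketch: mixing a stereo track down to mono by averaging each
# left/right sample pair. Values below are invented 16-bit-style samples.

def mix_to_mono(left, right):
    """Average each pair of left/right samples into one mono sample."""
    return [(l + r) // 2 for l, r in zip(left, right)]

left = [1000, -2000, 3000]
right = [2000, -1000, 1000]
print(mix_to_mono(left, right))  # [1500, -1500, 2000]
```

Real DAWs and encoders do this (with pan-law gain compensation) when you export a mono mix, which is why it is worth listening to the summed result for phase cancellation.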
Aesthetically, narratively, and thematically, the choice of music is an extremely important one in completing the video. When producing web video, be aware of the many options under Fair Use
Original music is a great way to ensure being free from copyright issues. This can also be an affordable option if you find someone local or who wants to use your video as a promotional opportunity
Sound can be used to:
Alter the viewer’s perspective
Create emotional impact
Transition between scenes
Final Deliverables:
The final delivery for web video can be a bit more complicated than traditional TV. There are many aspects to consider while trying to deliver high-quality video to your target audience. The most important thing you can do in the process is begin with high-quality sound and image, a great story, and a clear set of goals. If you know your audience, genre, distribution range, and finance plan, then all of this will inform how to share your finished video
Hosting:
One of the first considerations when planning the final stages of production is deciding on a web host for your video. You may choose a host that has a widespread, reliable, low-bit-rate distribution model across all platforms or one that focuses strictly on delivering the highest quality video to a more targeted audience. When deciding on a host you also need to be aware of the technical requirements and limitations. Some will be more affordable and possibly free, but may come with more limitations. You need to balance the needs of your project with the cost of the hosting
Keep the following features in mind when deciding on a host:
Quality of video
Bandwidth
Other hosted videos
Storage
Accepted file formats
Codec
Video player
Customer support
File organization
Privacy settings
Pay vs. free model
Analytics
Compression:
When you hand your video over to a host, they will most likely re-compress it to fit on their server - after you have just compressed it yourself in order to transfer it to them
Some web hosts have made their encoding practices public knowledge, and it’s worth taking the time to encode your video to their standards, as you will do it with more care than they would
Compression works by removing information from files to make them smaller and easier to view on the web, while trying to maintain as much quality as possible. A compression algorithm decides which pixels to keep and which to remove. This algorithm is called a codec, short for compressor/decompressor (or coder/decoder). There are a number of codecs to choose from, and there is no one right answer for all projects
Some popular web formats are:
MPEG-4
H.264
HTML5
Quicktime
Flash Video
Windows Media
Silverlight
There are some compression tools out there that can help you discover the right codec for your needs:
Quicktime
iTunes
iMovie/Garageband
Adobe Premiere Elements
MPEG Streamclip
Sorenson Squeeze
Apple Compressor
Adobe Media Encoder
Telestream Episode Pro
A few tips worth mentioning during the compression phase may help you produce high quality in a smaller file size:
It is helpful to determine the size of the finished video. You will produce either a standard 4:3 or a widescreen 16:9 aspect ratio. Typically, you should keep the project in the size you shot it, but in the compression phase you can consider shrinking the frame, which dramatically reduces the file size
Pixel aspect ratio is the ratio of the width of a pixel to its height, and is important to consider when taking into account the delivery format of the final video. Standard formats and widescreen formats require different pixel aspect ratios
Videos produced for television are interlaced to produce smoother motion, but should be progressive for computer playback. To prevent getting a pixelated image from interlaced video, you can run a de-interlace filter through your editing software
Audio is extremely important but also quite large. You can often get away with lowering the sample rate and producing a smaller file size with an unnoticeable change
Another space-saver worth trying is reducing the frame rate, which results in less information to encode and may be acceptable viewing on the web
To ensure better-looking images, also try adjusting the brightness, contrast, and saturation levels throughout the process, since video signals and computers operate on slightly different RGB models
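The frame-size and frame-rate tips above come down to simple arithmetic: raw video data scales with width × height × frame rate, so halving all three cuts the data to be encoded by a factor of eight. A rough sketch (uncompressed figures; real codecs shrink these much further):

```python
# Sketch: the arithmetic behind shrinking the frame and frame rate.
# Figures are uncompressed sizes; actual codec output is far smaller.

def raw_video_bytes(width, height, fps, seconds, bytes_per_pixel=3):
    """Uncompressed video size, assuming 3 bytes (RGB) per pixel."""
    return width * height * bytes_per_pixel * fps * seconds

full = raw_video_bytes(1280, 720, 30, 60)   # one minute at 720p, 30 fps
small = raw_video_bytes(640, 360, 15, 60)   # half the frame, half the rate
print(full // small)  # 8 - halving width, height, and fps cuts raw data 8x
```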
At this stage, you may have a tangible product you can see on the screen. You have delivered all the final dubs to the client, and said goodbye to the editor and audio mixer
Your project itself isn’t really finished. There are more details to wrap up, as well as guidelines for getting exposure for your project and for yourself
1. What is the producer’s role in post-production? How is it different from that of the post-production supervisor?
2. Name four important legal documents that are essential to check prior to the post-production process
3. Why is time code so important in the editing and mixing of a project?
4. Describe the uses and the differences between stock footage, archival footage, and footage that is public domain
5. What can you do as a producer to prepare for the edit session? For the audio mix?
6. What would you look for in hiring an editor? How could you find one in your area?
7. Compare an NLE system with linear film editing
8. What audio elements are needed in mixing most projects?
9. Briefly describe the audio mixing process
10. Name three deliverables that are required in most contracts
You can save money and time in postproduction when you:
Organize your tapes or storage devices and location logs
Screen and log your footage
Organize editing elements including footage, audio, and graphics
Write a paper cut for the edit session
When labeling your tapes or memory cards, design an easy system for naming each one
In a studio setting with several cameras, match the camera number with a tape number
It helps when you label each tape cassette (or disk, memory card, etc.), including:
The tape number (Tape 1, Tape 2, etc.)
The location where it was shot (Studio B, in Central Park, etc.)
The date of the shoot
The audio tracks (Track 1 is the lav, Track 2 is the boom, etc.)
The camera it was shot with (Camera 1, Camera 2, etc.) in multicamera shoots
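A naming system built from the fields above can be made mechanical, so every tape or card label comes out consistent. A sketch (the exact label format is just an example):

```python
# Sketch: building consistent tape/card labels from the fields listed
# above. The label format itself is an invented example.

def tape_label(tape_no, location, shoot_date, camera=None):
    """Compose a label: tape number, location, date, optional camera."""
    parts = [f"T{tape_no:02d}", location.replace(" ", ""), shoot_date]
    if camera is not None:  # multicamera shoots: match the camera number
        parts.append(f"CAM{camera}")
    return "_".join(parts)

print(tape_label(3, "Studio B", "2011-04-12", camera=2))
# T03_StudioB_2011-04-12_CAM2
```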
Tape log:
The producer keeps track of the footage that’s been shot in a tape log
The tape log provides a fast way to find your footage
Film-to-Tape:
Any footage shot on film must first be transferred to digital video before it can be edited in a nonlinear editing (NLE) system
The film-to-tape transfer is a complicated and costly procedure in which the film is converted to video via a telecine machine, also called a film chain, that scans each frame and converts it into a video signal
During the film-to-tape transfer process, the image and production sound is transferred, and if needed, the film can also be color-corrected
Sometimes, complications can arise from the difference in frame rates between film (24 fps) and video (30 fps) as well as audio syncing. If you plan to shoot in film and transfer it to video, discuss the film-to-tape process with the editor, and research the resources available on the subject
Tape-to-Film:
Producers occasionally need a 35-mm film print of their video work
Alternative Sources: Stock and Archival Footage:
Stock footage facilities license a wide range of high-quality footage that’s been shot all over the globe by professionals who sell clearance rights to producers
This footage is high-quality: often shot on 35-mm film and transferred to video, shot in high definition, or captured as 2K or 4K digital files
Stock Footage Search:
Go online, and search “stock footage” facilities. In most cases, you can see all their footage online - it’ll be watermarked in some way so it can’t be stolen. After you’ve made your choices, and picked the footage you want, you’ll then negotiate a fee for the rights to use it in your project
Stock Footage Fees:
Double-check the clearances on any copyrights and trademarks
In some cases, stock footage may require you to obtain releases from any talent or people on screen
Music or narration that is mixed into footage needs to be cleared
The factors that influence the license fee are:
The amount of time for which you want the rights
The territories
Any special advertising or promotional uses
The total number of runs
Use in new media formats
Archival Footage:
The archivists will research, gather, and/or clear the rights for historical footage
As with the stock footage, fees vary and are dependent on their use
Public Domain Footage:
When the copyright on footage has expired, it is no longer owned by anyone and its rights are in the public domain (PD). You can use it freely, without paying clearance or royalty fees
A cost-effective method of preparing for editing is to screen and log your footage before the edit session. From these log notes, you can construct a paper cut or an editing storyboard. It’s like a shooting script for your editor and the sound designer; it gives them a clear outline of what scenes appear in what order, and where each shot can be found. The paper cut lists time code (TC) locations and descriptions of selected edits, as well as notes about graphics and audio, and the order in which footage appears in the script
Ideally, you want to transfer your footage for screening to DVD (or an FTP site) with a matching time code. This means that the TC on your original footage is exactly the same as on your screening cassette. It’s called visible time code, also vizcode, or VTC, and it is displayed in a small box on the bottom or top of the screen
Screening Log:
If you’re logging dialogue, either scripted or unscripted, you might type each word verbatim for an exact transcription. Or, type just the keywords and mark irrelevant sections with an ellipsis (…)
Often, the tapes are transcribed by a professional transcriber who makes a note of the TC at regular intervals, usually every 30 to 60 seconds
Your tape log details the:
Tape number
TC numbers for the in-point and the out-point of the scene
Shot’s angle (MS, etc.)
Brief description of the scene
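A tape log with these fields can live as simple searchable records, which is what makes it a fast way to find footage. A sketch with invented entries:

```python
# Sketch: a tape log as searchable records. Field names mirror the
# list above; the entries themselves are invented examples.

log = [
    {"tape": 1, "tc_in": "01:02:10:00", "tc_out": "01:03:05:12",
     "angle": "MS", "description": "Host intro, take 2"},
    {"tape": 2, "tc_in": "02:14:30:00", "tc_out": "02:15:02:08",
     "angle": "CU", "description": "Guest reaction shot"},
]

def find(keyword):
    """Return every log entry whose description mentions the keyword."""
    return [e for e in log if keyword.lower() in e["description"].lower()]

print(find("intro")[0]["tape"])  # 1
```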
The Scope of the Log:
Your log sheet might include any of these elements:
Studio or location footage
Stock footage
Archival footage
Graphics
Animation
Audio tracks
Additional audio components
Not every producer has the ability to “visualize” what shots cut well with other shots. But you know what the primary scenes are, and their sequence in the script. Because you’ve most likely shot your footage out of order from what appears in the script, you’ll include all the reel numbers and TCs onto your paper cut, in the order in which they’ll appear in the final edited product
In writing your paper cut, you’ll find these terms helpful:
Shot:
A single uninterrupted videotaped segment which is the primary element of a scene
Scene:
A dramatic or comedic piece consisting of one or more shots
Generally, a scene takes place in one time period, involves the same characters, and is in the same setting
Sequence:
A progression of one or more scenes connected by shared narrative momentum and emotional energy
Each editor has their own strengths and styles of cutting
An experienced editor can take disparate shots and elements and weave them together, creating a seamless flow. As creative artists, they can “paint” a mood with pacing, place a perspective on the action, and signal conflict or comedy
A technically adept editor can design special effects or transitions between scenes, color-correct the footage, and make sure your project conforms to broadcast standards
The producer’s role with the editor is highly collaborative. You want to give the editor specific targets for the project
With the user-friendly, inexpensive, creative, and evolving NLE systems an entire project can be edited on a laptop
Many producers do their own rough cut first, working out some of the more obvious problems, then bring that rough cut to the editor to fine-tune and take to the next level. But not every producer has the technical savvy or creative eye to be a good editor. You want your project to reflect your vision, to adhere to all broadcast standards so it can be aired or connected to other platforms, and to have the technical capacity to be dubbed with no generational loss of quality
You want to create an environment in which the work can get done:
When you’re in the edit room, the editor needs to concentrate, so keep phone calls and distracting conversations to a minimum
Discourage people from crowding into the space
When possible, encourage creative leeway with different shots or new ideas
Make sure they get a genuine “thank you” along with plenty of food, water, and coffee during the edit sessions
To find an editor you can:
Talk to other producers, directors, and writers about editors they’ve worked with
Call regional or local television stations who may “hire out” their editors and facilities for outside work. If not, ask if they can recommend local freelance editors and/or facilities
Check with local high schools and colleges that have editing equipment for their students. Often, their student editors can be hired for low-budget projects or can work for academic credit
The rapid evolution of post-production technology has brought editing, sound mixing, and graphics into the digital domain
Every six months, new equipment and software floods the marketplace; a system that is state-of-the-art this year is either upgraded or replaced next year
These systems work on the same basic principle as editing on film: with an NLE system, pieces of footage can be digitally “spliced” together out of order, just like film editing. Film editing has always been nonlinear, done with tape and scissors, its pieces cut and taped together by hand
Before nonlinear editing, video editing was linear - electronically edited in an “always moving forward” direction. The traditional way of editing video was to edit in the chronological, linear order in which shots appeared in the piece. Now, editing with digital equipment is done in a cut-and-paste mode, just as with film, except it’s edited electronically rather than manually
The popular NLE systems all work on similar principles. Once you learn one system, it’s only a matter of nuance to find the right buttons in the right place on another system
Final Cut Pro and Avid are the systems currently used by most professionals. They offer high-quality options for finishing, are updated consistently, and support more plug-ins
All professional-quality cameras now shoot a digital signal
A few holdout producers in the news or unscripted programming still shoot in Beta - it’s a tried-and-true standard. It can be downloaded into an NLE via a component signal, or through a Digibeta with an analog board that can process the analog signal to a component digital path
On the other end of the technological spectrum are newer cameras like the Red One. It doesn’t use tape or a disk, but records images and audio onto a digital file, in this case, files up to 4K
Compression relates to digital video, and simply means that the video signal is compressed to reduce the need for extra storage, as well as transmission space and costs
Compression techniques involve removing redundant data, or data that is less critical to the viewer’s eye
The more the digital signal is compressed, the more distorted the image’s details. You can see this effect in pirated copies of DVDs when the picture dissolves or fades to black - the sharpness of the image disintegrates and the pixels become larger. You can see this same effect on your NLE at a low resolution, also called low rez
During the shoot, the DP or camera operator might ask if the footage needs to be shot with a TC setting that’s either a drop frame (DF) or nondrop frame (NDF)
Because video runs at 29.97 fps and not 30 fps, nondrop-frame time code accumulates a discrepancy of 0.03 frames every second - 3.6 extra seconds (108 frames) by the end of a one-hour show. Broadcasters demand an exact program length, so a 60-minute program is usually delivered in DF, because it’s exactly 60 minutes long and the show’s timings are in real time
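The nondrop-frame discrepancy is easy to verify with arithmetic: NDF time code counts frames as if video ran at exactly 30 fps, while it actually plays at 29.97 fps:

```python
# Sketch: the drift between real time and nondrop-frame time code.
# Video plays 29.97 frames per real second; NDF counts them as if
# the rate were exactly 30 fps.

def ndf_drift_seconds(program_minutes):
    """Seconds by which NDF time code lags real time over a program."""
    real_seconds = program_minutes * 60
    frames_played = real_seconds * 29.97   # frames actually played
    ndf_seconds = frames_played / 30.0     # what NDF time code reports
    return real_seconds - ndf_seconds

print(round(ndf_drift_seconds(60), 1))  # 3.6 seconds over one hour
```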
Before you begin editing, your footage must first be transferred or downloaded, into the NLE
For productions shooting on tape, the tapes are digitized in real time - it takes eight hours to digitize eight hours of footage - so build digitizing time and costs into your budget. Once downloaded, the footage is converted into a digital file that can be read by editing software
Memory cards have eliminated the need to transfer footage in real time, but the process of transferring footage from a memory card to a computer's hard drive can also be a time-consuming one
More on Downloading and Digitizing:
The editor of your project is not always the person who does the digitizing; often, it’s done overnight at a lower rate by a dubber on the night staff. As it is being digitized, perhaps you and/or the editor can categorize the footage with recognizable information like tape numbers, time codes, and scene descriptions, and store everything in computer folders or bins
Producers often designate only certain segments or portions of tapes to digitize, called selects, so they don’t take up storage space for footage they won’t use. This is an area in which a good logging program is an invaluable tool
Only a few years ago, the biggest drives available were one-gigabyte drives that sold for $10,000. Today, a 500-gig portable hard drive costs a tiny fraction of that, and a four-terabyte drive has been promised by a major manufacturer at a price that fits most production budgets. With this luxury of digital storage, producers are no longer limited to loading footage and editing in low resolution. You can now cut in high rez, which looks much better than low rez, get client comments, and do any revisions in high rez as well
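Estimating how much drive space footage will need is simple arithmetic: megabits per second times seconds per hour, divided into bytes. A back-of-envelope Python sketch (the bitrates in the comment are assumed, illustrative figures, since actual codec bitrates vary):

```python
def storage_gb_per_hour(bitrate_mbps):
    """Rough decimal gigabytes needed per hour of footage at a
    given video bitrate in megabits per second (estimate only)."""
    # Mb/s -> seconds per hour -> bits to bytes -> MB to GB
    return bitrate_mbps * 3600 / 8 / 1000

# An HD mezzanine codec around 145 Mb/s needs roughly 65 GB per hour;
# a 25 Mb/s HDV-class signal needs about 11 GB per hour.
```

Running this for a project's total footage hours makes it easy to budget drives before the digitizing session starts.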
FireWire:
Initiated by Apple Computer, FireWire is also known as IEEE 1394 and is a standard communications protocol for high-speed, short-distance data transfer
FireWire is, in theory, the only “lossless” way to digitize (in the case of increasingly outmoded tape-based productions) or transfer footage directly into an NLE. It’s currently considered the most efficient way to load editing components into an NLE. It allows you to transfer video to and from your hard drive without paying the higher costs of JPEG compression or buying NLE software or banks of RAID-striped hard drives
After all the footage, audio, and graphic elements have been loaded into the NLE, the editor cuts together the first rough cut - a basic edit. It forms the core of your finished piece and reflects all the basic editing decisions. Over time, and as part of the creative editing process, this rough cut changes and evolves, but it’s this first cut that shapes the project. Some editors refer to the rough cut as a radio edit or an A-roll edit. This describes the process of first laying down all the sound bites, with video, and listening to it as much as watching it. This helps make sense of the project’s narrative viewpoint and pace
The next step is to make it visually interesting by editing all the video footage. But each project is unique, and it dictates its own approach to the rough cut. In a music video, the editor first lays the music down and then cuts the footage to synchronize with the musical beats. In some programs, the narration is laid down first. Then, the footage is edited to fit the narration. If the narration, or the voice-over, hasn’t been finalized, you can record the script by using a scratch track as your cue. This preliminary scratch track of narration, read by you or someone else, helps set the timings and beats for your rough cut. It is replaced later by a professional narrator
Regardless of what your particular project calls for, your rough cut clearly shows what works and what doesn’t, what shots cut well with other shots, and the total running time (TRT) of this first pass
Throughout the editing process, the editor works closely with the audio tracks:
Separating them
Balancing out levels
Keeping track of where everything is on the computer
Most editors lay out their audio tracks like this:
Tracks 1 & 2: Narration
Tracks 3 & 4: Sound on tape/digital file
Tracks 5 & 6: Stereo music
Tracks 7 & 8: Sound effects
Tracks 9, etc.: Overlapping audio, music, or dialogue
Most projects take time to edit. The editing process usually goes through several rough versions before there’s a final product that makes everyone happy. Then, the editor makes a frame-accurate edit decision list (EDL) that provides exact notes of all the reel numbers, time codes, cuts, and transitions in the rough cut. Finally, the editor re-edits or conforms the rough cut by matching the original footage in high rez, using the EDL
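An EDL event in the common CMX3600 text style is a fixed-order line: event number, source reel, track, transition, then four timecodes (source in/out, record in/out). A minimal Python sketch of reading one event line (the function name is my own, and real EDLs also carry header, comment, and effect lines that this sketch ignores):

```python
def parse_edl_event(line):
    """Split one CMX3600-style EDL event line into its fields (sketch)."""
    parts = line.split()
    return {
        "event": parts[0],        # event number, e.g. "001"
        "reel": parts[1],         # source reel/tape name
        "track": parts[2],        # V (video), A, A2, B ...
        "transition": parts[3],   # C = cut, D = dissolve, W = wipe
        "source_in": parts[4],
        "source_out": parts[5],
        "record_in": parts[6],
        "record_out": parts[7],
    }

event = parse_edl_event(
    "001  TAPE01 V C 01:02:03:04 01:02:08:00 00:00:00:00 00:00:04:26")
```

Reel names limited to a few characters, as recommended below, keep these lines readable to both the computer and the online editor.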
For now, the offline-to-online process has become the norm in HD editing
As the industry has moved to shoot almost exclusively in high definition, the editing and post-production processes have met with a host of new challenges. High definition has complicated the post-production workflow, so careful planning, prior to shooting even, is very important
Step One:
The first step is to ask yourself whether you need to downconvert your footage. Smaller, shorter web-based projects can often be edited natively. But if the answer is affirmative, then you must know what downconversion format you want to use
Shooting 24 frames in HD can occasionally complicate the editing process. Some producers downconvert the 24 frames to 30-frame DVCam and then rely on a conversion program to reconvert the 30 frames back to 24 frames for the conform session. Other producers stay in 24 frames, feeding the 24 frames directly into the NLE
Because a mistake can be costly down the line, professionals recommend that projects that are shot in 1080i or 24p be edited in the NTSC video format; it’s easier and cheaper at the moment
Step Two:
Next, the downconverted footage is digitized into the NLE system
Before you download, clearly mark each reel with a name or number that can be easily read by the computer. Ideally, limit it to four to six characters so the computer can easily read and distinguish each name
Make sure that the TC from your original field recordings is downconverted properly with an exact match
Step Three:
Although it’s easy to import animation, graphics, and computer-generated imagery (CGI) into an NLE system, taking these elements into an online session can be tricky. You can either have them created in the final HD resolution, or you can bring them into the online session, render them out to frames, and transfer these to an HD file. These are then downconverted and treated like all the other elements in your edit
Step Four:
After you’ve completed your NLE edit, the editor can export an EDL with all the information needed to conform in the online session, if needed
After you’ve made your final cut of your project, send its EDL and the digital cut to the online editing facility in advance of your actual session. Come prepared for the online session with all your:
Original camera reels
Graphics files
CGI and effects reels
Any titling or credit information that may be added to your cut
Step Five:
The online editor then assembles the show, using the EDL information. Your presence in this phase of editing is critical - the editor isn’t familiar with your project, and the EDL is only an impersonal list of numbers that may not include transitions, wipes, dissolves, and other important creative details
Certain shots take on specific meanings when they are juxtaposed with other shots. This juxtaposition is editing. It can manipulate time and create drama, tension, action, and comedy. Without editing, you’d only have disconnected pieces of an idea floating in isolation, looking for a connection
Editing in today’s media world still follows classic editing guidelines. These were established by American director D. W. Griffith, and Russian directors V. I. Pudovkin and Sergei Eisenstein, early in the last century. These pioneer filmmakers realized a century ago that film possessed its own language, with rules for “speaking” that language. They set the standards for editing that are used today by virtually all editors, no matter what the format
Some styles of editing include:
Parallel editing:
Two separate yet related events appear to be happening at the same time, as the editor intercuts sequences in which the camera shifts back and forth between one event and another
Montage editing:
Short shots or sequences are cut together to represent action and ideas, or to condense a series of events
The montage usually relies on close-ups, dissolves, frequent cuts, and even jump cuts to suggest a specific theme
The montage effect gives the viewer a lot of information in minimal screen time
Seamless editing:
The viewer is unaware of the editing because it is unobtrusive except for special dramatic shots. It supports the narrative and doesn’t distract with effects
The characters are the focus, and the cuts are motivated by the story’s events
Seamless editing motivates the realism of the story and traditionally uses longer takes, match cuts rather than jump cuts, and selective audio that can act as a bridge between scenes
Quick cut editing:
This style of editing is highly effective in action and youth-targeted programming
It’s used in music videos, promos, commercials, children’s TV, UGC, and in programs on fashion, lifestyle, and youth culture
It combines fast cuts, jump cuts, montages, and special graphics effects
Cut:
A quick change from one shot with one viewpoint or location to another
On most TV shows there is a cut every five to nine seconds, and much faster in some shows
Cuts are usually made on an action, like a door slamming or a slap to the face
A cut can:
Compress time
Change the scene or point of view
Emphasize an image or an idea
Match cut:
A cut between two different camera angles of the same movement or action in which the change appears to be one smooth action
Jump cut:
Two similar angles of the same picture cut together, such as two closeup shots of the same actor
This style of editing can occasionally be edgy or make a dramatic point, but it can also signal poor editing and continuity
Cutaway:
A shot that is edited to act as a bridge between two other shots of the same action
It helps to avoid awkward jumps in time, place, or viewpoint and can shorten the passing of time
Reaction shot:
A shot in which an actor responds to something that has just occurred
Insert shot:
A close-up shot that is edited into the larger context and provides an important detail of the scene
Few shows on television, online, or on other platforms are viewed in real time
What the viewer sees is known as screen time, a period of time in which events are happening on the screen
There are several devices that an editor can use to give the viewer an impression of compressed time or time that has passed or is passing:
Compressed time:
The condensing of long periods of time is traditionally achieved by using long dissolves or fades, as well as cuts to close-ups, reaction shots, cutaways, montages, and parallel situations
Simultaneous time:
Parallel editing, or cross-cutting, shifts the viewer’s attention to two or more events that are happening at the same time
The editor can build split screens with several images on the screen at once, or can simply cut back and forth from one event to another
When the stories eventually converge, the passage of time stops
Long take:
This one uninterrupted shot lasts for a longer period of time than usual. There is no editing interruption, which gives the feeling of time passing more slowly
Slow motion (slo-mo):
A shot that is moving at a normal speed, and then slowed down
This can emphasize a dramatic moment, make an action easier to see at a slower speed, or create an effect that is strange or eerie
Fast motion:
A shot that is taking place at a normal speed that the editor speeds up
This effect can add a layer of humor to familiar action or can create the thrill of speed
Reverse motion:
By taking the action and running it backward, the editor creates a sense of comedy or magic
Reverse motion can also help to explain the action in a scene or act as a flashback in time or action
Instant replay:
Most commonly used in sports or news, a specific play from the game or news event is repeated and replayed, usually in slo-mo
Freeze-frame:
The editor finds a specific frame from the video and holds on to it or freezes it
This effect abruptly halts the action for specific narrative effects
A freeze frame can create the look of a still photo
Flashback:
A break in the story in which the viewer is taken back in time
The flashback is usually indicated by a dissolve or when the camera intentionally loses focus
Dissolve:
When one image begins to disappear gradually and another image appears and overlaps it
Dissolves can be quick (five frames, or one-sixth of a second), or they can be slow and deliberate (20 to 60 frames). Both signal a change in mood or action
Fade-outs and fade-ins:
A fade-out is when an image fades slowly to a blank black frame, signaling either a gradual transition or an ending
A fade-in is when an image fades in from a black frame, introducing a scene
A fade out or fade in can also be effective from a white blank frame rather than a black one; like a dissolve, this editing transition also works to show time passing or to create a special “look”
Wipe:
An effect in which one shot essentially “wipes off” another shot
A wipe can be effective, or it can be a distraction; overuse of wipes can be the mark of an amateur
Split screen:
The screen is divided into boxes or parts. Each has its own shot and action that connect the story. The boxes might also show different angles of the same image, or can contrast one action with another
It works as a kind of montage, telling a story more quickly
Overlays:
Two or more images superimposed over one another, creating a variety of effects that can work as a transition from one idea to the next
Text:
Almost every show has opening titles (including the name of the show) and a limited list of the top creative people (such as the producer, writer, director, actors, etc.); these are called opening credits
Titles that appear at the end of the show are called closing credits, and they list the actors’ names and roles, or positions in the production, as well as other detailed production information
Words that slide under someone on screen and spell out a name, location, or profession are called lower thirds because they’re generally inserted in the lower-third portion of the screen
The electronic text is known generically as the chyron
The text can be digitally imported onto the picture at various speeds, rhythms, and movements, and from any angle
The graphics give the viewer an impression of the tone and pace of the show, and when combined with music, text can create a unique style for your piece
Opening and closing credits might be superimposed over a scene from the show, or on top of stills, background animation, or simple black. Some projects require subtitles for foreign languages or closed captioning for the hearing-impaired
As the producer, you’re responsible for double-checking all names, spellings, and legal or contractual information for the lower thirds and final end credits
Animation:
Simple animation can be created easily and cheaply by using software like Flash and After Effects
More complex animation is created by an animation designer who uses storyboards and narration and manages an impressive crew of people who draw, color, and edit animated sequences
Motion control camera:
Special computer-controlled cameras that shoot a variety of flat art such as old newspapers, artwork, and photos, sometimes called title cameras
They are designed to pinpoint detail and to create a sense of motion for otherwise static material with camera moves
Design elements:
Some project genres depend on the use of various design elements to add depth and information to the content
These elements include:
Logos
Maps
Diagrams
Charts and graphs
Historical photographs
Still shots
Illustrations
The look of film:
Falling loosely into the graphics realm, there are several postproduction processes that give video the appearance of film by closely mirroring film’s color levels, contrast, saturation, and grain patterns at a fraction of the cost and time of shooting on film
Color-correction:
The process of reducing or boosting color, contrast, or brightness levels can be done by using color-correcting tools such as Flame or After Effects
Retouching:
This plug-in process offers a gamut of tricks that can enhance an image, like “erasing” a boom dangling into the shot, or a wire holding up a prop
Compositing:
Two or more images are combined, layered, or superimposed in the composite plug-in process
Rotoscoping:
Frame-by-frame manipulation of an image, either adding or removing a graphic component
In a less complex project, the video editor can mix all the audio requirements and components in the edit session
Some projects have more complicated audio elements that require an audio facility for additional work and refining
An audio facility might be a simple, room-sized studio with one or two sound editors who work on audio equipment that synchronizes TC and computers and, depending on the facility, can charge $50 to $200 an hour. It could also be an elaborate, theater-sized studio with several audio mixers and assistants, extensive equipment, and a setup that could be quite costly
Before you book time in an audio facility, discuss your project’s audio needs and their possible costs
The sound designer works with two contrasting “qualities” of sound and approaches them differently, both aesthetically and technically:
Direct sound:
Live sound
This is recorded on location and sounds real, spontaneous, and authentic, though it may not be acoustically ideal
Studio sound:
Sound recorded in the studio
This method improves the sound quality, eliminates unwanted background noise, and can then be mixed with live sound
As the producer, you want to work closely with the sound designer: supply the necessary audio elements and logs, then discuss the final cut of your piece; offer your ideas, and ask for suggestions
In the first stages of an audio mix session, you and the audio crew sit in a spotting session during which you review each area of your project that needs music and effects for dramatic or comedic tension. In this session, you’re listening for variations in sound levels, for hums and hisses, and anything else that wasn’t caught in the rough mix
Often, an audio facility is willing to negotiate a flat fee for the whole job
The sound designer can:
Mix tracks
Smooth out dialogue
Equalize levels and intensity of sound
Add and layer other elements like music and effects
Working with the sound editor is much more effective if you can:
Be prepared:
When possible, send a rough cut of the project to the sound editor before the mix session
Come to the mix with a show run-down that lists important audio-related details like transitions and music
Provide a music cue sheet that lists all the music selection titles, the composers and their performing rights society affiliation, the recording artists, the length and timing of each cue, the name and address of the copyright owner(s) for each sound recording and musical composition, and the name and address of the publisher and company controlling the recording
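A cue sheet is essentially tabular data, so it's easy to keep as a spreadsheet or CSV. A Python sketch of writing one (the column names are illustrative, since real cue-sheet layouts vary by broadcaster and performing rights society):

```python
import csv
import io

# Illustrative field names; adjust to the broadcaster's required layout.
FIELDS = ["title", "composer", "pro_affiliation", "artist",
          "length", "copyright_owner", "publisher"]

def write_cue_sheet(cues, fileobj):
    """Write a list of cue dicts as a CSV cue sheet (sketch only)."""
    writer = csv.DictWriter(fileobj, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(cues)

buf = io.StringIO()
write_cue_sheet([{
    "title": "Main Theme", "composer": "J. Doe",
    "pro_affiliation": "ASCAP", "artist": "Studio Band",
    "length": "00:45", "copyright_owner": "Doe Music",
    "publisher": "Doe Music Publishing",
}], buf)
```

Keeping the sheet in a structured file from the start means the same data can feed the audio mix, the legal clearances, and the final deliverables paperwork.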
Be patient:
At the beginning of the mix, the sound editor needs to do several things before the actual mix can begin, including separating the audio elements, patching them into the console, adjusting the gear, and finally, carefully listening to everything
Be quiet:
Although you may have worked with these audio tracks for days in the editing room, it is the first time the sound editor has heard them. Keep your conversations, phone calls, and interruptions to a minimum
Be realistic:
Your mix may sound excellent in the audio mixing room because the speakers are professional quality, and balanced, and the acoustics are ideal. But most TV shows and online projects are played on TV sets or computer monitors with mediocre speakers. Many of the subtler sound effects you could spend hours mixing may never be heard, so listen to the mix on small speakers that simulate the sound that the end user will hear
Digital sound offers unparalleled clarity. There is no loss of quality when dubbed, and because digital audio requires less storage space than video, it doesn’t need compression
In situations where the mix is more complex, the picture is first locked or finalized, and then the audio tracks are exported, usually to a DAW. The tracks are either married to the video or are separate
The piece can be delivered in:
Stereo
Mono
5.1
All versions
Digital audio is easily labeled and stored, making it more efficient to keep audio in sync and slide it around when needed. Most soundtracks are now prepared on a multitrack digital storage system. The popular professional options include:
DAW (digital audio workstation):
Programs such as Pro Tools
Digital multitrack:
Recorders such as the DASH 3324 or 3348
Analog multitrack:
24-track Dolby SR or A
Dialogue:
Dialogue is the primary audio element
Words spoken between two (or more) actors or people on-screen are called dialogue
Sometimes it’s recorded with background ambient sound, although usually it is recorded in isolation from other audio
Sound effects (SFX):
On a set or on location, any background sounds that surround the dialogue are ideally recorded separately
If the sounds don’t exist in that location, the sound editor can search through prerecorded sound effects available from a sound effects library
Producers often buy libraries of sound effects and stock music that offer thousands of audio options and their royalty fees are covered in the initial cost
Automatic dialogue replacement (ADR):
After all their scenes have been shot, actors may need to rerecord lines of dialogue or add a line written after the shoot was over
In the recording studio, actors read their lines, keeping them in sync with their on-screen lip movements. Another option is to record new lines that will be mixed into the program later, either over a cutaway or in a long shot if their lips don’t match the new lines. Actors might also read a script in a different language that is later dubbed over the original track
Often, a loop group is brought into an ADR session to create crowd sounds that will be mixed into the dialogue. This area of ADR is called walla, which is intentionally unintelligible so audible words won’t intrude on the dialogue
Voice-over (VO) or narration:
The narrator who reads a script or commentary adds another layer to the audio
Narration can introduce a theme or link elements of a story together. It adds extra information with an air of authority and helps interpret ideas or images for the viewer
Often, an on-camera character speaks over the picture in the first person as though she is directly speaking to the viewer
A minor character can tell the story in the third person, or an unidentified narrator who is not on camera can distance the viewer from the image by adding an objective voice to the story
Narration is generally recorded in a separate audio session and mixed in later over the picture
Voice-over can be dialogue that is shot originally on-camera and later played over another picture
Foley:
If the sounds can’t be found in a sound effects library, they can be created by the Foley artist
They’re recorded separately in an audio facility, often in sync with the action, and then mixed with other sound elements
Foley is the sound of:
An actor’s movements
Hands clapping
Rustling clothing
A kiss
Quiet footsteps
A fistfight
Music:
Original:
This is music that’s been composed specifically for a project
It may include themes for the opening and closing, and/or for the body of the show; its emotional direction can highlight the action, characters, and their relationships
The composer is familiar with the creative and technical process, and either hires the musicians or creates the music alone or with a partner
A composer can use the musical instrument digital interface (MIDI), a computer protocol that, paired with samplers and synthesizers, can simulate a range of music from a single guitar to an entire orchestra
The final score can go straight from the computer into the mix
Stock:
This is music that has been specifically composed and recorded to be available for multiple uses
The composers use audio sampling and composition software and sophisticated equipment to create vast libraries of engaging and effective music that is both versatile and inexpensive
Stock music is a creative alternative used in every genre. It’s less expensive than hiring a composer, and the negotiated rights can be either exclusive or shared, depending on your budget and the end use. Stock music houses can be researched and located by an online search, and most offer samplings that can be downloaded from the internet
Prerecorded:
The source of this music could range from a popular song to an obscure CD, but a strong soundtrack adds an extra appeal to your project. Regardless of the source, you’ll first need to clear all music rights
Music cue sheet:
Regardless of where your music comes from, you’ll make a music cue sheet that lists every piece of music, its source, its length, and who holds the rights
Diegetic:
Music that the characters in the scene hear
Non-diegetic:
Music not heard by the characters that is added later, such as a soundtrack
Sound bridge:
Transition between one shot (or scene) and the next:
Audio elements
Dialogue
Sound effects
Music
Narration
Selective sound:
Lowering some sounds in a scene, and raising others, can focus the viewer on an aspect of the story
Overlapping dialogue:
In natural speech patterns, people tend to speak over one another and interrupt. Yet dialogue is usually recorded on separate tracks without this overlap. The sound editor can recreate this authentic-sounding effect in the mix, and can also separate dialogue tracks that are too close together. Conversations between several people, like those in two different groups, are often recorded on separate tracks so they can be woven together in the mix for a natural sound
Steps in audio mixing vary from project to project
During your video edit session, the editor separates the dialogue, music, effects, and other audio elements onto various tracks or channels
Depending on the complexity of your project, the editor can mix the elements in the edit room, or will do a preliminary mix that needs to be completed in an audio facility
During the mix, all the separate audio elements are blended together into a final mix track that is then “married” to the picture and locked in
Before the final audio mix begins, make sure that all the video and audio edits have been agreed upon by the clients and other creative team members, and won’t require any further changes. Any revisions involving audio after the picture is locked can mean costly remixes
Sound editors take varying routes in mixing, and each has a unique style of approaching the process
Depending on the complexity of the project, any or all of the following components are part of an audio mix:
Dialogue:
All dialogue is cleaned up and extra sound effects or extraneous noise are either deleted or moved to separate effects tracks
Any ADR, narration, or voice-overs are also laid onto their own tracks
Special effects:
Any special effects tracks are separated, cleaned up, and each put onto its own channel
Ideally, there is ample room tone from each location that can fill in any gaps in the audio
Music tracks:
The music is generally the last element that is mixed into the audio
All the musical tracks are separated and divided into two categories: diegetic or source music (music the characters or actors hear on screen, like a car radio) or underscore music (music that only the audience hears, such as an opening theme)
5.1 audio:
5.1 refers to a setup with five full-range speakers - right, center, and left of the TV set, plus right rear and left rear surrounds - along with one low-frequency subwoofer channel (the “.1”)
This kind of mixing is also called AC3 and Dolby Digital, and is prominent in Blu-rays, theatrically released films using SDDS and DTS systems, and in some TV broadcasts
5.1 audio requires a specially equipped television set to hear it at home
Most clients are very specific about what they expect as a deliverable or final product. Deliverables are generally part of your overall contract with a client, so you want to find out exactly what their expectations and specifications are. Ask for these deliverables in writing so there are no mistakes
The most common requirements for deliverables include:
Video format:
If your project is being broadcast, it is usually evaluated by a station engineer to make sure it meets broadcast standards
If it’s being dubbed, the dub house has technical specifications, too
You may be asked to provide a clean copy of the show that has no text superimposed on it
Audio format:
This might include separate mono mixes and stereo mixes, or a 5.1 mix, an M&E mix, special tracking, levels that are constant or undipped, and often one mix in English and another in a different language
Length:
The required program length can be quite specific
In most cases, PBS show lengths are six seconds less to accommodate a PBS logo
Commercial stations may require a half-hour show to be 22 minutes, while premium and cable channels are less demanding
Most nonbroadcast projects are more flexible
Dubbing:
Depending on the client’s requirements, you may be responsible for making protection copies, which are exact copies of your final master. These serve as backups in case of damages or loss in shipping
You might need to provide DVD copies of the project to the client. The number of copies and their format should be spelled out in your contract, as should any special labeling or packaging and related shipping costs
Abridged versions:
You may need to provide an edited version of your project in which any nudity, violence, or offensive language has been removed or “bleeped out.” This version can be required by airlines, certain broadcasters, and foreign distributors
Subtitling:
Written text under a picture that translates only those words being spoken on screen from one language into another
Song lyrics or sounds are seldom subtitled
Closed captioning:
Also called closed captions, this method of supplying visible text under a broadcast picture is mandated by law to be built into all American TV sets sold after 1993
These sets are designed with a special decoding chip that displays the caption text on screen - spoken dialogue as well as descriptions of unseen sounds like a dog bark or a knock at the door
Especially designed for the hearing impaired, closed captioning is also useful in loud public places, when learning a language, and when the dialogue isn’t clear
The text usually appears in white letters in a black box at the bottom or top of the screen
Editing:
Editing for web video has the same ultimate goal as in more traditional media - to shape a coherent story and engage the audience
You can speed up the editor’s job by providing the editor with quality footage and sound, a variety of shots, shots for continuity, and a reasonable shooting ratio. A typical shooting ratio for web is 4:1, which means shooting four minutes of footage for every minute used in the final video. Choosing the right technology will make the editing process easier and faster
If your ultimate goal is to work in a larger professional house, you may want to learn Avid. Learn one program fully; a deep grasp of one system makes the editing process more efficient and makes it easier to shift to another program later
One of the big differences between editing for web video and television is the length of the final product. Web video is typically between 3 and 15 minutes long. Webisodes and podcasts are on-demand and not time-based like traditional TV. They aren’t restricted to fitting a 30-minute time slot
When using graphics and titles, make sure you choose a consistent style and legible font for the web. In general, the simplicity of sans serif typefaces works better for on-screen readability. Be wary of using too many templates or presets for your titles
Editors decide:
What is essential to the story
How to enhance the story
How long it needs to be
What to leave out
Sound Design:
Technically, the sound can be adjusted for better performance by filtering unwanted noise and leveling the volume. This volume adjustment is especially important for web video, since the audience is often listening through earbuds or, at the other extreme, on a small device in a noisy public place
When editing, test the audio by listening through headphones and a variety of speakers and devices. Also note that most web audio is delivered in mono, so you might want to mix down tracks into a single track and listen carefully to the results
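Mixing down to mono, as suggested above, can be sketched simply: average each left/right sample pair into one track. Real editing software does this with additional gain compensation; the sample values here are made up for illustration:

```python
# A minimal sketch of a stereo-to-mono mixdown: average each left/right
# sample pair. Professional tools also apply pan-law gain compensation;
# the sample values below are invented for illustration.

def mix_to_mono(left, right):
    """Average corresponding left/right samples into a single mono track."""
    return [(l + r) / 2 for l, r in zip(left, right)]

left  = [0.25, 0.5, -0.5, 0.0]
right = [0.75, 0.0,  0.5, 0.0]
print(mix_to_mono(left, right))  # [0.5, 0.25, 0.0, 0.0]
```

Listening to the mono result carefully, as the text advises, matters because stereo elements panned hard left and right can cancel or bury each other once summed.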
Aesthetically, narratively, and thematically, the choice of music is an extremely important one in completing the video. When producing web video, be aware of the many options under Fair Use
Original music is a great way to ensure being free from copyright issues. This can also be an affordable option if you find someone local or who wants to use your video as a promotional opportunity
Sound can be used to:
Alter the viewer’s perspective
Create emotional impact
Transition between scenes
Final Deliverables:
The final delivery for web video can be a bit more complicated than traditional TV. There are many aspects to consider while trying to deliver high-quality video to your target audience. The most important thing you can do in the process is begin with high-quality sound and image, a great story, and a clear set of goals. If you know your audience, genre, distribution range, and finance plan, then all of this will inform how to share your finished video
Hosting:
One of the first considerations when planning the final stages of production is deciding on a web host for your video. You may choose a host that has a widespread, reliable, low-bit-rate distribution model across all platforms or one that focuses strictly on delivering the highest quality video to a more targeted audience. When deciding on a host you also need to be aware of the technical requirements and limitations. Some will be more affordable and possibly free, but may come with more limitations. You need to balance the needs of your project with the cost of the hosting
Keep the following features in mind when deciding on a host:
Quality of video
Bandwidth
Other hosted videos
Storage
Accepted file formats
Codec
Video player
Customer support
File organization
Privacy settings
Pay vs. free model
Analytics
Compression:
When you hand your video over to a host, it will most likely re-compress the file to fit its server, even though you have just compressed it in order to transfer it to them
Some web hosts have made their encoding practices public knowledge, and it’s worth taking the time to encode your video to their standards, as you will do it with more care than they would
Compression works by removing information from files to make them smaller and easier to view on the web, while maintaining as much quality as possible. A compression algorithm decides which data to keep and which to discard. This algorithm is called a codec, short for coder/decoder (or compressor/decompressor). There are many codecs to choose from, and there is no one right answer for all projects
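The coder/decoder pairing can be illustrated with a toy run-length scheme. Real video codecs such as H.264 are vastly more sophisticated, exploiting spatial and temporal redundancy, but the principle is the same: the encoder removes redundancy and the decoder restores a viewable picture.

```python
# A toy "codec": a matched encoder/decoder pair using run-length encoding.
# This is only an illustration of the encode/decode principle, not how
# real video codecs work.

def encode(pixels):
    """Collapse runs of identical values into [value, count] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def decode(runs):
    """Expand [value, count] pairs back into the original sequence."""
    return [p for p, n in runs for _ in range(n)]

row = [255, 255, 255, 255, 0, 0, 255]
packed = encode(row)
print(packed)                 # [[255, 4], [0, 2], [255, 1]]
assert decode(packed) == row  # lossless round trip
```

This scheme is lossless; most web video codecs are lossy, which is why heavily re-compressed uploads degrade visibly.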
Some popular web formats are:
MPEG-4
H.264
HTML5
QuickTime
Flash Video
Windows Media
Silverlight
There are some compression tools out there that can help you discover the right codec for your needs:
QuickTime
iTunes
iMovie/Garageband
Adobe Premiere Elements
MPEG Streamclip
Sorenson Squeeze
Apple Compressor
Adobe Media Encoder
Telestream Episode Pro
A few tips worth mentioning during the compression phase may help you produce high quality in a smaller file size:
It is helpful to determine the size of the finished video. You will produce either a standard 4:3 or a widescreen 16:9 aspect ratio. Typically you should keep the project at the size you shot it, but in the compression phase you can consider shrinking the frame, which dramatically reduces the file size
Pixel aspect ratio is the ratio of a pixel’s width to its height, and it is important to consider when taking into account the delivery format of the final video. Standard formats and widescreen formats require different pixel aspect ratios
Videos produced for television are interlaced to produce smoother motion, but should be progressive for computer playback. To prevent getting a pixelated image from interlaced video, you can run a de-interlace filter through your editing software
Audio is extremely important but also takes up considerable space. You can often get away with lowering the sample rate, producing a smaller file with little noticeable change
Another space-saver worth trying is reducing the frame rate, which leaves less information to encode and may still look acceptable on the web
To ensure better-looking images, also try adjusting the brightness, contrast, and saturation levels throughout the process, since video signals and computer displays operate on slightly different RGB models
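The space savings from the tips above are easy to estimate for uncompressed material: video data scales with width × height × frame rate, and PCM audio with sample rate × bit depth × channels. The resolutions and rates below are illustrative, and compressed files won’t scale exactly linearly, but the trend holds.

```python
# Back-of-the-envelope estimates for the compression tips above, using
# uncompressed data rates. Compressed sizes won't scale exactly linearly,
# but the proportions show why each tip saves space.

def video_rate(width, height, fps, bits_per_pixel=24):
    """Uncompressed video data rate in bits per second."""
    return width * height * fps * bits_per_pixel

def audio_rate(sample_rate, bit_depth=16, channels=2):
    """Uncompressed PCM audio data rate in bits per second."""
    return sample_rate * bit_depth * channels

full = video_rate(1280, 720, 30)

# Halving width and height cuts the pixel count (and data rate) to a quarter:
print(video_rate(640, 360, 30) / full)   # 0.25

# Dropping from 30 fps to 15 fps halves the frames to encode:
print(video_rate(1280, 720, 15) / full)  # 0.5

# Halving the audio sample rate (48 kHz -> 24 kHz) halves the audio data:
print(audio_rate(24_000) / audio_rate(48_000))  # 0.5
```

Stacking the tips multiplies the savings: a quarter-size frame at half the frame rate carries one eighth of the original pixel data.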
At this stage, you have a tangible product you can see on the screen. You have delivered all the final dubs to the client and said goodbye to the editor and audio mixer
Yet your project isn’t really finished. There are more details to wrap up, as well as guidelines for getting exposure for your project and for yourself
1. What is the producer’s role in post-production? How is it different from that of the post-production supervisor?
2. Name four important legal documents that are essential to check prior to the post-production process
3. Why is time code so important in the editing and mixing of a project?
4. Describe the uses and the differences between stock footage, archival footage, and footage that is public domain
5. What can you do as a producer to prepare for the edit session? For the audio mix?
6. What would you look for in hiring an editor? How could you find one in your area?
7. Compare an NLE system with linear film editing
8. What audio elements are needed in mixing most projects?
9. Briefly describe the audio mixing process
10. Name three deliverables that are required in most contracts