I spend a lot of time preparing projects from editorial for delivery to sound editors and colorists. Getting a project from an NLE into an audio or color application is varying degrees of painful: no two applications have identical features, and there's no single rock-solid format for translating between them. Remember, though, that it's much easier for the editor or post supervisor to reconcile problems before sending anything to color or sound, because they have access to the entire project's assets and familiarity with the project itself. Sound and color have no idea what you're sending them; they may unhappily spend just as much time blindly conforming the project on their end, but they will happily bill you for it. Here are some tips to keep both of them happy.
Before doing anything, it's important to export a reference video of picture and sound as they've been approved by the creative team, the client, or you yourself. We'll be doing a lot of track modifications, so make sure your reference is exported and your sequence is duplicated before we begin.
Your reference video needs to include the audio and all effects done in editorial. This rough mix will be very useful as a reference to the sound team. Also include a timecode burn-in on your reference video file.
You don't have to use a specific codec, but remember that this file is only for reference, so you can keep it small. I'm often transferring files online, and I prefer 8-bit DNxHD LB, which looks fantastic for its low bit rate. If you're more used to ProRes than Avid codecs, it's closest to ProRes Proxy at a minimal 36 Mbps (the same rate as the older DNxHD 36). You could get a smaller file using one of the temporally compressed codecs, but they're harder on the system and the reduction in file size isn't worth it.
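The export itself can of course come straight from your NLE, but as a sketch of the settings described above, here's how you might build the equivalent ffmpeg command in Python. The flags come from ffmpeg's dnxhd encoder and drawtext filter documentation; the filenames are placeholders, and you should verify the exact flags against your own ffmpeg build before relying on this.

```python
# Sketch: build an ffmpeg argv list for a small DNxHD 36-style reference
# export with a timecode burn-in. Flags are assumptions based on ffmpeg's
# documentation -- check them against your build.

def reference_export_cmd(src, dst, fps=24, start_tc="01:00:00:00"):
    """Return an ffmpeg command list for a low-bit-rate reference file."""
    # drawtext wants the colons in the start timecode escaped
    tc = start_tc.replace(":", r"\:")
    burn_in = (
        f"drawtext=timecode='{tc}':rate={fps}"
        ":fontcolor=white:box=1:boxcolor=black@0.5:x=(w-tw)/2:y=h-2*lh"
    )
    return [
        "ffmpeg", "-i", src,
        "-vf", burn_in,
        "-c:v", "dnxhd", "-b:v", "36M",   # 1080p24 at 36 Mbps (DNxHD 36 / LB territory)
        "-pix_fmt", "yuv422p",            # 8-bit 4:2:2, as described above
        "-c:a", "pcm_s16le",              # keep the rough mix with the picture
        dst,
    ]

cmd = reference_export_cmd("locked_cut.mov", "reference_dnxhd36.mov")
```

The point is simply that the reference carries picture, rough mix, and burned-in timecode in one small file.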
This one’s a bit trickier than sound and there is a lot that can go wrong.
As with sound, it's imperative that you export a reference video so that the colorist knows exactly what your intent is. The import into the color software will be checked against this video to verify the translation from the NLE.
The reference is both a creative and technical reference. Not only does it serve to check accuracy of the translation of a timeline between software, but it serves as a guide to the creative decisions established before the sequence got to color. You’d be surprised how often the colorist has next-to-no communication with anyone from the creative team. If a producer approved a dark, desaturated look that the editor applied for a viewing session, the colorist will see that in the reference (but not in the actual source media to be colored!).
Collapse everything down to as few tracks as possible. Some things, like composites, will require clip stacking, but otherwise keep your track count down and delete empty tracks. Delete all of your audio tracks.
"Baking" simply means rendering out anything that won't transfer between your NLE and the color application. Bake your effect clips and re-import them into the timeline, replacing the old clips. If the "effects" are superimpositions that are not meant to be colored (e.g. graphic overlays), then do not bake them in.
The most common format for timeline interchange is XML. But XML isn't what most people think it is: it's not one specific format for doing one specific thing, but a standardized method of representing information that can be used for a huge variety of tasks. This means there's more room for error than you might assume. That said, some XML standards work pretty well, like the FCP XML.
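To make that concrete: because an FCP-style XML is ordinary XML, any XML parser can read the edit data out of it. The sketch below uses Python's standard library against a tiny hand-written fragment whose element names mirror the FCP7 XML interchange format (`<clipitem>`, `<name>`, `<start>`, `<end>`); treat the exact schema as an assumption and compare it with a file exported from your own NLE.

```python
# Sketch: reading clip names and edit points out of an FCP7-style XML.
# The sample document and its frame numbers are illustrative.
import xml.etree.ElementTree as ET

sample = """
<xmeml version="5">
  <sequence>
    <name>Locked Cut</name>
    <media><video><track>
      <clipitem><name>A001_C003</name><start>0</start><end>120</end></clipitem>
      <clipitem><name>A002_C001</name><start>120</start><end>200</end></clipitem>
    </track></video></media>
  </sequence>
</xmeml>
"""

root = ET.fromstring(sample)
clips = [
    (c.findtext("name"), int(c.findtext("start")), int(c.findtext("end")))
    for c in root.iter("clipitem")
]
```

This is also why "more room for error" is true: nothing in XML itself forces two applications to agree on what those elements mean.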
An EDL is simply an "Edit Decision List". It's old and primitive compared to XML: it basically gives you the ability to export one track and the clips on it. The EDL's genesis dates to the days when films stopped being cut by hand and were instead edited on tape or non-linearly on a computer. Once that edit was done, the edit decision list was the map for going back to the physical film and making the matching cuts, this time actually splicing the celluloid. That's pretty old-school now, and anything shot on film these days is scanned at high enough quality that it's never going back to film, so today we use the EDL as follows.
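The primitiveness is easy to see in the format itself: a CMX3600-style EDL event is just whitespace-separated fields on one line (event number, reel, track, transition, then source in/out and record in/out timecodes). Here's a minimal sketch of parsing a cuts-only event; real EDLs also carry comment lines, speed notes, and so on, which this deliberately ignores.

```python
# Sketch: parse one CMX3600-style cuts-only ("C") EDL event line.
# The sample line and its timecodes are illustrative.

def parse_edl_event(line):
    f = line.split()
    return {
        "event": f[0], "reel": f[1], "track": f[2], "transition": f[3],
        "src_in": f[4], "src_out": f[5], "rec_in": f[6], "rec_out": f[7],
    }

event = parse_edl_event(
    "001  A001  V  C  01:00:10:00 01:00:15:00 01:00:00:00 01:00:05:00"
)
```

One track, one event per line, no effects metadata to speak of, which is exactly why it both survives between applications and can't carry much.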
This goes by many names, but the idea is this: rather than fighting the lack of consistency between applications, just render everything into one file and let the colorist slice it up. The obvious advantage is that nothing is lost in translation. If you employ this method, here are some things to consider.
Firstly, you need to know what you want back from the colorist. Traditionally, the colorist renders individual clips and delivers those renders and an XML back to the post supervisor. The post supervisor then imports the colored timeline and adds graphics on top, reapplies transitions and custom effects, and so on. I've recently found, however, that on some projects it's much simpler for the colorist to also act as the finishing artist and be the one responsible for delivering the final render. This won't work in every situation, and most post supervisors will understandably want to maintain control of finishing in their NLE, but for non-broadcast work I've found it sometimes helpful to just create the deliverable myself.
You'll need to render or "bake" any effect specific to your application, just as I recommended for color.
I've received projects with massive gaps in the timeline, where nothing was present at all, because this step was skipped.
Keyframe interpolation won't always be identical between applications, so know how frame-accurate you need things to be.
Speed changes often don’t transfer perfectly between software. FCP X to Resolve has some of the best support, including not only linear and variable speed changes, but also keyframe interpolation and frame blending vs. optical flow methods.
Scaling can be a surprisingly tricky translation because each application deals with it differently. I prefer to leave Premiere's "Set to Frame Size" option enabled while editing so as not to lose resolution when repositioning a scaled clip. If the scaling is then handled in Premiere's Motion panel, Resolve's "Input Scaling" simply needs to be set to "Center crop with no resizing". You'll also want to make sure Resolve's setting for mixed frame rate media matches your source: if you're coming from Final Cut, select the Final Cut option, obviously; if you're coming from Premiere, choose "Resolve".
You’ll want to make sure sequence settings for resolution and frame rate match. Also verify that your sequence timecode in the color application matches that of the reference video.
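When checking that start timecodes line up, it's safer to compare in frames rather than as strings. This is a minimal sketch with made-up dictionaries standing in for the sequence and reference settings; it handles non-drop-frame timecode only, since drop-frame needs extra arithmetic.

```python
# Sketch: verify resolution, frame rate, and start timecode agree between
# the color timeline and the reference video. Non-drop-frame timecode only.

def tc_to_frames(tc, fps):
    """Convert HH:MM:SS:FF timecode to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def settings_match(seq, ref):
    """seq/ref are dicts with 'fps', 'resolution', and 'start_tc' keys."""
    return (
        seq["fps"] == ref["fps"]
        and seq["resolution"] == ref["resolution"]
        and tc_to_frames(seq["start_tc"], seq["fps"])
            == tc_to_frames(ref["start_tc"], ref["fps"])
    )

ok = settings_match(
    {"fps": 24, "resolution": (1920, 1080), "start_tc": "01:00:00:00"},
    {"fps": 24, "resolution": (1920, 1080), "start_tc": "01:00:00:00"},
)
```

If any of these three disagree, every comparison against the reference downstream will be off, so it's worth checking before anything else.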
Here’s a look at what translates between NLEs and Resolve. The list is impressive, even basic color corrections in FCP X will appear as primary grades in Resolve’s color page.
Once you’ve imported the XML into your color program, you need to check your import against your reference video. There are several ways to do this, including Resolve’s older method of designating media as reference media on import, but this is the simplest:
Open your imported XML timeline. Set the “Source” viewer on the left to “Offline Mode”. Now just drag your reference clip to the blank source viewer’s window. The coolest part is that now you can right click the Timeline viewer and see various methods of comparing the two, including the useful “difference” method where everything identical will appear black so anything you see indicates a mismatch. The same modes are available using wipes on the color page.
Certain programs (looking at you, reality TV) can really benefit from a hybrid approach to conform. It's normal for the lowest-quality, cheapest cameras to give you the most hassle in post. Reel name and timecode problems can cost you hours as you chase an accurate conform, and in my opinion it's simply not worth it. The biggest loss with the pre-conform method is that you don't get handles like you would with an XML. But if it saves you hours of conform work on the footage that looks the worst anyway, is it worth it? It's obviously your call, but here's a method I'm finding to be a fairly successful hybrid XML and EDL approach.
Things that don't "pre-conform" well via EDL generally include cross dissolves (because you need frames, or "handles", beyond what's given in the clip) and superimpositions; you can imagine how tough it is to color a picture-in-picture effect when both layers are married into one file. Things that don't conform well with the XML method are, as mentioned, the consumer camera formats, custom effects, some speed changes, and position and scaling anomalies. The hybrid solution plays to the advantages of both: export the project both ways, rely on the XML first, and if you hit weirdness, simply fall back to the EDL. Depending on workflow and file-size requirements, your project might benefit from one over the other, but that's the beauty and customizability of this system.
Back to Premiere
If all of this sounds a bit intimidating and you're thinking, "How in the world does this not go wrong?", the answer is: it does. And that's OK; it just means more time is required to communicate and track down missing pieces. That's why I wrote this article. It's quicker for me to sit down and type it out once so I only have to share a link instead of reiterating the same things in e-mails.
And of course I can't end without a Resolve plug. The entirety of what I've just written represents one of the main reasons I love Resolve so much. There's no XML, AAF, EDL, or OMF. There's no prepping a sequence to send it to color or sound, then receiving it back only to reapply everything before export. There's no requirement for picture lock before getting a sequence to sound or color. All I do is click a button and I'm working on audio, color, or VFX, and at any point, with a single click, I'm back to working on the edit.