
Export for Sound & Color

This is another section of the course made public because it saves me time if I share this information with others. If you’re working as a one-man-band, or using Resolve, you can ignore much of this section.

I spend a lot of time preparing projects from editorial for delivery to sound editors and colorists. Getting a project from an NLE into an audio or color application is varying degrees of painful. No two applications have identical features, and there's no single rock-solid format for translating between them. Remember, though, that it's much easier for the editor or post supervisor to reconcile problems before sending anything to color or sound: they have access to the entire project's assets as well as familiarity with the project. Sound and color have no idea what you're sending them, and though they may unhappily spend just as much time blindly conforming the project on their end, they will happily bill you for it. Here are some tips to keep both of them happy.

Reference Video

Before doing anything, it’s important to export a reference video of picture and sound as they’ve been approved by the creative team, client, or you yourself. We’ll be doing a lot of track modifications so make sure your reference is exported and that you’ve duplicated your sequence before we begin.

Your reference video needs to include the audio and all effects done in editorial. This rough mix will be very useful as a reference to the sound team. Also include a timecode burn-in on your reference video file.

You don't have to use a specific codec, but remember that this file is only for reference, so you can keep it small. I'm often transferring files online, and I prefer DNxHR LB 8-bit, which looks fantastic for its low bit rate. If you're more used to ProRes than Avid codecs, this is closer to ProRes Proxy, running at a minimal ~36 Mbps at 1080p (essentially the same data rate as DNxHD 36). You could get a smaller file using one of the temporally-compressed codecs, but they're harder on the system and the reduction in file size isn't worth it.
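If you want a back-of-the-envelope sense of what that bitrate means for transfer, the arithmetic is simple (this sketch counts the video stream only; audio adds a little on top):

```python
def file_size_mb(video_mbps: float, duration_s: float) -> float:
    """Approximate video stream size in megabytes: megabits per second
    times seconds, divided by 8 bits per byte."""
    return video_mbps * duration_s / 8

# A 10-minute reference at ~36 Mbps:
print(file_size_mb(36, 10 * 60))  # 2700.0 MB, i.e. about 2.7 GB
```

Small enough to upload overnight on most connections, which is the whole point of a reference file.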


Track Order

First, delete any unused tracks. I advocated an “industry-standard” track order earlier in the course. If you didn’t follow it from the beginning, rearrange your tracks before sending to sound. It may sound silly, but it will make you look much more professional. Also remember to label the tracks and group any mono and stereo sources next to each other in each category. Ensure each track is stereo or mono (e.g. avoid using Premiere’s “adaptive tracks”). From top to bottom use this order:

  • Voiceover
  • Dialog
  • Production Sound
  • SFX
  • Music

Timeline Prep

Watch out for muted audio clips. Instead of using the NLE's “mute” or “disable” feature, delete what you don't really need, or at the very least pull the clip gain down, since that should transfer in the AAF. Sometimes volume automation is used to fade a clip in or out rather than using a transition. This can cause translation problems, so keep that automation within the clip and use only dedicated transition tools on either end of the clip.

Narrative Workflow

If you’re working on a bigger narrative project, there’s a bit of additional complexity because you need to be extra careful to maintain metadata and you’ll likely have to do a conform. On small projects you may not even need to conform the audio. It’s worth understanding the overall workflow:

When offline files were created for editorial, the assistant editor or DIT likely synced them to a mix track from the sound recorder. This is a mixdown of all the individual mics, and while I believe it should only be used for dailies, it's sometimes the only option given to editorial. This means your various boom mics, lav mics, plant mics, etc. are all on one audio track and can't be individually separated. For the purpose dailies serve, this isn't a problem. For the editor who wants maximum control, however, it's useful to sync to the full sound files with all of their individual channels. Modern NLEs can handle the large number of tracks coming from production sound mixers, and can display them in convenient ways. I think it's best to leave that information available to the editor rather than simply syncing a mixdown, though many pros find it too cumbersome to deal with in an NLE and will disagree.

Resolve is excellent at reading and displaying iXML, a metadata format that carries track names from the audio field recorder. While the names show up great in Resolve, only in Resolve 16 is Blackmagic finally allowing synced audio filenames to export in an Avid AAF. This has been a weak point in using Resolve to deliver files for Avid editorial in the past.

A problem with editing using mixdowns is that you'll eventually have to replace them with the sound recordist's original files. This is conforming, like we talked about previously. On a bigger shoot, it's usually the dialog editor who gets the fun job of ensuring all the original “split” sound files are correctly imported in place of the mixdown files. The typical “industry-standard” workflow has been to export a reference movie of the picture-locked edit along with an AAF linking to the sound recorder's original associated files; the conform itself is the sound department's job.

Even if you edit with the original sound files, don't forget to give the sound editors all the original recorded production sound. It's helpful to provide more than just a media-managed project of the final edit with handles, because they'll need to pull things like room tone, ambience, and potentially alternate takes.

I'll talk more about going from Premiere to Pro Tools in my examples, but see the similar process of Avid to Pro Tools here.


The AAF

The audio will be exported to a sound professional via an AAF. This format replaces the older OMF and is superior, mainly in passing metadata like track names and volume automation. Here is an example of the AAF exporter in Premiere with the settings I like to use.

I don't include the “Mixdown” video for a couple of reasons: as mentioned above, I like to specify my codec, and in this case Premiere auto-defaults to an MXF-wrapped DNxHD 175 export. Some older Pro Tools installations also won't import correctly if the mixdown video is included in the AAF, so that's another reason to avoid it.

Best Practices

Deleting all the video tracks before exporting the AAF is generally a good idea. Your rendered reference video is all you need and you’ll get errors in the AAF based on what’s not compatible in the video. I don’t like extraneous errors as they keep me from noticing the real errors hidden between them.

I usually like to include the complete audio file in case sound needs it for room tone or a “Franken-edit”. Depending on the project, this can drastically increase file size. If you're not including the entire original audio file, you have the option of exporting a portion of it with handles. Handles of 5–10 seconds are a good balance between a smaller file size and enough excess material for the sound editor to pull from.
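To make the handle math concrete, here's a sketch of what the media manager computes for each clip (a hypothetical helper; frame numbers are source-relative, and your NLE does this for you when you export with handles):

```python
from typing import Optional

def export_range(in_f: int, out_f: int, fps: int, handle_s: float = 5.0,
                 first_f: int = 0, last_f: Optional[int] = None) -> tuple:
    """Expand a clip's used range by handle_s seconds on each side,
    clamped to the media's actual first and last frames."""
    handle = round(handle_s * fps)
    start = max(first_f, in_f - handle)
    end = out_f + handle if last_f is None else min(last_f, out_f + handle)
    return (start, end)

# 5-second handles at 24 fps add 120 frames to each side:
print(export_range(1000, 1500, 24))  # (880, 1620)
```

The clamping matters: if a clip was used right from the head of the source file, there simply is no extra material to export on that side.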

I also prefer *not* to check ‘embedded audio’, because I like to see the audio files and the directory they came from, and because embedding can sometimes compromise metadata. This requires a small extra step for the sound team importing the AAF into Pro Tools, so they may request an embedded AAF. On bigger film projects there's usually more of an insistence on relinking to the sound recorder's original files (since those have an abundance of useful metadata), while on smaller projects (e.g. web spots) the project's size and the priority on speed often favor an embedded AAF.

Audio effects can be rendered, and the original audio plus the render will be included. Why? Because an editor is usually skilled enough to apply an effect for the sake of creative review or editorial utility, but they generally won't spend the time on it that a sound professional would. For example, the editor may automate a reverb to indicate physical or emotional distance. The effect will be good enough for a producer or creative to approve the idea, but you want to leave the fine-tuning of such effects to the person you're paying for that skill. Examples of effects for editorial utility include volume automation or simple gain adjustments so the edit is watchable. You have the option of rendering these effects and sending both audio files, but often I'll leave the effects unrendered and simply rely on the reference video to give the sound professional the idea of what's wanted.


Color

This one's a bit trickier than sound, and there is a lot that can go wrong.

Reference Video

As with sound, it's imperative that you export a reference video so that the colorist knows exactly what your intent is. This video will be checked against the conformed timeline to verify the translation from NLE to color software.

The reference is both a creative and technical reference. Not only does it serve to check accuracy of the translation of a timeline between software, but it serves as a guide to the creative decisions established before the sequence got to color. You’d be surprised how often the colorist has next-to-no communication with anyone from the creative team. If a producer approved a dark, desaturated look that the editor applied for a viewing session, the colorist will see that in the reference (but not in the actual source media to be colored!).

Timeline Prep

Collapse everything down to as few tracks as possible. Some things, like composites, will require clip stacking, but outside of that, keep your track count down and delete empty tracks. Delete all of your audio tracks.


Baking

“Baking” simply refers to rendering out anything that won't transfer between your editing NLE and the color application. Bake your effect clips and re-import them into the timeline, replacing the old clips.


The most common format for timeline interchange is XML. That said, XML isn't what most people think it is: it's not one specific format for doing one specific thing, but a standardized method of representing information that can be used for a huge variety of tasks. Some XML standards work pretty well, like the FCP XML. The EDL is simply an “Edit Decision List”, and it's best used in the following method.

The Baked, Notched, Preconformed Method

This goes by many names, but the idea is this: rather than fighting the lack of consistency between applications, just render everything into one file and let the colorist slice it up. The obvious advantage is that nothing is lost in translation. If you employ this method, here are some things to consider:

  • Do NOT “bake in” titles or your own color. I frequently get projects where an editor has made an admirable effort at color but made the footage quite unusable for color grading.
  • Use your brain and watch for other things that might cause problems if ‘baked in’. For example, a composite shot where both elements will need to be colored separately will need to be exported separately.
  • Export an EDL instead of an XML. An EDL is a much older format and the information it transmits is much more primitive. However, it’s more standardized, and in this case all the fancy stuff is included in the render. All we really need is the edit points. Another good reason for an EDL is that Resolve’s ability to “preconform” or “notch” the baked file into its individual shots relies on use of an EDL.
  • You'll have to watch your transitions. The color applied to a clip also has to “fade” with the transition, which means the colorist ideally needs handles on any clip with a transition. Sometimes the color itself can just be keyframed or dissolved, but other clips will require more work. It's often easier to pull every clip with a transition up to a V2 track in the NLE, extend it with appropriate handles so the colorist can grade the full length, and then reapply the transitions after the round trip.
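To illustrate how primitive (and therefore robust) the EDL really is, here's a sketch that parses one CMX3600-style event line; the reel name and timecodes are made-up example values:

```python
import re

# event#  reel  track  cut-type, then source in/out and record in/out timecodes
EVENT = re.compile(
    r"^(\d+)\s+(\S+)\s+(\S+)\s+(\S+)\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})"
)

line = "001  A001C003 V     C        01:23:45:10 01:23:50:00 01:00:00:00 01:00:04:14"
num, reel, track, cut, src_in, src_out, rec_in, rec_out = EVENT.match(line).groups()
print(reel, rec_in, rec_out)  # A001C003 01:00:00:00 01:00:04:14
```

That's essentially everything the notching process needs: where each cut falls on the record side of the rendered file.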

Round trip?

Firstly, you need to know what you want back from the colorist. It's traditional for the colorist to render individual clips and deliver those renders and an XML back to the post supervisor. The post supervisor then imports the colored timeline and adds graphics on top, reapplies transitions and custom effects, etc. I've recently found, however, that on some projects it's becoming much simpler for the colorist to also act as the finishing artist and be the one responsible for delivering the final render. This isn't going to work in every situation, and most post supervisors will understandably want to maintain control of finishing in their NLE, but for non-broadcast work I've found it sometimes helpful to just create the deliverable myself.

Beware of application-specific effects

You’ll need to render or ‘bake’ any effect specific to your application, just like I recommended for color.

Collapse multi-cam sequences

Multi-cam sequences that aren't collapsed may not translate at all; I've received projects with massive gaps in the timeline where nothing was present because of this.

Keep an eye on keyframes

Keyframe interpolation won't always be identical between software, so decide how frame-accurate you need things to be.

Speed Changes

Speed changes often don’t transfer perfectly between software. FCP X to Resolve has some of the best support, including not only linear and variable speed changes, but also keyframe interpolation and frame blending vs. optical flow methods.


Scaling

Scaling can be a surprisingly tricky translation because each software deals with it differently. I prefer to leave Premiere's “Set to Frame Size” option enabled while editing in Premiere so as not to lose resolution when moving around a scaled clip. If the scaling is then handled in Premiere's motion panel, Resolve's “Input Scaling” simply needs to be set to “Center crop with no resizing”. You'll also want to make sure the setting Resolve uses for mixed frame rate media matches your NLE: if you're coming from Final Cut, select the Final Cut option; if you're coming from Premiere, choose “Resolve”.

Sequence Settings

You’ll want to make sure sequence settings for resolution and frame rate match. Also verify that your sequence timecode in the color application matches that of the reference video.
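Timecode mismatches are easiest to catch by converting both start timecodes to absolute frame counts and comparing. A minimal non-drop-frame conversion (drop-frame rates like 29.97 need extra handling and aren't covered here):

```python
def tc_to_frames(tc: str, fps: int) -> int:
    """Convert HH:MM:SS:FF (non-drop-frame) to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

# A sequence starting at the conventional 01:00:00:00, at 24 fps:
print(tc_to_frames("01:00:00:00", 24))  # 86400
```

If the reference and the conformed timeline report different start frames, fix that before judging anything else about the translation.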

Here's a look at what translates between NLEs and Resolve. The list is impressive; even basic color corrections in FCP X will appear as primary grades in Resolve's color page.


Once you’ve imported the XML into your color program, you need to check your import against your reference video. There are several ways to do this, including Resolve’s older method of designating media as reference media on import, but this is the simplest:

Open your imported XML timeline. Set the “Source” viewer on the left to “Offline” mode, then drag your reference clip into the blank source viewer. The coolest part is that you can now right-click the timeline viewer and choose among several methods of comparing the two, including the useful “difference” mode, where everything identical appears black, so anything you see indicates a mismatch. The same modes are available using wipes on the color page.
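Under the hood, “difference” mode is just per-pixel subtraction: identical pixels cancel to black. A toy grayscale illustration of the idea:

```python
def difference(frame_a, frame_b):
    """Per-pixel absolute difference of two same-sized grayscale frames;
    identical pixels come out 0 (black)."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

ref     = [[120, 130], [140, 150]]  # reference frame (toy 2x2 pixel values)
conform = [[120, 130], [140, 180]]  # conformed frame, one pixel off
print(difference(ref, conform))  # [[0, 0], [0, 30]] -- only the mismatch shows
```

That's why any visible edge, flash, or ghost in difference mode means the conform doesn't match the reference at that frame.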


If all of this sounds a bit intimidating and you’re thinking, “How in the world does this not go wrong”, the answer is, it does. And that’s ok, it just means more time is required to communicate and track down missing pieces. That’s why I wrote this article. It’s quicker for me to sit down and type it out once so I only have to share a link instead of reiterating the same things in e-mails.

And of course I can’t end without a Resolve plug. The entirety of what I’ve just written represents one of the main reasons I so love Resolve. There’s no XML, AAF, EDL, OMF. There’s no prepping of a sequence to send it to color or sound, then receive it back only to reapply everything before export. There’s no requirement for picture lock before getting a sequence to sound or color. All I do is click a button and I’m working on audio, color, or VFX and at any point, with a single click I’m back to working on the edit.
