First, delete any unused tracks. I advocated an “industry-standard” track order earlier in the course. If you didn’t follow it from the beginning, rearrange your tracks before sending to sound. It may sound silly, but it will make you look much more professional. Also remember to label the tracks and group any mono and stereo sources next to each other in each category. Ensure each track is stereo or mono (e.g. avoid using Premiere’s “adaptive tracks”). From top to bottom use this order:
Watch out for muted audio clips. Instead of using the NLE’s “mute” or “disable” feature, delete what you don’t really need, or at the very least pull the clip gain down, since that should transfer in the AAF. Sometimes volume automation is used to fade in or out of a clip rather than using a transition. This can cause problems, so keep that automation within the clip and use only dedicated transition tools on either end of the clip.
If you’re working on a bigger narrative project, there’s a bit of additional complexity because you need to be extra careful to maintain metadata and you’ll likely have to do a conform. On small projects you may not even need to conform the audio. It’s worth understanding the overall workflow:
When offline files were created for editorial, the assistant editor or DIT likely synced them to a mix track from the sound recorder. This is a mixdown of all the individual mics, and while I believe it should only be used for dailies, it’s sometimes the only option given to editorial. That means your various boom mics, lav mics, plant mics, etc. are all on one audio track and you can’t separate them individually. For the purposes dailies serve, this isn’t a problem. For the progressive editor who wants maximum control, however, it’s useful to sync to the full sound files with all of their individual channels. Modern NLEs can handle the large number of tracks coming from production sound mixers and display them in convenient ways. I think it’s best to leave that information available to the editor rather than simply syncing a mixdown, though many pros find it too cumbersome to deal with in an NLE and will disagree. Resolve is excellent at reading and displaying iXML, a metadata format for carrying track names from the audio field recorder. While those names show up great in Resolve, only in Resolve 16 has Blackmagic finally allowed synced audio filenames to export in an AVID AAF. This has been a weak point in using Resolve to deliver files for AVID editorial in the past.
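If you’re curious what Resolve is actually reading when it displays those track names, here’s a minimal Python sketch that walks the RIFF chunks of a broadcast WAV and pulls the per-channel names out of the iXML chunk. The filename is hypothetical and the parser is deliberately simple (it won’t handle RF64 files or monophonic split recordings), so treat it as an illustration of where the metadata lives rather than a production tool.

```python
# Minimal sketch: pull the iXML track-name metadata out of a broadcast WAV file.
# Assumes a standard RIFF/WAVE file from a field recorder with an embedded "iXML" chunk.
import struct
import xml.etree.ElementTree as ET

def read_ixml_track_names(path):
    with open(path, "rb") as f:
        riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave != b"WAVE":
            raise ValueError("Not a standard RIFF/WAVE file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                return []                      # reached end of file, no iXML chunk found
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            data = f.read(chunk_size)
            if chunk_size % 2:                 # RIFF chunks are word-aligned
                f.read(1)
            if chunk_id == b"iXML":
                root = ET.fromstring(data.rstrip(b"\x00"))
                # TRACK_LIST/TRACK/NAME is where recorders store per-channel names
                return [name.text for name in root.findall(".//TRACK_LIST/TRACK/NAME")]

# Hypothetical poly file from set; output might look like ['Boom', 'Lav 1', 'Lav 2']
print(read_ixml_track_names("A001_T01.wav"))
```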
A problem with editing using mixdowns is that you’ll eventually have to replace them with the sound recordist’s original files. This is conforming, like we talked about previously. On a bigger shoot, it’s usually the dialog editor who gets the fun job of ensuring all the original “split” sound files are correctly imported in place of the mixdown files. The typical “industry-standard” workflow has been to export a reference movie of the picture-locked edit along with an AAF file linking to the sound recorder’s original files, but it’s the sound department’s job to do the conform.
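To make the conform a little more concrete, here’s a rough sketch of the matching logic in Python. The data structures are hypothetical stand-ins for what you’d pull out of the AAF and the recorder’s broadcast WAV metadata (the bext TimeReference gives each file’s start time); real conform tools also lean on scene/take names and channel metadata rather than timecode alone.

```python
# Rough sketch: match each edited mixdown clip to the original recorder file
# that covers its source timecode range. All names and fields are illustrative.
from dataclasses import dataclass

FPS = 24  # assumed project frame rate

@dataclass
class EditEvent:
    clip_name: str   # name of the mixdown clip in the edit
    src_in: int      # source in point, in frames since midnight
    src_out: int     # source out point, in frames since midnight

@dataclass
class PolyFile:
    filename: str    # original multitrack file from the sound recorder
    start: int       # first frame of the recording (derived from bext TimeReference)
    end: int         # last frame of the recording

def conform(events, originals):
    """For each event, find the original recorder file that covers its timecode range."""
    matches = {}
    for ev in events:
        for orig in originals:
            if orig.start <= ev.src_in and ev.src_out <= orig.end:
                matches[ev.clip_name] = orig.filename
                break
        else:
            matches[ev.clip_name] = None  # no coverage: flag for manual relinking
    return matches
```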
Even if you edit with the original sound files, don’t forget to give the sound editors all the original recorded production sound. It’s helpful to provide more than just a media-managed project of the final edit with handles, because they’ll need to pull things like room tone, ambience, and potentially alternate takes.
I’ll talk more about going from Premiere to Pro Tools in my examples, but see the similar process for AVID to Pro Tools here.
The audio will be exported to a sound professional via an AAF. This format replaces the older OMF and is superior, mainly in terms of passing metadata like track names and volume automation. Here is an example of the AAF exporter in Premiere with the settings I like to use.
I don’t include the “Mixdown” video for a couple of reasons. As mentioned above, I like to specify my codec, and in this case Premiere auto-defaults to an MXF-wrapped DNxHD 175 export. Also, some older Pro Tools installations won’t import correctly if the mixdown video is included in the AAF, so that’s another reason to avoid it.
Deleting all the video tracks before exporting the AAF is generally a good idea. Your rendered reference video is all you need, and you’ll get errors in the AAF export for whatever in the video isn’t compatible. I don’t like extraneous errors because they keep me from noticing the real errors hidden among them.
I usually like to include the complete audio file in case sound needs it for room tone or a “Franken-edit”. Depending on the project, this can drastically increase the file size. If you’re not including the entire original audio file, you have the option of exporting a portion of it with handles. Handles of 5–10 seconds are a good balance between a smaller file size and enough excess material for the sound editor to pull from.
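To put that trade-off in rough numbers, here’s a quick back-of-the-envelope calculation. The clip counts, take lengths, and 48 kHz / 24-bit mono format are made-up assumptions purely for illustration:

```python
# Quick back-of-the-envelope: referenced audio size with handles vs. complete original files.
# Assumes 48 kHz / 24-bit mono WAV; clip counts and durations are invented for illustration.
BYTES_PER_SEC = 48_000 * 3           # 48 kHz * 3 bytes per sample, per mono channel

def size_mb(seconds, channels=1):
    return seconds * channels * BYTES_PER_SEC / 1_000_000

clips = 300                          # dialog clips in the locked cut
avg_clip_sec = 6
handle_sec = 10                      # 10-second handles on each side

with_handles = clips * size_mb(avg_clip_sec + 2 * handle_sec)
full_takes   = clips * size_mb(90)   # assume each clip comes from a ~90-second take

print(f"Trimmed with handles: ~{with_handles:,.0f} MB")
print(f"Complete audio files: ~{full_takes:,.0f} MB")
```

Even with generous 10-second handles, the trimmed export comes out at roughly a third of the size of shipping every complete take under these assumptions.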
I also prefer to *not* check “embedded audio”, because I like to see the audio files and the directory they came from, and because embedding can sometimes compromise metadata. This does add a small step when your sound team imports the AAF into Pro Tools, so they may request an embedded AAF. On bigger film projects there’s usually more of an insistence on relinking to the sound recorder’s original files (since those will actually have an abundance of useful metadata), while on smaller projects (e.g. web spots) the size of the project and the priority on speed often favor an embedded AAF.
Audio effects can be rendered, and the original audio plus the render will be included. Why? Because an editor is usually skilled enough to apply an effect for the sake of creative review or editorial utility, but they generally won’t spend the time on it that a sound professional would. For example, the editor may automate a reverb to indicate physical or emotional distance. The effect will be good enough for a producer or creative to approve the idea, but you want to leave the fine-tuning of such effects to the person you’re paying for that skill. Examples of effects for editorial utility include volume automation or simple gain adjustments so that the edit is watchable. You have the option of rendering these effects and sending both audio files, but often I’ll leave the effects unrendered and simply rely on the reference video to give the sound professional an idea of what’s wanted.