The world of video production is full of complications that aren’t obvious at first glance. One of these is the fact that we often don’t shoot video files and then edit those files directly. In the days of film, the actual negative never touched the cutting-room floor; today, even in the digital era, we often create new video files from every clip we’ve shot and edit from those.
First of all, you definitely can edit the camera originals directly, and if that works well for you there may be no need to reinvent the wheel. There’s no right or wrong way, but it’s worth understanding why the professional workflow has historically used separate files for editing.
The on-set DIT (or you) will typically create edit-friendly files rather than editing directly off the camera-original media. These files are smaller and easier to edit, though inferior in quality. Some people dismiss the proxy workflow because their computer is fast enough to play their camera’s native media format, but proxies aren’t only about playback speed. The smaller file size can be essential when working on a huge project, say, a reality TV show with a 500:1 shooting ratio, or when working from shared storage. And just because your computer doesn’t drop frames when you hit play doesn’t mean it’s up to the rigorous demands of editing.

I like to use the JKL shuttle keys to test how well a machine really handles a long-GOP codec. Pick a random spot in a clip, shuttle forward with ‘L’, then immediately press ‘J’ once or twice. Decoding forward is easy; playing in reverse at double speed from a random point in the timeline is what stresses playback, since long-GOP codecs store most frames as differences from neighboring frames. You’ll shuttle like this constantly while editing, and if there’s any lag whatsoever it will hamper your efficiency.
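Proxy generation outside the NLE is often done with a batch transcode tool such as ffmpeg. As a minimal sketch (assuming ffmpeg’s `prores_ks` encoder, where profile 0 is ProRes 422 Proxy; the file names are hypothetical), here is one way to build such a command per camera original:

```python
import pathlib
import shlex

def proxy_command(src: pathlib.Path, out_dir: pathlib.Path) -> str:
    """Build an ffmpeg command that transcodes a camera original
    into a ProRes Proxy .mov, keeping audio as uncompressed PCM."""
    dst = out_dir / (src.stem + ".mov")
    args = [
        "ffmpeg", "-i", str(src),
        "-c:v", "prores_ks",   # ffmpeg's ProRes encoder
        "-profile:v", "0",     # profile 0 = ProRes 422 Proxy
        "-c:a", "pcm_s16le",   # leave audio uncompressed
        str(dst),
    ]
    return shlex.join(args)

# Hypothetical camera-original clip name
print(proxy_command(pathlib.Path("A001C003.mxf"), pathlib.Path("proxies")))
```

Note that the proxy keeps the source’s file stem, so a timecode- or reel-based relink is still possible later regardless of what the NLE expects file names to look like.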
Another reason we don’t typically edit media straight from the camera has to do with dual-system sound. On bigger productions, sound is recorded separately from picture, which means that at some point before editing you’ll want to sync the two, since the camera-original files don’t contain the production audio. You can associate the production sound with the camera originals inside an NLE as you sync, and if you’re handling all of post-production and editorial yourself this is fine: the sync association lives in the NLE. Often, however, the syncing is done in software entirely different from the editor’s NLE, possibly by a DIT during production. Creatives will want to see “dailies” not long after shooting, and they’ll need the production audio in that review process. A mixdown sent to camera can often suffice, but if a DIT is syncing for dailies it makes a lot of sense to generate the offline edit files at the same time. In that case the sync “transfers” between applications because it’s embedded in the new offline media files rather than simply being an association made by software. These offline edit files nowadays serve as the “proxy” files for the online media.
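When picture and sound are jam-synced to the same timecode source, the sync offset is just arithmetic on the two start timecodes. A minimal sketch, assuming non-drop-frame timecode and a whole-number frame rate (drop-frame and fractional rates need more care):

```python
def tc_to_frames(tc: str, fps: int) -> int:
    """Convert non-drop-frame HH:MM:SS:FF timecode to a frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def sync_offset_seconds(picture_tc: str, sound_tc: str, fps: int) -> float:
    """Seconds into the sound file where the picture clip begins,
    assuming both devices were jam-synced to the same timecode."""
    return (tc_to_frames(picture_tc, fps) - tc_to_frames(sound_tc, fps)) / fps

# Picture rolled 10 seconds after the sound recorder at 24 fps
print(sync_offset_seconds("01:00:10:00", "01:00:00:00", 24))  # → 10.0
```

This is the core of what sync software does automatically; real tools also handle drift, drop-frame timecode, and waveform-based fallbacks when timecode is absent.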
You get to choose whether or not this offline/online sort of edit fits your workflow.
Proxies can be generated in camera, by a DIT, by an assistant editor, or even in the NLE itself. In the NLE case, the proxy’s aim is primarily to aid playback performance. Note that “proxy” means different things based on context. In Resolve, it can be a mode that simply plays your media back at a lower resolution, while the more typical “offline” file proxy is called “optimized media.” In Premiere, you generate proxies and it creates new files with “proxy” appended to the file name. This can cause problems later on; a more robust approach matches on timecode rather than file name.
Generating these Resolve-managed proxy files, known as optimized media, is simple: right-click clips in the media bin and tell Resolve to convert them to an edit-friendly format on your hard drive (you specify the format and location in the project settings). You toggle use of the optimized media on or off, much as you would in Premiere, via the “Playback” menu item “Use Optimized Media If Available.”
After editorial is complete, however, the “offline” picture from the proxies and the “mixdown” sound will need to be matched back up with the original files, so the sound and color teams have the highest-quality originals to work with in the “finishing” phase. At that point we’re concerned with preserving the maximum visual quality of the camera originals rather than optimizing our editing experience. We now need to “conform” the offline media back to the original high-quality media as we enter the “online” phase.
There are two features in Resolve that make this easy:
The simplest is to right-click the media in any timeline and uncheck the conform lock option. Now, if other media with matching timecode is present in that same Resolve project, you’ll see a little “!” symbol alerting you to the match. Click it on any clip to conform that clip to the other piece of media. You can also do this in bulk using File > Reconform from Bins, where you pick the bin containing the online media to conform the entire timeline at once.
There’s a very good Frame.io article here that will give you an idea of best practices for avoiding conform issues, since it’s a process that can easily go wrong.
You should now understand the importance of maintaining accurate, matching timecode across your offline and online files. Even though the file names, codec, and format differ, timecode is the link between the two. So what if multiple clips have matching timecode? A conform usually matches on timecode plus another piece of metadata called the “reel name.” Resolve gives you fantastic control over which parameters it considers when matching up clips. If reel names aren’t present (as happens frequently with consumer cameras), they can be generated from the media’s other parameters using the “Assist using reel names from the:” options.
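The matching logic above can be sketched in a few lines. This is not Resolve’s actual implementation, just an illustration of why (reel name, start timecode) is a sturdier key than a file name; the clip dictionaries and file names are hypothetical:

```python
def conform_key(clip: dict) -> tuple:
    """Match key: reel name plus start timecode, ignoring
    file name, codec, and format entirely."""
    return (clip["reel"], clip["start_tc"])

def conform(timeline_clips: list, online_media: list) -> list:
    """For each offline clip in the timeline, find the online
    original sharing its reel name and start timecode."""
    index = {conform_key(m): m for m in online_media}
    return [index.get(conform_key(c)) for c in timeline_clips]

# Offline proxy and online original: different names and codecs,
# same reel and timecode
offline = [{"reel": "A001", "start_tc": "01:00:00:00", "file": "A001_proxy.mov"}]
online  = [{"reel": "A001", "start_tc": "01:00:00:00", "file": "A001C001.braw"}]
print(conform(offline, online)[0]["file"])  # → A001C001.braw
```

Had the match relied on file names, the “_proxy” suffix (or any renaming in between) would have broken the relink; the timecode-and-reel key survives it.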