The production sound you’ve captured makes up a surprisingly small percentage of a finished film’s soundscape. The work sound designers and foley artists perform is hugely underrated, and it’s often one of the things documentary or non-fiction films spend too little time considering. As with everything else in this course, “designing” the sound, rather than just lazily taking what you recorded, is key to success. Here’s a brief look at the elements commonly present in good sound design.
So here’s the process for working with sound:
1. Track organization: Get individual actors and their individual mics all on dedicated tracks. You’ll generally apply effects across an entire track, so this upfront organization is critical. Put the mono elements above the stereo ones; that’s why music and sound effects come at the bottom of the list, as they’re more likely to be stereo files. Each of these bullets could be multiple tracks. For example, “Dialog” could be 8 different tracks of dialog. That’s partially why this organization is so important.
2. Normalize the audio to get levels in the ballpark. This is often done during editorial so the editor has everything roughly loud enough to edit. As you know, modern recording generally leaves ample headroom, which means you’ll likely need to boost levels. Resolve (and Pro Tools) are both cool in that they let you adjust clip gain automation before the track level automation. “Peak normalizing” all your dialog clips to get peak loudness in the same ballpark of -2 dBFS or so usually works well. Just select your clips in the timeline, right click, and choose “Normalize Audio Levels”. In the dialog box that follows, go ahead and leave the mode set to “Sample Peak Program” and set the target level to which you want to “normalize” your clips. If you choose “Relative”, Resolve will consider all the clips you selected and apply one normalization value to the whole group. This means if you have three clips from the same interview, but in one of the clips the subject coughed, that loud audio spike will cause normalization to improperly pull the level of all the clips down, since it’s normalizing the audio based on the peaks. “Independent” is a better option if you want each of the selected clips to be analyzed and then normalized on its own. -2 dBFS is a pretty loud normalization, assuming you’re working with a dialog track as the primary audio source for your program. This will get the audio level’s peaks right up near clipping and iron out any massive discrepancies in levels between your various audio clips.
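Under the hood, peak normalization is just a gain calculation. Here’s a rough NumPy sketch of the idea (the function name and numbers are mine, not Resolve’s):

```python
import numpy as np

def peak_normalize(samples: np.ndarray, target_dbfs: float = -2.0) -> np.ndarray:
    """Scale a clip so its highest sample peak lands at target_dbfs."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples                      # silent clip; nothing to scale
    current_dbfs = 20 * np.log10(peak)      # current sample peak in dBFS
    gain_db = target_dbfs - current_dbfs    # boost (or cut) needed to hit target
    return samples * 10 ** (gain_db / 20)

# A clip peaking around -14 dBFS gets roughly +12 dB of gain:
clip = np.array([0.05, -0.2, 0.1])
normalized = peak_normalize(clip, target_dbfs=-2.0)
```

In these terms, “Relative” mode amounts to measuring one peak across the whole group of clips and applying that single gain everywhere, while “Independent” runs the calculation per clip.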
3. “Franken-edit” the audio for the best possible dialog. You’ll often need to chop up the audio based on performance or technical needs, and what sounds like a fluid sentence in the final product is actually a hodgepodge of several different takes (and possibly different microphones). Make sure each clip has a smooth transition or crossfade at its head and tail to avoid pops or distracting sounds. As with other audio edits, make sure there’s a constant room tone underneath your dialog. It will hide a multitude of sins.
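The crossfades your NLE draws at each cut are doing something like the following equal-power blend (a simplified sketch; real editors offer several selectable fade shapes):

```python
import numpy as np

def equal_power_crossfade(tail: np.ndarray, head: np.ndarray) -> np.ndarray:
    """Blend the tail of the outgoing clip into the head of the incoming one."""
    n = min(len(tail), len(head))
    t = np.linspace(0.0, 1.0, n)
    fade_out = np.cos(t * np.pi / 2)    # outgoing clip ramps down
    fade_in = np.sin(t * np.pi / 2)     # incoming clip ramps up
    # cos^2 + sin^2 = 1, so perceived loudness stays roughly constant
    # through the blend instead of dipping at the cut point.
    return tail[:n] * fade_out + head[:n] * fade_in
```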
4. Clip automation: First, per clip, use automation (rubber bands) to visually “compress” clips with excessive dynamic range. Again, this step can often be done in the editing workflow of any NLE. This is similar to the compression you’ll apply later, but it’s a way to manually even out loud and soft parts across the dialog.
5. Levels: This is a good stage to start thinking about levels. Pick the loudness standard you’re working to under Project Settings > Fairlight (-15 LUFS is a reasonable average for web content), then meter your mix to see how close you are to that standard. Reset the loudness meter, start it, play back your audio, and watch the integrated loudness measurement. A reading of -3.5, for example, indicates the mix is 3.5 dB below the target loudness. We’ll want to bring up the overall loudness of our mix without clipping our peaks, which brings us to the next step.
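The arithmetic behind that meter reading is simple subtraction. A tiny sketch (the function name and target are mine):

```python
def loudness_offset(measured_lufs: float, target_lufs: float = -15.0) -> float:
    """dB of makeup gain needed to bring integrated loudness up to target."""
    return target_lufs - measured_lufs

# A mix metering -18.5 LUFS against a -15 LUFS target needs +3.5 dB overall:
gain_needed = loudness_offset(-18.5)   # 3.5
```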
6. Effects are covered more thoroughly on the audio effects page, but a few of the most common ones are worth mentioning. Noise reduction is used to help separate speech from the noise floor. EQ will get your audio sounding well-balanced. Dynamics adjustments (e.g. a compressor) are used to change the dynamic range of the audio. They allow us to modify volume with more targeted precision than a clip- or track-wide gain.
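For intuition, a compressor’s static behavior (ignoring attack and release) is just a gain curve applied above a threshold. A sketch with illustrative default values of my own choosing:

```python
def compress_db(level_db: float, threshold_db: float = -20.0, ratio: float = 4.0) -> float:
    """Static curve of a downward compressor: levels below the threshold pass
    through untouched; levels above it are reduced by the ratio."""
    if level_db <= threshold_db:
        return level_db
    over = level_db - threshold_db          # dB above the threshold
    return threshold_db + over / ratio      # e.g. 4:1 keeps a quarter of the overage

# A -8 dBFS peak through this 4:1, -20 dB threshold comes out at -17 dBFS:
print(compress_db(-8.0))   # -17.0
```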
7. Music editing requires that you pull down or “duck” the level of the music when the dialog arrives overtop. This can be done manually with clip automation, with track automation, or by “side chaining” a compressor so it watches the dialog track and dips the music whenever dialog appears.
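Conceptually, a side-chained ducker watches the dialog’s level and pulls the music down whenever it crosses a threshold. A crude block-based sketch (a real compressor adds attack/release smoothing; all names and numbers here are mine):

```python
import numpy as np

def duck_music(music: np.ndarray, dialog: np.ndarray,
               threshold: float = 0.02, duck_gain: float = 0.25,
               win: int = 1024) -> np.ndarray:
    """Where the dialog's RMS level exceeds the threshold, pull the music
    down to duck_gain. No attack/release ramps, so real use would click."""
    n = min(len(music), len(dialog))
    out = music[:n].copy()
    for start in range(0, n, win):
        block = dialog[start:start + win]
        if np.sqrt(np.mean(block ** 2)) > threshold:   # dialog present here
            out[start:start + win] *= duck_gain
    return out
```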
8. Effects, foley, and ADR should be added and treated with whatever effects are necessary to get them to the right level and make them feel like they sit within the same sound space (e.g. how does reverb compare between the sound effect and the existing mix?). Reverb can be used to match a “dry” effect sound with the appropriate “echo” of the production environment, or to indicate the sound source’s proximity. EQ can help match production microphones or help a sound “sit” where it needs to in conjunction with other sounds.
All of these elements combine in what’s called a “mix”. The mixing process itself is generally divided among two- or three-person teams based on dialog, music, and effects. Again, you might not have the manpower to divide all these responsibilities, but knowing they exist is key to making sure you give each phase of the process its due diligence.
Getting all the levels correct in relation to one another is typically done in the “mixing” phase. It’s much easier to work with the various elements of Music (MX), Dialog (DX), and Sound Effects (SFX) if they are grouped together into what we call “stems”.
Afterward, your sound will be sweetened and tailored to the specifics of its distribution medium. Mastering engineers have all manner of tricks up their sleeves for bringing out the best in your audio. One method I’m grateful someone showed me was a simple “Ultramaximizer” plugin. I’d describe it as “HDR” for audio in that it sort of takes everything and makes it hyperreal. You can definitely push it too far, but for what’s basically a one-slider master it’s not bad. This can be set as an end-of-chain limiter so nothing in the mix is allowed to exceed whatever limit you set (-0.5 dB, for example).
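In spirit, an end-of-chain limiter enforces a ceiling like this naive hard clip (real limiters such as the Ultramaximizer use look-ahead gain reduction instead, which avoids the distortion that clamping introduces):

```python
import numpy as np

def hard_limit(samples: np.ndarray, ceiling_dbfs: float = -0.5) -> np.ndarray:
    """Clamp every sample to the ceiling. A real limiter would instead
    ride the gain down ahead of peaks so the waveform isn't squared off."""
    ceiling = 10 ** (ceiling_dbfs / 20)     # -0.5 dBFS is about 0.944 linear
    return np.clip(samples, -ceiling, ceiling)
```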
That said, it’s not just peak audio levels we’re concerned with. Modern distribution platforms will watch your audio’s loudness as an average. Bouncing your mix to a new track and running the whole thing through the integrated loudness meter (yes, you’ll have to wait for the whole thing to play) is a great way to check the integrated loudness of the entire mix.
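True integrated loudness follows the ITU-R BS.1770 spec (a K-weighting filter plus level gating), which is exactly why the meter has to run over the whole program. Purely to illustrate the “average over everything” idea, here’s an unweighted RMS stand-in; it is not a real LUFS measurement:

```python
import numpy as np

def naive_program_level(samples: np.ndarray) -> float:
    """Whole-program RMS level in dB. NOT real LUFS: BS.1770 adds a
    K-weighting filter and gates out quiet passages, but the idea of
    averaging across the entire mix is the same."""
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(rms) if rms > 0 else float("-inf")
```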