If you recall, in production I advise recording at an average level of around -18 dBFS, with peaks around -12 dBFS. These conservative levels are practical if you're able to record at 24-bit or higher. The exception: if you know your source has limited dynamic range, record hotter, since you don't risk clipping.
In post, those dBFS meters matter less than LUFS, except for watching that the signal never hits 0 on the scale; a limiter at the end of the chain can prevent that. Some software, like Apple's Logic, processes audio in floating point, meaning you can't actually clip inside the mixer.
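A tiny sketch of why floating-point engines behave this way (plain Python, illustrative values only): a sample that exceeds full scale survives intact as a float, but gets clamped the moment it's quantized to fixed point, e.g. 16-bit, on export.

```python
def to_int16(sample: float) -> int:
    """Quantize a float sample (full scale = +/-1.0) to 16-bit,
    clamping anything beyond 0 dBFS -- this is where clipping happens."""
    clamped = max(-1.0, min(1.0, sample))
    return int(round(clamped * 32767))

hot = 1.5             # an "over" sample -- fine inside a float engine
print(hot)            # the float pipeline keeps the value intact: 1.5
print(to_int16(hot))  # clipped to 32767 on conversion to fixed point
```

This is why you can pull a fader down after an internal overload in a float engine and recover the signal cleanly, but must still control peaks before the final render.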
Some plugins expect levels entering at around -18 dBFS to work correctly, so it's not a bad idea to set every track up near this level in preparation for the mix.
Master no louder than -9 LUFS short-term at the loudest moments (with true peaks no higher than -1). Integrated loudness is an average across the entire song, which is of limited use because songs with lots of quiet parts will skew the average downward. BUT most online streaming platforms want your integrated loudness between -12 and -14 LUFS. Decide how loud to master the loudest part of the music, then match the softer parts to it by ear. The broadcast standard is -23 LUFS integrated.
Loudness Units relative to Full Scale (AKA LKFS)

An industry push to keep perceived audio levels "normalized" resulted in LUFS. Integrated LUFS is quite similar to RMS, just frequency-weighted based on human perception (not unlike gamma in video). So where RMS conveys the true power of the signal, LUFS conveys perceived volume. Ever heard an ad that was way louder than the program around it? That's the loudness war.
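A rough illustration of the RMS half of that comparison (plain Python; a real LUFS meter adds K-weighting filters and gating per ITU-R BS.1770, which this sketch deliberately omits):

```python
import math

def rms_dbfs(samples):
    """Unweighted RMS level of float samples (full scale = 1.0), in dBFS.
    LUFS runs a similar mean-square measurement, but only after
    K-weighting the signal, so a meter will read somewhat differently."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_square)

# A full-scale sine wave has an RMS of 1/sqrt(2), i.e. about -3 dBFS:
sine = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
print(round(rms_dbfs(sine), 1))  # -> approximately -3.0
```

The point of the weighting is that two signals with identical RMS can have very different perceived loudness (e.g. a bass rumble versus midrange speech); LUFS corrects for that.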
-24 LUFS is the ATSC (US broadcast) standard. Curtis from LLS recommends targeting around -16 LUFS for web deliverables. Spotify, for example, will normalize to -14 LUFS. Master no louder than -9 LUFS short-term, with true peaks no higher than -1. There's an awesome tool online to preview what YouTube, Spotify, Pandora, etc. will do to the loudness of a file you upload.
LUFS is really quite similar to RMS when you consider that both are averages, and 1 LU is close to 1 dB.
Having all tracks at a similar loudness makes it easier to mix. Audition allows you to normalize all clips to a certain loudness; Resolve does not. It can be nice to aim to keep your individual tracks around -18 dBFS (= 0 dBVU) with peaks around -10 dBFS for mixing. This puts them at the level plugins expect, and you can then balance them relative to each other in the mix. Make sure to leave headroom on all your faders: the loudest part of your combined audio should peak around -4 to -6 dBFS on the stereo output.

True-peak metering takes into account the fact that when your stair-stepped digital samples are converted to analog, some natural rounding (reconstruction overshoot) occurs between samples. This means a signal pushed up to 0 dBFS can play back as high as roughly +3 dB, and those peaks will distort. If you don't have a true-peak meter, just set the limiter at the end of your signal chain to around -2 dBFS to be safe. Again, because these are just peaks, and perceived loudness has little to do with peaks, you'll still have impactful audio.

1 LU = 1 dB. So if your master reads -12.3 LUFS integrated and your target is -14 LUFS integrated, you need to reduce the gain of the master by the difference, i.e. 1.7 dB (-12.3 + -1.7 = -14).
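Since 1 LU = 1 dB, the gain-matching arithmetic above reduces to a single subtraction (hypothetical helper name, just to make the sign convention explicit):

```python
def gain_to_target(measured_lufs: float, target_lufs: float) -> float:
    """Gain change in dB needed to move a measured integrated loudness
    to a target, using the 1 LU = 1 dB relationship. Negative result
    means turn the master down."""
    return target_lufs - measured_lufs

# The example from the text: a -12.3 LUFS master aiming for -14 LUFS.
print(round(gain_to_target(-12.3, -14.0), 1))  # -> -1.7 (turn it down 1.7 dB)
```

The same function works in the other direction: a -16 LUFS master targeting -14 LUFS returns +2.0 dB, though pushing gain up may require re-checking your true-peak ceiling.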
Mastering aims to make the audio sound “louder” and more present and tunes it to different types of output scenarios. Automated services like LANDR and Aria can work pretty well. Plugins like Waves UltraMaximizer are other ‘1-click’ solutions.
Normalization

Right-click on a YouTube video and get "Stats for Nerds".
"The final value is the "content loudness" value, and indicates the difference between YouTube's estimate of the loudness and their reference playback level. This value is fixed for each clip, and isn't affected by the Volume slider."

So, for example, a reading of 6 dB means your video is 6 dB louder than YouTube's reference level, and a normalization adjustment of -6 dB (shown as roughly 50%) will be applied to compensate. A negative reading of -3 dB, say, means the video is 3 dB lower than YouTube's reference, and no normalization will be applied; the normalization percentage will stay at 100% of the Volume slider's value. YouTube doesn't turn up quieter videos, but it supposedly normalizes loud ones down to around -12 to -14 LUFS.
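The turn-down-only behavior described in that quote can be sketched as a small function (hypothetical name; this models only the behavior stated above, not YouTube's actual implementation):

```python
def youtube_adjustment_db(content_loudness_db: float) -> float:
    """Normalization applied per the described behavior: videos louder
    than the reference are turned down by the difference; quieter
    videos are left alone (YouTube never turns content up)."""
    return -content_loudness_db if content_loudness_db > 0 else 0.0

print(youtube_adjustment_db(6.0))   # -> -6.0 (played 6 dB quieter)
print(youtube_adjustment_db(-3.0))  # -> 0.0 (no change)
```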
For those working in 32-bit float, where the source file has no true-peak overshoots (clipping events), that is correct: you don't clip within the DAW, only on export. But when exporting to a lossy format, encoding artifacts mean you need to leave extra headroom. "The .5 is just a rule of thumb and will not ALWAYS keep you out of trouble. However, the conversion from lossless to lossy formats (e.g., WAV to AAC) uses up headroom. This is, in part, how the lossy codecs are able to 'throw out info' and retain most of the audio fidelity. So if you normalized your peaks to -0.5 dBFS in the edit while working with WAV source files, when you export to a codec where the audio is encoded to AAC, you could end up with legitimate clipping instances. From a practical standpoint, for people publishing only online, this may not be an issue at all and may just be a problem for pretentious audio people. But in some cases those instances can be heard. There are a ton of big YouTube channels where I hear quite a bit of clipping on a regular basis, and I suspect it is because their post people are not leaving enough headroom for the conversion to a lossy format."
Applications like Adobe Audition can be convenient because loudness effects are "applied" to the clip, so you don't have to wait through playback before seeing the integrated loudness accurately. Fairlight works well for non-destructively sending clips to RX (or another plugin) for processing: you simply click 'save' in RX and it sends the clip back to Resolve as a new layer.