💡 Color Management, Gamut, Gamma & LUTs (CU)

Photos and Color Profiles: The Quickly Approaching Move to Wide-Gamut |  PetaPixel

We’ve talked a bit about terms like gamut and gamma and established that we need quantifiable ways of representing color digitally. We’ll cover gamut and gamma here, then introduce “color management,” which is nothing more than translating between these different color reference points to ensure the correct color is represented at the correct point in the signal chain.

So What’s a Gamut?


“Gamut” is simply the range of colors that can be produced from three primary colors, in our case red, green, and blue. No combination of real-world-sourced primary colors (e.g. paint or light) can produce a gamut as large as the one perceivable by the human visual system. In other words, no one has made a camera that can capture everything your eye can see, or a TV that can show it.

We reference human vision with a chart called the “CIE chromaticity chart.” There’s no technology capable of showing the full extent of the CIE chromaticity chart, so it’s much more practical to work with smaller gamuts we call “color spaces.” Those three RGB primaries are mapped to a specific location depending on the color space you’re working with. We’ll term those three values “tristimulus” primaries and discuss the “device independent” CIE XYZ gamut more when we cover calibration.

Generally speaking, it doesn’t matter what color space you’re working in as long as every device in your digital chain is consistently using the same space. When it comes to managing color, the translation between color spaces is often more important than the color space itself. A mismatch in the chain might result in colors that are under- or oversaturated.
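To make the idea of “translation between color spaces” concrete, here is a minimal sketch using the published 3×3 matrix that maps linear RGB with Rec. 709/sRGB primaries (D65 white) into device-independent CIE XYZ. Color management pipelines are largely chains of matrix transforms like this one.

```python
# Published matrix for linear RGB (Rec. 709/sRGB primaries, D65) -> CIE XYZ.
RGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],  # this row computes Y (luminance)
    [0.0193, 0.1192, 0.9505],
]

def rgb_to_xyz(r: float, g: float, b: float):
    """Map linear RGB tristimulus values to device-independent XYZ."""
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in RGB_TO_XYZ)

# Full white (1, 1, 1) lands on the D65 white point; its Y is exactly 1.0.
X, Y, Z = rgb_to_xyz(1.0, 1.0, 1.0)
```

Note how the middle (Y) row weights green most heavily, which lines up with the point below about our eyes being most sensitive to green.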

Common gamuts include:

  • Rec. 709: The “standard” video color space. This has been the most widely-used color space for broadcast video and remains the color space of choice for most video.
  • Rec. 2020: The successor to Rec. 709. No current display can reproduce this full color space, which we hope will make it somewhat futureproof.
  • DCI-P3: A standard gamut for digital cinema projection.

The farther out the primaries get from white, the more code values we need to properly span that distance. This is where bit depth comes into play. Remember that your bit depth determines the number of available values/colors you have to work with.
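A quick sketch of the bit-depth arithmetic: each added bit doubles the code values per channel, and the channel counts multiply together to give the total number of representable colors.

```python
def code_values(bits: int) -> int:
    """Number of code values available per channel at a given bit depth."""
    return 2 ** bits

# Wider gamuts spread these same code values over a larger range,
# so lower bit depths mean coarser steps between adjacent colors.
for bits in (8, 10, 12):
    per_channel = code_values(bits)
    total = per_channel ** 3  # R x G x B combinations
    print(f"{bits}-bit: {per_channel} values/channel, {total:,} total colors")
```

This is why wide-gamut work generally goes hand in hand with 10-bit or greater pipelines.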

The images below show how a color space is represented two-dimensionally. See how colors can be represented by graphing them to an XY point on the gamut?

Some interesting points to bear in mind:

  • All human-visible colors can be described in terms of X,Y and Z.
  • Colors of the same XYZ values can actually be different spectrally, but will appear identical to a human if those three values match.
  • Y is how we perceive luminance. Our eye is most sensitive to green light.
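The projection from XYZ onto the two-dimensional chart can be sketched directly: dividing by the sum of the three values discards overall brightness, leaving an (x, y) chromaticity point.

```python
def xy_chromaticity(X: float, Y: float, Z: float):
    """Project XYZ tristimulus values onto the 2-D CIE (x, y) chart."""
    s = X + Y + Z
    return (X / s, Y / s)

# Scaling XYZ by any factor (i.e. changing brightness) gives the same (x, y).
# These XYZ values are the Rec. 709 red primary at full and half level:
a = xy_chromaticity(0.4124, 0.2126, 0.0193)
b = xy_chromaticity(0.2062, 0.1063, 0.00965)
# Both land near (0.64, 0.33), the published Rec. 709 red chromaticity.
```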


So What’s Gamma?

“Gamma” is a term used loosely to describe how a camera or display maps light values. If I have one candle and I add another candle, I get twice the amount of light: a linear relationship between the number of candles and how much light is output. This linearity makes sense, but it’s not how our eyes work. The non-linear response of our eyes (and of old CRT displays) can be modeled fairly accurately with a gamma equation:

Gamma 2.2: encoded = linear^(1/2.2)
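The gamma relationship above is simple enough to sketch: encoding raises linear light to the power 1/2.2, and a display decodes by raising the signal back to the power 2.2.

```python
GAMMA = 2.2

def encode(linear: float) -> float:
    """Gamma-encode a linear light value in [0, 1]."""
    return linear ** (1.0 / GAMMA)

def decode(video: float) -> float:
    """Display decode: invert the encoding."""
    return video ** GAMMA

# 18% gray (a typical scene mid-gray) encodes to roughly 0.46,
# which is why gamma encoding spends more code values on shadows.
mid_gray_encoded = encode(0.18)
```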

  • Gamma 2.2 is for web delivery and sRGB applications. Since it’s a bit brighter, it will read better in the brighter environments typical of web content viewing.
  • Gamma 2.4. This is more designed for the home theater sort of environment. It’s a bit darker, but better suited for the dim living room in which someone might be watching TV. It’s very similar to BT1886 which we won’t discuss for now.
  • 2.4 gamma actually matches fairly close to human perceived luminance. As mentioned, we don’t see linearly.
  • Linear is a working curve used for VFX. Note the terminology: you normalize a log curve for display, you don’t linearize it.
  • Roughly 9-10 stops of dynamic range are perceivable at any one instant, but around 20 stops are available to us because the eye adapts dynamically.
  • Charles Poynton is the authority on this topic. For more depth, read his work.

Applying in Resolve

Resolve has immense flexibility for working in any popular color space. This is good in that it gives you control, but it requires some education to use it properly. By default, Resolve, like Premiere, assumes gamma 2.4 Rec. 709 until you tell it differently. But in Resolve, you can tell it differently. Inside Resolve the simplest form of color management happens with our project settings. Here, we can define a global gamut and gamma for our entire project. Depending on the version of Resolve you’re using, all newly-imported clips will default to this input space. Any clip inside Resolve’s media pool can be independently assigned a color space via a simple right-click.

Your timeline color space basically determines how the changes you make with the grading controls will affect the signal. For example, the same contrast operation done on a log image will behave very differently to an adjustment made on a normalized image. Grading controls need to know the range of values they are operating on in order for the adjustments to behave intuitively. If you switch this mid-project you’ll throw all existing grades off so be careful.
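A small sketch of why the timeline color space matters to the controls. A contrast operation expands values around a pivot (Resolve’s default pivot is around 0.435); mid-gray sits near 0.18 in a normalized image but much higher (often around 0.4) in a typical log encoding, so the same operation treats the two images very differently. The specific values here are illustrative.

```python
def contrast(value: float, amount: float, pivot: float) -> float:
    """Expand or compress a signal value around a pivot point."""
    return (value - pivot) * amount + pivot

PIVOT = 0.435  # a common default pivot for normalized Rec. 709 material

# Same 1.5x contrast applied to mid-gray in two encodings:
normalized_mid = contrast(0.18, 1.5, PIVOT)  # crushed well below 0.18
log_mid        = contrast(0.41, 1.5, PIVOT)  # barely moves
```

With log material the midtones sit near the pivot and hold steady, while the same move on a normalized image drags its midtones down dramatically, which is exactly the kind of mismatch a wrong timeline color space produces.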

Output Color Space is determined by what you’re viewing your video signal on. If it’s a computer monitor, choose sRGB gamma 2.2. If it’s a grading display for broadcast, choose Rec. 709 gamma 2.4, etc.

The cool thing about color management is that you can change these at any time. If you want to output your project for theatrical projection, it’s fairly easy to change your output space to DCI-P3 and you’re ready to grade in a grading theater.

These color management controls are technical functions designed to map one color space to another, not to look good creatively. When you use a manufacturer-supplied LUT, there’s much more going on than a simple color space conversion. In order to look pleasing, there needs to be a graceful transition from the larger gamut and gamma to the smaller one. That’s where “mapping” comes in. “Tone mapping” deals with gamma adaptation, and the “simple” method is little more than a smooth contrast reduction to go from the higher dynamic range to the lower one. “Luminance mapping” lets you adjust max luma levels, but it will clip the output unlike the other mapping operations. “Saturation mapping” deals with the change in gamut.
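To illustrate the difference between clipping and a graceful rolloff, here is a generic Reinhard-style curve. This is not Resolve’s actual tone-mapping math (which isn’t published in this form), just a sketch of the “smooth contrast reduction” idea.

```python
def hard_clip(x: float) -> float:
    """Luminance-mapping style: anything over 1.0 is simply clipped."""
    return min(x, 1.0)

def rolloff(x: float) -> float:
    """Reinhard-style curve: compresses highlights smoothly toward 1.0."""
    return x / (1.0 + x)

# Two bright HDR values become indistinguishable after a hard clip,
# but keep their separation (highlight detail) through the rolloff:
clipped = (hard_clip(2.0), hard_clip(3.0))   # both land at 1.0
rolled  = (rolloff(2.0), rolloff(3.0))       # distinct, both below 1.0
```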

When it comes to Resolve, gamma naming conventions are confusing. Camera sensors have a very bright (closer to linear) gamma of 1.9 and that’s the value Resolve is using with the label Rec709(Scene). But, often when Resolve lets you specify a gamma it will only use the label “Rec709” and internally process with the brighter 1.9 “Scene” gamma. In these cases, you simply need to make sure you’re explicitly choosing the 2.4 gamma option and you’re good to go.

This project-wide color management approach has its advantages. It’s pretty simple to set up and works across the entire project without much fuss. It’s perfect for an editor working in Resolve. Most often, however, a professional colorist will want more control over images at different stages in their development. For example, it’s important to do your corrections before the mapping operations, but that ordering is tricky to control when using project-wide settings.

The color space transform node is one of the most useful additions to Resolve in recent years. It allows you to set up a node tree in which you can manipulate the footage both before and after the ‘normalization’. “Normalization” is the term I’ll use for the color space transform node that converts to Rec. 709. When working with the log footage before the normalization, get the bulk of the look right using the offset, contrast and pivot controls; these were designed to work on the ranges typical of a log image. After the normalization node, the lift, gamma and gain wheels should behave as expected.

“Saturation Mapping” in the Gamut Mapping OFX plugin is also a great way to software-“legalize” out of gamut saturation. It basically compresses everything out of gamut by a certain ratio. It’s not perfect, but it’s an easy first step towards aiming for broadcast-legal saturation values.
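The “compress by a ratio” idea can be sketched on a single channel. This is not the plugin’s actual math, just an illustration of ratio-based compression of out-of-range values, and like the plugin it reduces rather than eliminates the overshoot.

```python
def compress_overshoot(value: float, limit: float = 1.0, ratio: float = 0.5) -> float:
    """Scale anything past `limit` back toward it by `ratio` (not a hard clip)."""
    if value <= limit:
        return value  # legal values pass through untouched
    return limit + (value - limit) * ratio

in_gamut  = compress_overshoot(0.8)  # unchanged: 0.8
overshoot = compress_overshoot(1.4)  # pulled from 1.4 down to 1.2
```

Because the overshoot is only reduced, a final clip or a second pass is still needed for strictly broadcast-legal output, which matches the “easy first step, not perfect” caveat above.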

It’s important to note that color management in many photo-based applications takes a different approach and often confuses people. This is covered in much more depth in the calibration section, but the idea is that we generate an .icc profile which ‘characterizes’ the response of a display, then rely on the software to correctly map the color space of the footage to the needs of that display, all on the fly. If you want more info on how this works, see this resource:


This is a great review of Resolve’s color management:

Followed up by this series from the same source (PVC):


In place of color management happening as a feature of the application, Lookup Tables, or LUTs, can be used. No one is more qualified to talk LUTs than Steve Shaw, so give this a read. His explanation is great. Bear in mind, however, that transforms between the spaces we shoot in (camera space) and grade in are better handled with non-destructive color management techniques.

Important takeaways:

  • A 1D LUT can only control overall contrast and fix grayscale “tracking”.
  • Some displays only offer a 3×3 matrix in conjunction with a 1D LUT for calibration. This is a limited fix.
  • LUTs vary in size; the larger the LUT, the more accurate but the more computationally intensive it is. A 17×17×17 LUT (17 points on each axis) with good interpolation is usually sufficient.
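The interpolation mentioned above is easiest to see in one dimension: values that fall between LUT entries are blended linearly. A 3D LUT works the same way in principle, but interpolates among the surrounding points of the 17×17×17 (or larger) cube.

```python
def apply_1d_lut(x: float, lut: list) -> float:
    """Look up `x` in [0, 1] with linear interpolation between LUT entries."""
    pos = x * (len(lut) - 1)          # position in LUT index space
    i = min(int(pos), len(lut) - 2)   # lower of the two surrounding entries
    frac = pos - i                    # how far between them we sit
    return lut[i] * (1.0 - frac) + lut[i + 1] * frac

# A hypothetical 5-entry 1D LUT encoding a gentle contrast curve:
lut = [0.0, 0.35, 0.55, 0.8, 1.0]
mid = apply_1d_lut(0.5, lut)  # lands exactly on the middle entry, 0.55
```

With only 5 entries the curve is coarse; the 17 points per axis noted above, combined with good interpolation, smooth this out in practice.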

The video below is an excellent resource. Hopefully what you’ve read on Frame and on this page, in tandem with the video, will help you grasp the concepts discussed.