Question: What do you think this color represents? It’s a sample taken directly from a piece of footage I was given to color.
Answer: Skin tone!
All that said, let’s turn to some physiological reasons the idea of a complementary color palette can still be useful. Our eyes and brains are great adaptors. In discussing white balance, we learned that we can see a piece of paper as ‘white’ indoors under tungsten lighting as well as outdoors under daylight, even though the color of light hitting the paper is drastically different. This compensatory function of our visual system lets us see relative color differences despite strong global color casts. But it sometimes works to our disadvantage when trying to color something. You might have the brilliant idea to color a moody scene blue to convey emotional depth in a character. In an effort to create a vivid blue, you might saturate the frame with strong blue hues. In a short space of time, the viewer’s brain will successfully remove the blue you’ve added by balancing that blue with its complement: a warmish yellow tone not at all similar to your creative intent. One way of keeping the blues vivid is to balance the blue with that yellow complementary color in the frame. The blue will be more vivid due to the presence of the yellow. You can also slowly add a color cast to your scene so that when you reveal something of the complementary color, it will look even more saturated due to the viewers’ eyes having compensated.
In short, using complementary colors against each other is a great way to create visual contrast, either within a single shot or between scenes.
Let’s take a look at some proof that your eyes are fooling you. First with the complementary color example: Stare at the ‘X’ in the center of the image for several seconds and notice how the circling magenta dot becomes green (magenta’s complement).
Let’s do another trippy test. Which square is lighter: A or B?
If you answered ‘B’ you are correct. If you guessed that they’re the same it’s because you’ve done this before and you’re cheating. I modified the test in Photoshop to call out you cheaters. Here’s the real thing:
Even more interesting, it’s not just the color of an image that changes, but actual shapes as well.
Another interesting resource on color perception.
After all the talk on how subjective our eyes can be, let’s talk about objective ways to measure a video signal. Scopes are visual displays of various components of an image and they are a very useful tool when it comes to color correction.
Resolve has scopes built in, and for casual use there is no problem using them, but there are a couple of reasons you may benefit from using scopes outside of the color application. External scopes simply take your video signal as an input, usually via SDI or HDMI, which means there’s no GPU strain from processing the scopes themselves. If you’re on an underpowered system, or need every last ounce of your GPU for computing the grade, handling the scopes’ processing externally can be a noticeable improvement, and they’ll always play realtime. There’s also typically more power in a dedicated outboard scope. ScopeBox provides tools for assessing RGB gamut errors (ScopeBox’s channel plots help you spot these sorts of errors, or if you have the money, go for a Tek Double Diamond); other scopes like HML Balance help you remove color casts quickly; and they’re highly configurable in that you can enlarge the data trace and position things how you need them for your setup. Newer scopes can also chart changes over time, provide false color overlays, superimpose traces, and provide target markers. In addition to all these features, it’s always wise to let your scopes monitor the same signal chain your grading display is using: levels issues or problematic cabling will affect what you see on your grading display, and Resolve’s software scopes won’t help you spot them.
ScopeBox is a piece of software that turns your computer into a very professional set of scopes. I run it on a laptop and use the small, bus-powered BlackMagic mini recorder hardware to get the video signal into the computer.
Some NLEs like Premiere and Resolve can send the video signal via software to ScopeBox on the same system. This might not help you much when it comes to reducing system strain, but it’s an easy way to get the power of ScopeBox on a single machine. ScopeBox is also a handy tool during production: it’s a very full-featured signal monitoring tool for creative work, and you can even record the signal. Be aware, however, that it can really tax your system and drain a laptop battery quickly.
In 2019, ScopeBox switched its pricing to more of an annual subscription model.
The vectorscope measures an image’s saturation: no saturation at the center, fully saturated at the perimeter. Hues are represented at various locations around the scope, just like you’ll see in Resolve’s primary grading controls. The boxes identifying each hue show the extents that should not be exceeded for a signal to be “broadcast safe”. This actually varies based on the luminance of the color, but an easy rule of thumb is to draw a line connecting the “hue dots” and not let your saturation exceed it. Some vectorscopes also have a “skin tone line”, which can be helpful for seeing objectively where your subject’s skin color falls. Using offset to “center the blob” is an easy way to quickly compensate for color casts.
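To make that geometry concrete, here’s a minimal Python sketch of how a single pixel lands on a vectorscope. This is my own illustration, not Resolve’s implementation; it assumes normalized RGB values and BT.709 luma coefficients, with the scope plotting Cb on the horizontal axis and Cr on the vertical.

```python
import math

def vectorscope_position(r, g, b):
    """Map a normalized RGB pixel (0-1) to vectorscope coordinates.
    Assumes BT.709; the scope plots Cb on x and Cr on y."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # BT.709 luma
    cb = (b - y) / 1.8556                       # BT.709 chroma scaling
    cr = (r - y) / 1.5748
    saturation = math.hypot(cb, cr)             # distance from center
    hue_angle = math.degrees(math.atan2(cr, cb))  # position around the scope
    return cb, cr, saturation, hue_angle

# A pure gray lands dead center — essentially zero saturation:
print(vectorscope_position(0.5, 0.5, 0.5)[2])
# A saturated red sits well away from the center:
print(vectorscope_position(1.0, 0.0, 0.0)[2])
```

The “distance from center equals saturation” relationship is why “centering the blob” with the offset control is such a quick way to neutralize a cast.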
Waveform represents luminance. In FLAT view it combines luma and color signals. It correlates, left to right, with your image and shows brightness bottom to top. So if you have a bright white window on the left of frame you’ll see a bright white ‘trace’ on the waveform’s left side near the top.
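That left-to-right correspondence is easy to sketch in code. The following is a toy illustration of my own, assuming a frame stored as rows of normalized (r, g, b) tuples — not how any real scope is implemented:

```python
def luma_waveform(image):
    """Build a simple waveform from a 2D grid of (r, g, b) pixels in 0-1.
    Each image column yields the luma values found in it; a real scope
    plots those values vertically, brightness increasing bottom to top."""
    columns = []
    for x in range(len(image[0])):
        lumas = []
        for row in image:
            r, g, b = row[x]
            lumas.append(0.2126 * r + 0.7152 * g + 0.0722 * b)  # BT.709 luma
        columns.append(lumas)
    return columns

# A bright white region on the left of frame shows up as high values
# on the left side of the waveform:
frame = [[(1.0, 1.0, 1.0), (0.1, 0.1, 0.1)]] * 2  # left white, right dark
trace = luma_waveform(frame)
```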
A waveform broken into R, G, and B channels shows the individual luminance of red, green, and blue. It’s a very useful tool when you’re trying to eliminate color casts, because anything black, white, or gray should line up horizontally since it contains equal parts red, green, and blue.
There is some confusion about scopes in Resolve since it displays “code values” for 10 bit data rather than traditional “video levels”. The idea of video levels is related to voltages in the analog days and is somewhat archaic, but many traditional video professionals still relate to such levels in terms of exposure. Caucasian skin tone, in an average exposure, generally sits around code value 600 on Resolve’s scopes. This is only relatively useful information, however, as the mood of the scene can drastically alter this number.
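The mapping between the two scales is simple arithmetic. A small sketch, assuming full-range 0–1023 data on Resolve’s scopes, with the legacy 64–940 video-level range as the alternative (the function name and exact ranges are my own illustration):

```python
def code_value_to_percent(cv, legal_range=False):
    """Translate a 10-bit code value (as shown on Resolve's scopes)
    into the percentage scale traditional video scopes used.
    legal_range=True assumes video levels (64 = black, 940 = white);
    otherwise full-range data levels (0-1023) are assumed."""
    if legal_range:
        return (cv - 64) / (940 - 64) * 100
    return cv / 1023 * 100

# The 'average caucasian skin tone around code value 600' rule of thumb,
# read on both scales:
print(round(code_value_to_percent(600), 1))        # full range
print(round(code_value_to_percent(600, True), 1))  # video levels
```

Either way, code value 600 lands in the upper-50s to low-60s percent range, which is roughly where traditional video professionals expect average skin exposure to sit.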
Resolve: Color Grading (Blackmagic Introductory Video)
Light reflects off objects and into our eyes. Light is simply electromagnetic radiation whose wavelengths trigger different physiological and psychological processes in our visual system. Rod cells in our eyes sense luma information and cone cells sense color in long, medium, and short wavelengths (red, green, and blue respectively). This high level explanation should be qualified by mentioning the fact that humans are very subjective and individually unique creatures. Age, gender, and environment drastically affect our visual system. We lose our sense of color in low light. Generally speaking, we have a greater concentration of cone cells near the center of our retina, so color reproduction is also drastically reduced at our visual periphery. Males tend to have a higher rod count and females a higher cone count. Also interesting is the effect of culture on physiology: In Russian, many more words exist for the color “blue” than we use in English. Due to this, Russians are physiologically more adept at discerning shades of blue. So if you’re picking between strong candidates, hire the female Russian colorist.
Based on how we see color, it would make sense to develop a digital color system that “sees” similarly to our own biological one, right?
This is the genesis of everyone’s favorite CIE 1931 Chromaticity Chart. It’s basically a map of human color and gives us a way to quantify the perception of color with something objective. Any given color a human can see can be plotted to a specific point on this diagram.
The term ‘gamut’, covered earlier, is simply the range of colors that can be produced from the three primary colors. There’s no tech capable of showing the extents of the CIE chromaticity chart, so it’s much more efficient to work with smaller gamuts we call “color spaces”. Those three primaries are mapped to a specific location based on the color space you’re working with. Generally speaking, it doesn’t matter what color space you’re working in as long as every device in your digital chain is consistently using the same space. We’ll term those three values “tristimulus” primaries and discuss the “device independent” CIE XYZ gamut when we cover calibration.
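Since a color space’s gamut is just a triangle on the CIE xy diagram, checking whether a chromaticity falls inside it is a basic geometry test. Here’s a sketch of my own, using the published Rec.709 primary coordinates and a same-side sign test:

```python
def inside_gamut(x, y, primaries):
    """Test whether a CIE xy chromaticity falls inside the triangle
    spanned by a color space's three primaries."""
    def edge(ax, ay, bx, by):
        # Cross product: which side of edge a->b the point (x, y) is on
        return (bx - ax) * (y - ay) - (by - ay) * (x - ax)
    (rx, ry), (gx, gy), (bx, by) = primaries
    d1 = edge(rx, ry, gx, gy)
    d2 = edge(gx, gy, bx, by)
    d3 = edge(bx, by, rx, ry)
    # Inside when the point is on the same side of all three edges
    return (d1 >= 0) == (d2 >= 0) == (d3 >= 0)

# Rec.709 primaries as xy chromaticity coordinates
REC709 = ((0.640, 0.330), (0.300, 0.600), (0.150, 0.060))

print(inside_gamut(0.3127, 0.3290, REC709))  # D65 white point → True
print(inside_gamut(0.05, 0.4, REC709))       # spectral cyan → False
```

Colors that a human can see but that fall outside the triangle (like that saturated cyan) simply cannot be reproduced by a Rec.709 display, no matter how the three primaries are mixed.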
The farther out the primaries get from white, the more code values we need to properly span that distance. This is where bit depth comes into play. Remember that your bit depth determines the number of available values/colors you have to work with.
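The arithmetic here is simple — each extra bit doubles the number of levels per channel:

```python
def levels(bit_depth):
    """Distinct code values per channel at a given bit depth."""
    return 2 ** bit_depth

for bits in (8, 10, 12):
    # More levels means finer steps when spanning wide-gamut primaries,
    # which is what keeps smooth gradients from banding.
    print(f"{bits}-bit: {levels(bits)} values per channel")
```

This is why wide-gamut work is usually done at 10 bits or more: stretching 8-bit’s 256 levels across farther-apart primaries makes the step between adjacent values visibly coarse.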