When we talk about color in video, we divide the work into two major priorities: one technical, the other aesthetic.
In this section we’ll deal with the technical side; the following section covers the aesthetic.
Question: What do you think this color represents? It’s a sample taken directly from a piece of footage I was given to color.
Answer: Skin tone!
More than anything, the blue/black or white/gold dress brings the subjectivity of human color interpretation to the foreground.
But before I get ahead of myself, let’s define these three:
Let’s take a look at some proof that your eyes are fooling you, starting with a complementary-color example: stare at the ‘X’ in the center of the image for several seconds and notice how the circling magenta dot appears to turn green (magenta’s complement).
Let’s do another trippy test. Which square is lighter, A or B? You’ve likely seen this one before. Check for yourself if you like, but squares A and B are exactly the same brightness.
The same phenomenon, but easier to see. Despite how the top image appears, there is no gradient in the middle bar.
Another interesting resource on color perception.
Roughly 1 in 100 people have perfect pitch; only about 1 in 10,000 have perfect color memory. So not only is our immediate sense of color quite subjective, our recollection of color is also unreliable.
After all the talk on how subjective our eyes can be, let’s talk about objective ways to measure a video signal. Scopes are visual readouts of the makeup of an image signal, and they’re very useful tools when it comes to color correction.
The vectorscope measures an image’s saturation: no saturation at the center, fully saturated at the perimeter. Hues are represented at various locations around the scope, just like you’ll see in Resolve’s primary grading controls. The boxes identifying each hue mark the limits that should not be exceeded for a signal to be “broadcast safe”. This actually varies based on the luminance of the color, but an easy rule of thumb is to draw a line connecting the “hue dots” and not let your saturation exceed it. Some vectorscopes also have a “skin tone line”, which can be helpful for seeing where your subject’s skin color falls objectively (but don’t get too hung up on this). Using the offset control to “center the blob” is also an easy way to quickly compensate for color casts.
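If you’re curious how that trace is actually built, here’s a minimal Python sketch of a vectorscope: it converts each pixel to Cb/Cr chroma coordinates (using the standard BT.709 luma coefficients) and scatter-plots them. The filename frame.png is a hypothetical stand-in for a still you’ve exported.

```python
# Minimal vectorscope sketch: plot every pixel's chroma (Cb/Cr) position.
# Neutral pixels land at the center; saturation pushes them outward.
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

# "frame.png" is a hypothetical exported still
rgb = np.asarray(Image.open("frame.png").convert("RGB"), dtype=np.float64) / 255.0
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# BT.709 luma, then scaled color-difference (chroma) signals
y = 0.2126 * r + 0.7152 * g + 0.0722 * b
cb = (b - y) / 1.8556
cr = (r - y) / 1.5748

# subsample so very large frames still plot quickly
step = max(1, cb.size // 200_000)
plt.figure(figsize=(5, 5))
plt.scatter(cb.ravel()[::step], cr.ravel()[::step], s=0.2, alpha=0.1)
plt.axhline(0, lw=0.5)  # neutral (no chroma) sits at the crosshair center
plt.axvline(0, lw=0.5)
plt.xlim(-0.55, 0.55)
plt.ylim(-0.55, 0.55)
plt.xlabel("Cb")
plt.ylabel("Cr")
plt.title("Vectorscope (chroma plot)")
plt.show()
```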
The waveform represents luminance (in FLAT view it combines the luma and chroma signals). It correlates, left to right, with your image and shows brightness from bottom to top. So if you have a bright white window on the left of frame, you’ll see a bright white ‘trace’ on the waveform’s left side near the top.
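In the same hedged spirit, a rough waveform can be built by histogramming each column of the image’s luma: column position maps to x, brightness to y, and pixel counts become trace intensity. Again, frame.png is a hypothetical filename.

```python
# Rough luma waveform: for each image column, count how many pixels sit at
# each brightness level and display those counts as a vertical trace.
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

rgb = np.asarray(Image.open("frame.png").convert("RGB"), dtype=np.float64)
luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

height, width = luma.shape
trace = np.zeros((256, width))
for col in range(width):
    hist, _ = np.histogram(luma[:, col], bins=256, range=(0, 256))
    trace[:, col] = hist

# origin="lower" puts black at the bottom and white at the top, like a scope;
# log scaling keeps faint parts of the trace visible
plt.imshow(np.log1p(trace), origin="lower", cmap="gray", aspect="auto")
plt.xlabel("image x position")
plt.ylabel("brightness (8-bit level)")
plt.title("Luma waveform")
plt.show()
```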
Waveforms are useful for diagnosing problems. One common issue is working with video interpreted in the wrong range: broadcast/legal/video-range data doesn’t use all the brightness values that full-swing/data-range clips do. If the range is incorrectly defined for a clip (Resolve is usually good at auto-detecting it), the scopes can help you see it.
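To make the mismatch concrete, here’s a small sketch of the standard 8-bit math, where video range puts black at code 16 and white at 235 (the function names are my own):

```python
# Video (legal) range puts black at code 16 and white at 235 in 8-bit;
# full swing uses 0-255. Interpreting one as the other shifts your levels.
import numpy as np

def video_to_full(levels):
    """Expand video-range (16-235) levels to full range (0-255)."""
    return np.clip((levels.astype(np.float64) - 16.0) * 255.0 / 219.0, 0.0, 255.0)

def full_to_video(levels):
    """Compress full-range (0-255) levels into video range (16-235)."""
    return levels.astype(np.float64) * 219.0 / 255.0 + 16.0

sample = np.array([16, 128, 235])   # video-range black, mid gray, white
print(video_to_full(sample))        # -> [  0.   ~130.4  255. ]
```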
A waveform broken into R, G, and B channels shows the individual luminance of red, green, and blue. It’s a very useful tool when you’re trying to eliminate color casts, because anything black, white, or gray should line up horizontally since it contains equal parts red, green, and blue.
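Here’s the same idea sketched in code: sample a patch that should be neutral and compare the channel means; a channel sitting higher or lower than the others is your cast. The function name is my own.

```python
# Neutrality check in the spirit of the RGB parade: for a black, white, or
# gray patch, the mean R, G, and B levels should match.
import numpy as np

def cast_report(patch):
    """patch: HxWx3 pixels sampled from a supposedly neutral area."""
    means = patch.reshape(-1, 3).mean(axis=0)
    return means - means.mean()  # per-channel deviation from neutral

# a slightly warm gray patch: red sits high, blue sits low
patch = np.full((8, 8, 3), [120.0, 112.0, 104.0])
print(cast_report(patch))  # -> [ 8.  0. -8.], i.e. a red/blue imbalance
```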
Resolve has scopes built in, and for casual use there is no problem using them, but there are a couple of reasons you may benefit from scopes outside the color application. External scopes simply take your video signal as an input, usually via SDI or HDMI, so there is minimal GPU strain from processing the scopes themselves. If you’re on an underpowered system, or need every last ounce of your GPU for computing the grade, handling the scopes’ processing externally can be a noticeable improvement, and they’ll always play in realtime.
There’s also typically more power in a dedicated outboard scope. ScopeBox provides tools for assessing RGB gamut errors (its channel plots help you spot them, or if you have the money go for a Tektronix double-diamond display); tools like its HML balance help you remove color casts quickly; and external scopes are highly configurable, letting you enlarge the data trace and position things how you need them for your setup. Newer scopes can also chart changes over time, provide false-color overlays, superimpose traces, and provide target markers. Beyond all these features, it’s always wise to let your scopes monitor the same signal chain your grading display is using: levels issues or problematic cabling will affect what you see on your grading display, and Resolve’s software scopes won’t help you spot that.
ScopeBox is a piece of software that turns your computer into a very professional set of scopes. I run it on a laptop and use the small, bus-powered Blackmagic Mini Recorder to get the video signal into the computer.
Some NLEs, like Premiere and Resolve, can send the video signal via software to ScopeBox on the same system. This won’t help much with reducing system strain, but it’s an easy way to get the power of ScopeBox on a single machine. ScopeBox is also a handy tool during production: it’s a very full-featured signal-monitoring tool for creative work, and you can even record the signal. Be aware, however, that it can really tax your system and drain a laptop battery quickly.
In 2019, ScopeBox switched its pricing to more of an annual-subscription approach.
There is some confusion about scopes in Resolve because it displays “code values” for 10-bit data rather than traditional “video levels”. The idea of video levels comes from voltages in the analog days and is somewhat archaic, but many traditional video professionals still relate to exposure in those terms. Caucasian skin tone, at an average exposure, generally sits around code value 600 on Resolve’s scopes. This is only loosely useful information, however, as the mood of the scene can drastically alter that number.
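For reference, mapping a 10-bit code value onto older conventions is simple arithmetic; here’s a tiny sketch (function names are my own):

```python
# Relate 10-bit code values to percent-of-scale and legacy 8-bit levels.
def code_to_percent(code_value, bits=10):
    return 100.0 * code_value / (2 ** bits - 1)

def code_to_8bit(code_value):
    return round(code_value * 255 / 1023)

print(code_to_percent(600))  # ~58.7% of full scale
print(code_to_8bit(600))     # ~150 in 8-bit terms
```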
Resolve: Color Grading (Blackmagic Introductory Video)
So how accurately do you see color compared to the rest of the world?
Use this link to check your Color IQ. This is a very condensed version of a color analysis test that can help you determine to some degree how accurate your eyes are.
Light reflects off objects and into our eyes. Light is simply electromagnetic radiation whose wavelengths trigger different physiological and psychological processes in our visual system. Rod cells in our eyes sense luma information, and cone cells sense color in long, medium, and short wavelengths (red, green, and blue respectively). This high-level explanation should be qualified by the fact that humans are very subjective and individually unique creatures. Age, gender, and environment drastically affect our visual system. We lose our sense of color in low light. Generally speaking, we have a greater concentration of cone cells near the center of our retina, so color sensitivity is drastically reduced at our visual periphery. Males tend to have a higher rod count and females a higher cone count. Also interesting is the effect of culture on physiology: Russian has many more words for the color “blue” than we use in English, and Russians are measurably more adept at discerning shades of blue. So if you’re picking between strong candidates, hire the female Russian colorist.
Based on how we see color, it would make sense to develop a digital color system that “sees” similarly to our own biological one, right?
This is the genesis of everyone’s favorite CIE 1931 Chromaticity Chart. It’s basically a map of human color and gives us a way to quantify the averaged perception of color with something objective. Instead of saying “reddish-orange”, I can provide a set of values and define the exact color I’m referring to. The range of color a person can see or a display can show is called a “gamut”. Though these early tests helped produce the XYZ space for quantifying color, you’ll typically see the chart below. If you’re curious, it can be drawn in 2D because the luminance portion is removed: the full space is called xyY, where “Y” carries luminance and x and y carry only chromaticity (color) information, and the chart plots just x and y. If you’re into this sort of thing, don’t worry; we’ll cover it more when we discuss gamuts and calibration.
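For the curious, that 2D reduction is just a normalization; here’s a sketch using the standard XYZ-to-xy formula:

```python
# Chromaticity coordinates drop luminance: x and y are the XYZ tristimulus
# values normalized by their sum, so a color and a brighter version of the
# same color land on the same (x, y) point.
def xyz_to_xy(X, Y, Z):
    total = X + Y + Z
    return X / total, Y / total

# D65 white point tristimulus values (Y normalized to 1.0)
print(xyz_to_xy(0.95047, 1.00000, 1.08883))  # -> approx (0.3127, 0.3290)
```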
We’ve covered a lot of ground here, so what do you need to walk away knowing? Much of this has been evidence that your eyes are fooling you. Scopes help keep you objective in post, just like a histogram on the back of a camera does while you’re shooting. Another tip is to color quickly rather than dwelling on one shot. Coloring in multiple passes will help you spend time where you need it and not give your eye a long time to adapt. Remember, the audience won’t see the film parked on a still frame.
Note that I didn’t give specific values for where to put your visual data. A lot of amateur tutorials offer recipes, e.g. the “put your blacks at 0 and your whites at 100” sort of thing, or “put skin tones along this line on the vectorscope”. These can be handy tools to be aware of, but for me, in the modern world of digital broadcast, the codec itself is going to scale your data, and we’re not as worried about excessive levels as we once needed to be. So using scopes is more often about keeping things consistent, whether within a given image’s color channels or between one image and another. There is still a place for using scopes to keep video “legal”, but for the everyday user it’s less of a priority.
Download the image above and examine it with the scopes on your own machine. Can you set the white balance using the RGB parade?
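If you want a hint, here’s one hedged way to approach it in code: sample a region that should be white, then gain each channel until all three parades line up. The filename and patch coordinates are hypothetical; use whatever matches the downloaded image.

```python
# White balance via channel gains, the same move you'd make while watching
# the RGB parade: scale R, G, and B so a known-white patch reads equal.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("chart.png").convert("RGB"), dtype=np.float64)

# hypothetical coordinates of a patch that should be pure white
patch = img[100:140, 200:240]
means = patch.reshape(-1, 3).mean(axis=0)

gains = means.max() / means            # lift each channel to match the highest
balanced = np.clip(img * gains, 0, 255).astype(np.uint8)
Image.fromarray(balanced).save("chart_balanced.png")
print("channel gains:", gains)
```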