This topic could occupy the rest of your life. Achieving accurate calibration requires understanding a bit of color science and the history of quantifying how humans see the world. The unfortunate answer here is that there are no easy shortcuts: properly calibrating your display and environment for color-critical photo and video work is simply not an easy task. Fortunately, it can now be done with free, open-source software (DisplayCAL) and affordable hardware (the X-Rite i1Display Pro), but do expect an investment of time. Another disclaimer: calibration is important, but it’s also very easy to get wrong and make things worse. Apple computers have a decent track record of looking good out of the box (part of the reason I recommend them in the ‘computer’ section), but you’ll need to understand the concept of ‘color management’ to get predictable and trustworthy results. That said, something simple and fairly consistent like an iPhone screen can be a great way to quickly gauge whether your work looks the same as it did on your computer display.
I suggest watching the above, since learning the basics of calibration with SMPTE bars is still a decent life skill, and it’s pretty easy. It’s a good place to start and what many industry insiders will think of when they hear the word “calibration,” so it bears mentioning. It’ll also help you understand the purpose of that colorful set of rectangles you’ve likely seen before. Before we start in earnest, an ambitious student would do well to refresh their knowledge of some of these topics:
Photo applications reference an “.icc profile,” which essentially describes what your given display can show; your media software then converts the colors in your document to look correct on that display. Proper video calibration has traditionally relied on the video software being less intelligent, typically requiring an external, video-specific display and a color conversion between that display and your computer called a “LUT.” That said, many video NLEs (like Resolve) now support ICC profiles and can therefore provide a “calibrated” image on your computer screen. Any professional colorist will still have a good external display, but for the budding amateur, the computer GUI display is much better than it used to be. Still, there are just too many ways a computer display can show inaccurate color: Google “QuickTime gamma shift” or “Premiere render washed out” for ample evidence of that fact.
DisplayCAL and ArgyllCMS are built around ICC profiling, which has been central to color management on computers for many years. It gets a bit confusing, but here’s the gist: profiling is just measuring what a display is capable of, and that’s the first step toward accurate color. A hardware device called a probe is placed on the display to be measured. Software (or a hardware device) displays patches of color, the probe measures the results, and the resulting “profile” is essentially a description of what that display can do: what the RGB values fed to the display will actually produce. The profile just describes the characteristics of the display; it doesn’t actually do anything by itself. Color-managed software is required to make use of it. Again, the profile is just a “characterization” of your monitor.

So what, then, is “transforming” the image of your photo to look color accurate when you open it in Photoshop? What’s morphing the “bad colors” into the “correct colors”? In this case, that’s the responsibility of Photoshop itself. Color-managed software like Photoshop will read the metadata in the photo you open and see that this photo of your grandmother is sRGB. It will also read the ICC profile associated with the display, as selected under “Displays” in the Mac’s “System Preferences.” This tells Photoshop that your particular monitor’s primaries are, say, something close to the P3 color space. Now Photoshop, knowing both the source and destination color spaces, has the information necessary to display the photo correctly on your monitor and to let you “convert” to the specific color space you intend to output to. And, importantly, all of this happens beneath a 1D system LUT, so if that LUT isn’t ‘on,’ your calibration is potentially inaccurate. This gamma-only LUT helps keep your white point in check and fixes grayscale issues across the luminance range.
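To make that pipeline concrete, here is a minimal Python sketch of the math a color-managed application performs: decode the source transfer curve, pass through the XYZ connection space, and re-encode for the display’s primaries. The sRGB and Display P3 matrices and the piecewise sRGB curve are published constants; everything else is illustrative and not how Photoshop or DisplayCAL are actually implemented.

```python
# Sketch of what color-managed software does per pixel:
# sRGB document -> XYZ connection space -> display (Display P3) RGB.

SRGB_TO_XYZ = [[0.4124, 0.3576, 0.1805],
               [0.2126, 0.7152, 0.0722],
               [0.0193, 0.1192, 0.9505]]

P3_TO_XYZ = [[0.48657, 0.26567, 0.19822],
             [0.22897, 0.69174, 0.07929],
             [0.00000, 0.04511, 1.04394]]

def inv3(m):
    """Invert a 3x3 matrix (adjugate / determinant)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    A, B, C = e*i - f*h, -(d*i - f*g), d*h - e*g
    det = a*A + b*B + c*C
    return [[A/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [B/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [C/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def mul(m, v):
    return [sum(m[r][k] * v[k] for k in range(3)) for r in range(3)]

def srgb_decode(v):   # encoded value -> linear light
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_encode(v):   # linear light -> encoded (Display P3 uses the same curve)
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def srgb_to_display_p3(rgb):
    linear = [srgb_decode(c) for c in rgb]
    xyz = mul(SRGB_TO_XYZ, linear)          # source space -> connection space
    p3_linear = mul(inv3(P3_TO_XYZ), xyz)   # connection space -> display space
    return tuple(srgb_encode(c) for c in p3_linear)
```

Pure white survives the round trip (both spaces share a D65 white point), while a fully saturated sRGB red lands inside the wider P3 gamut at less-than-full red: exactly why an unmanaged, untagged image looks oversaturated on a wide-gamut display.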
Bear with an odd analogy: Bill has a sensitive tongue and everything tastes salty to him. Jean has a semi-deficient tongue and adds salt to everything. You, the self-advertised taste-managed baker, are responsible for adjusting salt levels to please both clients. A hundred years ago, you ran a test measuring a sampling of the Earth’s populace and how they responded to this phenomenon of “saltiness.” This gave you a concrete way to relate how people perceive something as “salty” to a quantifiable amount of salt from a molecular perspective. You now know that 150 milligrams of salt is the objective “perfect amount” to create a satisfyingly salty dinner roll for the sodium-sensitive. But, because you know your clients, you’ve characterized them: Bill will require less salt and Jean more to receive the same sensation. This is why you are the taste-managed baker. You know the measurements that relate to the experience you want for the end user, and you make the adjustment dynamically for a perfect fit. But then you hire an apprentice, there begrudgingly as part of an internship and against his free will. You can’t trust him to know your customers and craft with the care of your customized culinary creations. He’ll need a recipe to follow, and that recipe will be specific to each customer. There will be a Bill recipe and a Jean recipe. This is the 3D LUT approach.
A 3D LUT is much “dumber” than software intelligently converting between color spaces by reading an ICC profile on the fly. The 3D LUT, as you read above, is just a simple (albeit large) table of input and output values. The video world has historically been used to this simpler form of color management, and because of that I think it’s actually easier to understand as well as to implement correctly. Remember how the software was responsible for doing the actual color transformation in the case of the computer display? Here, that responsibility is handled by a 3D Color Lookup Table, or LUT, like we mentioned before. This LUT is “dumb” in that it’s specific: it can’t intelligently “read” the color space of the incoming signal, and it has no idea what display device it’s connected to. It’s a “pre-baked” transform between one gamut and another, valid only in that specific situation.
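To show just how dumb the table is, here’s a hypothetical 3D LUT applied in Python. The LUT is nothing but a grid of output triplets (grid size 17 is a common choice), and the only “intelligence” involved is trilinear interpolation between the eight nearest grid points. Real LUTs are applied by a monitor, LUT box, or NLE, not code like this; this is purely a sketch of the mechanism.

```python
def apply_3d_lut(lut, n, rgb):
    """Look up `rgb` (each channel in 0..1) in an n x n x n LUT.

    `lut[r][g][b]` holds the output (R, G, B) triplet for that grid point;
    values between grid points are trilinearly interpolated.
    """
    idx, frac = [], []
    for c in rgb:
        x = max(0.0, min(1.0, c)) * (n - 1)
        i = min(int(x), n - 2)          # lower grid index on this axis
        idx.append(i)
        frac.append(x - i)              # fractional position within the cell
    r0, g0, b0 = idx
    fr, fg, fb = frac
    out = []
    for ch in range(3):
        # Fetch the 8 surrounding corners, then blend axis by axis.
        c000 = lut[r0][g0][b0][ch];       c100 = lut[r0+1][g0][b0][ch]
        c010 = lut[r0][g0+1][b0][ch];     c110 = lut[r0+1][g0+1][b0][ch]
        c001 = lut[r0][g0][b0+1][ch];     c101 = lut[r0+1][g0][b0+1][ch]
        c011 = lut[r0][g0+1][b0+1][ch];   c111 = lut[r0+1][g0+1][b0+1][ch]
        c00 = c000*(1-fr) + c100*fr;      c10 = c010*(1-fr) + c110*fr
        c01 = c001*(1-fr) + c101*fr;      c11 = c011*(1-fr) + c111*fr
        c0 = c00*(1-fg) + c10*fg;         c1 = c01*(1-fg) + c11*fg
        out.append(c0*(1-fb) + c1*fb)
    return tuple(out)

# An identity LUT hands every color back unchanged; a calibration LUT
# simply stores different output triplets at the same grid points.
n = 17
identity = [[[(r/(n-1), g/(n-1), b/(n-1)) for b in range(n)]
             for g in range(n)] for r in range(n)]
```

Note that nothing in the table knows (or cares) what the input signal means; feed it the wrong color space and it will apply the wrong transform without complaint.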
So LUTs and ICC profiles are both color-management tools, but the implementation differs in most cases. That said, Device Link profiles are very similar in function to a 3D LUT:
Profiling always creates a device profile. A device link profile would be the ICC equivalent of a video 3D LUT that maps a specified source (target) color space (e.g. Rec. 709) into the display gamut. You first need a device profile (characterisation), then you can create device link profile(s) / 3D LUT(s) with the desired target color space(s). Unlike ordinary source or destination profiles, they do not describe a specific color space, but define the conversion from a source color space to a destination color space. The basis for creating a device link profile is, therefore, always an ordinary ICC profile.
– Florian Höch
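In code terms, “baking” a device link or 3D LUT is just evaluating the full source-to-display conversion at every grid point and tabulating the results. A sketch, using a deliberately toy transform (a 0.9 “squeeze” stands in for the real Rec. 709-to-display math) and the common red-fastest ordering of the .cube text format:

```python
def bake_3d_lut(transform, n=17):
    """Tabulate `transform` (any rgb -> rgb function) on an n^3 grid."""
    return [[[transform((r/(n-1), g/(n-1), b/(n-1)))
              for b in range(n)] for g in range(n)] for r in range(n)]

def to_cube(lut, n):
    """Serialize as .cube text; the red index varies fastest per the format."""
    lines = ["LUT_3D_SIZE %d" % n]
    for b in range(n):
        for g in range(n):
            for r in range(n):
                lines.append("%.6f %.6f %.6f" % tuple(lut[r][g][b]))
    return "\n".join(lines)

# Toy stand-in for a real source-to-display transform:
squeeze = lambda rgb: tuple(0.9 * c for c in rgb)
lut = bake_3d_lut(squeeze, 17)
cube_text = to_cube(lut, 17)
```

Once baked, the table is frozen: the original transform logic is gone, and only the pre-computed answers remain, which is exactly the device-link idea the quote describes.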
If you made it through the above list of resources and explanations you should have an understanding of some of the foundational principles of color calibration. Let’s move on to the tools.
Light Illusion’s ColourSpace is as much of an industry standard for calibration as any tool. Steve Shaw, the creator, even offers a free version that’s great for checking a display and correcting gamut and white point with the display’s own controls, but it has no 3D LUT creation ability. The full version of this software is cost-prohibitive for many students, so we’ll spend more time discussing a slightly less intuitive program called DisplayCAL.
Graeme Gill is a brilliant man whose ArgyllCMS color engine underlies the popular DisplayCAL open-source calibration software. He has even written his own driver that allows X-Rite hardware to be used on Linux and Android, which also means he doesn’t have to comply with X-Rite’s licensing conditions. It additionally enables a ‘hi-res’ mode that allows spectros to sample at higher spectral resolution (we’ll get into that later). This same color engine powers the Android app that turns your phone (plus a probe) into a very accurate color meter. On the calibration side, you get a lot of functionality and incredible results for a price tag of $0.
The DisplayCAL software is rooted in this ICC-profile functionality: it will profile your display much as you would when calibrating a computer screen, and then generate a 3D LUT based on that profile and your intended destination space.
The most practical device, or “probe,” for these sorts of calibrations is typically the i1Display Pro made by X-Rite. These are not perfect units, but the device is very good for the price. It uses colored filters and a sensor to measure color and therefore requires an emissive source (e.g. a computer screen). The shorthand term for this sort of probe is a “colorimeter.” The quote below references pairing the i1D3 with another type of probe to create an offset and increase its accuracy; this is a good idea if you have the money.
“The unit-to-unit variation accuracy of the i1d3 is non-trivial. I have had some of them in our lab that are scarily accurate, most are not that good, and a few are not very good at all. The typical i1d3 would benefit from a i1Pro2 profile, though the details would depend on the display type.”
Spectrophotometers are the second type of probe (the i1Pro2 referenced above). They are more costly, but they actually measure the wavelengths of light directly rather than through filters, and they work on reflective sources like a printed calibration page rather than only on emissive ones. In exchange, they are slower in operation, struggle in low light, and you pay more for narrower-bandwidth sampling ‘resolution.’
Take a look at this comparison between the two: X-Rite Comparison
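The pairing the quote above describes can be sketched as a simple matrix correction: measure a few patches with both instruments, then solve for a 3x3 matrix that maps the colorimeter’s XYZ readings onto the spectro’s. Below is a minimal three-patch version in Python. Real tools fit over more patches with least squares, and every reading here is invented for illustration.

```python
def solve3(A, b):
    """Solve A x = b for a 3x3 system via Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    d = det(A)
    xs = []
    for col in range(3):
        Ai = [row[:] for row in A]      # replace one column with b
        for r in range(3):
            Ai[r][col] = b[r]
        xs.append(det(Ai) / d)
    return xs

def correction_matrix(colorimeter_xyz, spectro_xyz):
    """Find M so that M @ c_i = s_i for three shared patch readings."""
    M = []
    for ch in range(3):
        # Each output channel gives one row of M to solve for.
        b = [patch[ch] for patch in spectro_xyz]
        M.append(solve3(colorimeter_xyz, b))
    return M

def apply_m(M, xyz):
    return [sum(M[r][k] * xyz[k] for k in range(3)) for r in range(3)]

# Made-up XYZ readings of the same three patches from each instrument:
colorimeter = [[1.00, 0.00, 0.00], [0.00, 1.00, 0.00], [0.00, 0.00, 1.00]]
spectro     = [[0.95, 0.02, 0.00], [0.03, 1.00, 0.01], [0.00, 0.00, 1.05]]
M = correction_matrix(colorimeter, spectro)
```

After this, `apply_m(M, reading)` nudges every colorimeter measurement toward what the reference instrument would have reported for that display type.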
The best way to learn DisplayCal is to simply follow their Quick Start Guide.
Download the software here, follow the guide (note the Resolve-specific section if calibrating for video color correction work) and best of luck.
Here are some other things to keep in mind:
Video vs. Data Levels
Color modification in linear (Broken Computer Color Video)
Why two monitors won’t look the same and how to (rather-manually) fix it. Create a custom white point.
Determine your display’s specs here.
The type of display you own can make your success in calibration vary widely, for example, see this note from Graeme.
Narrow spectral primaries on technology like laser projection and OLED mean different people observe the same color differently: A new standard (such as the TC 1-36 2012 proposed observer) may improve the average match slightly, but can’t possibly address individual variation. Effective methods to address individual variation are to broaden the primaries on the displays, perhaps moving in the direction of spectral reproduction, or to have a practical means of measuring individuals’ CMFs. The latter means that only one person at a time can see the display perfectly calibrated, though.
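On the video-vs-data-levels point above: “video” (legal) levels put 8-bit black at code 16 and white at 235, while “data” (full) levels use 0–255. A quick Python sketch of the scaling, and of what happens when the squeeze is applied where it shouldn’t be (the classic washed-out mismatch):

```python
# Signals normalized to 0.0-1.0; equivalent 8-bit codes in comments.
def full_to_legal(v):
    """Full range (0-255) -> legal/video range (16-235)."""
    return (16 + v * 219) / 255

def legal_to_full(v):
    """Legal/video range (16-235) -> full range (0-255)."""
    return (v * 255 - 16) / 219

black = full_to_legal(0.0)                      # code 16: correct legal black
roundtrip = legal_to_full(full_to_legal(0.5))   # matched interpretation: 0.5

# Levels mismatch: squeezing an already-legal signal a second time lifts
# black from code 16 to roughly code 30 -- the washed-out look.
double_squeezed_black = full_to_legal(full_to_legal(0.0))
```

The reverse mistake (expanding a full-range signal) clips shadows and highlights instead, which is why a levels flag set wrong anywhere in the chain quietly undoes an otherwise careful calibration.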
As mentioned, the world of calibration is a complex one, and the results are often not as good as anticipated due to human error, display inefficiency, and the fallibility or inconsistency of our visual system. That said, hopefully this sets you up with information to be able to start down the path of calibration.