This topic could occupy the rest of your life. Accurate calibration requires understanding a bit of color science and the history of quantifying how humans see the world. The unfortunate answer is that there are no easy shortcuts: properly calibrating your display and environment for color-critical photo and video work is simply not an easy task. Fortunately, it can now be done with free, open-source software (DisplayCAL) and affordable hardware (X-rite i1DisplayPro), but do expect an investment of time.
Calibration is important, but it's also very easy to get wrong and make things worse. Apple computers have a good track record of looking decent out of the box (part of the reason I recommend them in the 'computer' section), but you'll need to understand the concept of 'color management' to get predictable, trustworthy results.
Photo applications reference an ".icc profile," which describes what your given display can show, and then convert the colors in your document to look correct on that display. Proper video calibration is less intelligent, typically requiring an external, video-specific display and a color conversion between that display and your computer called a "LUT." Google "QuickTime gamma shift" or "Premiere render washed out" for examples of the confusing time hole that is our industry.
Learn the basics of calibration with SMPTE bars. This is where you start.
What’s the point of the CIE 1931 graph? Learn Here
Device Independent Color Space
Calibration vs. Characterization/Profiling
"Calibration" is a lot like what you do with the SMPTE bars: using the display's own controls to adjust things so the display gets as close as possible to the target standard.
Calibrating a “computer screen” vs “video monitor”
LUTs and Matrices 101
ICC profiles are not the same as LUTs
- Light Illusion's Lightspace (Windows-only)
- CalMan
- ArgyllCMS
- DisplayCal (based on Argyll)
- HCFR (Windows-only, based on Argyll): great for checking a display and correcting gamut and white point with the display's controls, but no 3D LUT creation ability
Graeme has written his own driver, which allows X-rite hardware to be used on Linux and Android and means he doesn't have to comply with X-rite's licensing conditions. The driver also enables a 'hi-res' spectral mode that lets spectros sample at higher resolutions.
DisplayCal and Argyll are built around the idea of ICC profiling, common to color management on computers for many years, and it gets confusing. Profiling is just measuring what a display is capable of: the software displays a series of patches, the probe measures the results, and the resulting "profile" is basically a description of what that display can do, i.e. what colors any given RGB values fed to the display will produce.

Here's where things get confusing. Calibration also produces a system-wide, 1D (gamma-only) LUT that gets your white point in check and fixes grayscale issues across the luminance range. It's applied via the Video Card Gamma Lookup Table and, because it's applied by the video card, it's system-wide. But that's not what's "fixing" the majority of your color. Neither is the profile; again, the profile is just a "characterization" of your monitor.

So what is "transforming" your data, morphing the "bad colors" into the "correct colors"? That's the responsibility of the software you're using. Color-managed software like Photoshop will read the metadata in the photo you open and see that this photo of your grandmother is sRGB. It will also read the ICC profile associated with the display (properly selected under "Displays" in the Mac's System Preferences), which tells Photoshop that your particular monitor's primaries are, say, something close to the P3 color space. Knowing both the source and destination color spaces, Photoshop has the information necessary to display the photo correctly on your monitor and to let you "convert" to whatever color space you intend to output to. And, importantly, all of this happens beneath that 1D system LUT, so if that LUT isn't 'on,' your calibration is inaccurate.
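The transform a color-managed app performs is, at its core, plain matrix math. Here's a minimal sketch assuming an sRGB source and a P3-primaries display; the matrices are standard published approximations, and a real ICC transform would additionally use the per-channel tone curves measured into the profile rather than the idealized curve assumed here.

```python
# Sketch: what a color-managed app conceptually does with an sRGB image
# shown on a Display P3 monitor. The matrices are textbook values; a real
# ICC transform uses the profile's measured curves and tables.

def srgb_decode(c):
    # sRGB transfer function -> linear light
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_encode(c):
    # linear light -> encoded value (same curve assumed for the display)
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Linear sRGB -> XYZ (D65 white)
SRGB_TO_XYZ = [[0.4124, 0.3576, 0.1805],
               [0.2126, 0.7152, 0.0722],
               [0.0193, 0.1192, 0.9505]]

# Linear Display P3 -> XYZ (D65 white); inverted below to go XYZ -> P3
P3_TO_XYZ = [[0.4866, 0.2657, 0.1982],
             [0.2290, 0.6917, 0.0793],
             [0.0000, 0.0451, 1.0439]]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def invert3(m):
    # Cofactor inversion of a 3x3 matrix
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

XYZ_TO_P3 = invert3(P3_TO_XYZ)

def srgb_to_display(rgb):
    linear = [srgb_decode(c) for c in rgb]
    xyz = mat_vec(SRGB_TO_XYZ, linear)       # source -> device-independent
    p3_linear = mat_vec(XYZ_TO_P3, xyz)      # device-independent -> display
    return [srgb_encode(c) for c in p3_linear]

# A saturated sRGB red needs *less* red drive on the wider-gamut display:
print(srgb_to_display([1.0, 0.0, 0.0]))
# White maps to white, because both spaces share a D65 white point:
print(srgb_to_display([1.0, 1.0, 1.0]))
```

Note the route through XYZ: the device-independent middle step is what lets any source space pair with any characterized display.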
Bear with an odd analogy: Bill has a sensitive tongue and everything tastes salty to him. Jean has a semi-deficient tongue and adds salt to everything. You, the self-advertised taste-managed baker, are responsible for adjusting salt levels to please both clients. A hundred years ago you tested a sampling of the earth's populace to measure how they responded to this phenomenon of "saltiness." This gave you a concrete way to relate how people perceive something as "salty" to a quantifiable amount of salt from a molecular perspective. You now know that 150 milligrams of salt is the objective "perfect amount" to create a satisfyingly salty dinner roll for the average tongue. But, because you know your clients, you've characterized them: Bill will require less salt and Jean more to receive the same sensation. This is why you are the taste-managed baker. You know the measurements related to the experience you want for the end user, and you make the adjustment dynamically for a perfect fit.
But you hire an apprentice who happens to be there begrudgingly as part of an internship and against his free will. You can’t trust him to know your customers and craft with the care of your customized culinary creations. He’ll need a recipe to follow and that recipe will be specific to each customer. There will be a Bill recipe and a Jean recipe. This is the 3D LUT approach.
Where things get additionally confusing (yipeee) is
OK, so that makes sense for calibrating a computer display. Now how about a video display? Firstly, the video world is used to a simpler form of color management. Because of that, I think it's actually easier to understand as well as to implement correctly. Remember how the software was responsible for doing the actual color transformation in the case of the computer display? Here, that responsibility is handled by a 3D Color Lookup Table, or LUT, as we mentioned before. This LUT is "dumb" in that it's specific: it can't intelligently "read" the color space of the incoming signal, and it has no idea what display device it's connected to. It's a "pre-baked" transform between one gamut and another, valid only in that specific situation.
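A 3D LUT really is that dumb: a grid of output colors indexed by input RGB, with interpolation between grid points. Here's a minimal sketch in pure Python, assuming no particular LUT file format:

```python
# Minimal 3D LUT application with trilinear interpolation.
# lut[r][g][b] holds an output (R, G, B) triple for grid point (r, g, b);
# inputs between grid points are blended from the 8 surrounding points,
# which is what a LUT box or player software does for every pixel.

def make_identity_lut(size):
    # An identity LUT: output equals input at every grid point.
    step = 1.0 / (size - 1)
    return [[[(r * step, g * step, b * step)
              for b in range(size)]
             for g in range(size)]
            for r in range(size)]

def apply_lut(lut, rgb):
    size = len(lut)
    out = [0.0, 0.0, 0.0]
    # Locate the cell containing the input and the fractional position in it.
    idx, frac = [], []
    for c in rgb:
        x = min(max(c, 0.0), 1.0) * (size - 1)
        i = min(int(x), size - 2)
        idx.append(i)
        frac.append(x - i)
    ri, gi, bi = idx
    rf, gf, bf = frac
    # Blend the 8 corners of the cell (trilinear interpolation).
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((rf if dr else 1 - rf) *
                     (gf if dg else 1 - gf) *
                     (bf if db else 1 - bf))
                p = lut[ri + dr][gi + dg][bi + db]
                for k in range(3):
                    out[k] += w * p[k]
    return out

lut = make_identity_lut(17)  # 17-point grids are common for video LUTs
print(apply_lut(lut, (0.3, 0.5, 0.9)))  # identity LUT: returns the input
```

The grid is coarse (17³ points covering millions of input colors), which is why interpolation quality and grid size matter when a LUT encodes an aggressive transform.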
"Profiling always creates a device profile. A device link profile would be the ICC equivalent of a video 3D LUT that maps a specified source (target) color space (e.g. Rec. 709) into the display gamut. You first need a device profile (characterisation), then you can create device link profile(s) / 3D LUT(s) with the desired target color space(s)."
"In a normal workflow situation, the color space of the input device is transformed to the color space of the output device via the device-independent L*a*b* color space (known as the profile connection space). This process requires two different profiles — a source profile and a destination profile."

"A device link profile is a special kind of ICC profile that converts the color space of the input device directly into the color space of the output device, whereby the output device can be either a physical printer or a file format. Unlike ordinary source or destination profiles, they do not describe a specific color space, but define the conversion from a source color space to a destination color space. The basis for creating a device link profile is, therefore, always an ordinary ICC profile."
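The "pre-baking" that a device link profile or video 3D LUT performs can be sketched as sampling an entire source-to-display transform onto a grid, once, up front. The transform below is a stand-in (a simple per-channel gamma tweak); a real one would come from the display's measured characterization.

```python
# Sketch of how a 3D LUT / device link gets "baked": walk an RGB grid,
# push every grid point through the full source->display transform once,
# and store only the results. Whatever applies the LUT later needs no
# knowledge of either color space.

def bake_lut(transform, size=17):
    step = 1.0 / (size - 1)
    return [[[transform((r * step, g * step, b * step))
              for b in range(size)]
             for g in range(size)]
            for r in range(size)]

# Stand-in transform: a gamma tweak per channel. In reality this would be
# source decode -> gamut conversion -> the display's inverse response.
def example_transform(rgb):
    return tuple(c ** (2.4 / 2.2) for c in rgb)

lut = bake_lut(example_transform)
# Each entry is a fully evaluated output color, e.g. the pure-red corner:
print(lut[16][0][0])
```

This is why a LUT is only valid for one specific source/display pairing: the two color spaces are fused into the table and can't be separated again.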
Colorimeter
- 3 RGB filters make it device-specific
- Can be 'corrected' via a spectral file from a spectro, or via a matrix that is a specific transform between your specific device and your specific display
- Requires an emissive source
- Cheaper
- Better-performing in low light
“The unit-to-unit variation accuracy of the i1d3 is non-trivial. I have had some of them in our lab that are scarily accurate, most are not that good, and a few are not very good at all. The typical i1d3 would benefit from a i1Pro2 profile, though the details would depend on the display type.”
Spectroradiometer
- More costly
- Measures actual wavelength of light, unfiltered
- Works on passive sources like a printed calibration page
- Slower in operation; struggles in low light
- You pay more for narrower-bandwidth sampling 'resolution'
Great Resource
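The matrix correction mentioned above is just a 3x3 multiply applied to the colorimeter's raw XYZ readings. The correction matrix below is illustrative only; a real one is derived by measuring the same patches with your colorimeter and a reference spectro on your specific display.

```python
# Applying a colorimeter correction matrix: the probe's raw XYZ reading
# is multiplied by a 3x3 matrix derived for that probe/display pair.

def correct_reading(matrix, xyz):
    return [sum(matrix[r][c] * xyz[c] for c in range(3)) for r in range(3)]

# Hypothetical correction for one probe/display combination; values are
# made up for illustration. Real corrections are usually close to identity.
CORRECTION = [[1.02, -0.01, 0.00],
              [0.01,  0.99, 0.00],
              [0.00,  0.00, 1.03]]

raw = [95.0, 100.0, 108.0]   # raw XYZ reading off a white patch
print(correct_reading(CORRECTION, raw))
```

With an identity matrix the reading passes through unchanged, which is the sanity check most calibration software runs when no correction is loaded.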
Video vs. Data Levels
Color modification in linear (Broken Computer Color Video)
Why two monitors won't look the same and how to (rather manually) fix it. Create a custom white point.
Determine your display’s specs here.
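The "video vs. data levels" distinction above boils down to a range remap: in 8-bit "video" (legal) levels, black sits at code 16 and white at 235, while "data" (full) levels use 0–255. Interpreting one as the other is exactly the washed-out or crushed look mentioned earlier. A minimal sketch:

```python
# Video (legal) levels: black = 16, white = 235 in 8-bit.
# Data (full) levels:  black = 0,  white = 255.
# Misinterpreting one as the other washes out or crushes the image.

def video_to_full(code):
    # Map 16-235 onto 0-255, clipping sub-black and super-white codes.
    out = (code - 16) * 255.0 / (235 - 16)
    return min(max(round(out), 0), 255)

def full_to_video(code):
    # Map 0-255 into the 16-235 legal range.
    return round(16 + code * (235 - 16) / 255.0)

print(video_to_full(16))   # video black -> 0
print(video_to_full(235))  # video white -> 255
print(full_to_video(255))  # full white  -> 235
```

Display pipelines do this silently based on flags in the signal or file, which is why a wrong or missing flag (not a bad calibration) is so often the culprit.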
Narrow spectral primaries on technology like laser projection and OLED mean different people observe the same color differently: "A new standard (such as the TC 1-36 2012 proposed observer) may improve average match slightly, but can't possibly address individual variation. Effective methods to address individual variation are to broaden the primaries on the displays, perhaps moving in the direction of spectral reproduction, or to have a practical means of measuring individuals' CMFs. The latter means that only one person at a time can see the display perfectly calibrated though." – Graeme