Automatic color correction with OpenCV and Python


In this tutorial, you will learn how to perform automatic color correction with OpenCV using a color matching/balancing card.

Last week we discovered how to perform histogram matching. Using histogram matching, we can take the color distribution of one image and match it to another.
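
If you haven’t read that tutorial yet, the core idea fits in just a few lines of scikit-image. Here is a minimal sketch (the file names are placeholders, not files from this project):

# minimal histogram matching sketch -- the file names are placeholders
import cv2
from skimage import exposure

src = cv2.imread("source.jpg")    # image whose colors we want to adjust
ref = cv2.imread("target.jpg")    # image whose color distribution we want to match

# transfer the per-channel color distribution of ref onto src
# (older scikit-image releases use multichannel=True; newer ones use channel_axis=-1)
matched = exposure.match_histograms(src, ref, multichannel=True)

cv2.imshow("Matched", matched)
cv2.waitKey(0)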

A practical, real-world application of color matching is to perform basic color correction through color constancy. The goal of color constancy is to perceive the colors of objects correctly regardless of differences in light sources, illumination, etc. (which, as you can imagine, is easier said than done).

Photographers and computer vision practitioners can help obtain color constancy by using color correction cards, like this one:

Using a color correction/color constancy card, we can:

  1. Detect the color correction card in an input image
  2. Compute the histogram of the card, which contains graded patches of varying colors, hues, and shades, as well as blacks, whites, and grays
  3. Apply histogram matching from the color card to another image, thereby attempting to achieve color constancy

In this tutorial, we’ll build a color correction system with OpenCV by putting together all the pieces we’ve learned from previous tutorials on:

  1. Detecting ArUco markers with OpenCV and Python
  2. OpenCV Histogram Equalization and Adaptive Histogram Equalization (CLAHE)
  3. Histogram matching with OpenCV, scikit-image, and Python

By the end of the guide, you will understand the fundamentals of how color correction cards can be used in conjunction with histogram matching to build a basic color corrector, regardless of the illumination conditions under which an image was captured.

To learn how to perform basic color correction with OpenCV, just keep reading.

Automatic color correction with OpenCV and Python

In the first part of this tutorial, we’ll discuss what color correction and color constancy are, including how OpenCV can facilitate automatic color correction.

We’ll then configure our development environment for this project and review our project directory structure.

With our development environment ready, we’ll implement a Python script that leverages OpenCV to perform color correction.

We’ll wrap up this tutorial with a discussion of our results.

What is automatic color correction?

The human visual system is impacted significantly by illumination and light sources. Color constancy refers to the study of how humans perceive color.

For example, take a look at the following image from the Wikipedia article on color constancy:

Figure 2: In the top and bottom photos, examine the second card from the left (i.e., the pink one). In the upper image, the card appears to be a stronger shade of pink versus the lower photo, where the pink is more subdued. Both of these cards have the same RGB values, but our perception is impacted by the photo’s color cast (image source).

Looking at the top photo, the pink shade (second from the left) appears substantially stronger than the same pink shade in the bottom photo. But as it turns out, they are the same color!

Both these cards have the same RGB values. However, our human color perception system is affected by the color cast of the rest of the photo (i.e., applying a warm red filter on top of it).

That creates a bit of a problem if we seek to normalize our image processing environment. As I stated in my previous tutorial on Detecting low contrast images:

It’s far easier to write code for images captured in controlled conditions than in dynamic conditions with no guarantees.

The more we can control our image capture environment, the easier it will be to write code to analyze and process the images captured in that environment.

Think about it this way: suppose we can safely assume the lighting conditions of an environment. In that case, we can ditch expensive computer vision/deep learning algorithms that help us obtain desirable results in non-ideal conditions. Instead, we can leverage basic image processing routines and hardcode parameters such as Gaussian blur kernel sizes, Canny edge detection thresholds, and so on.
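
To make that concrete, here is a sketch of what such a hardcoded pipeline might look like; the kernel size and thresholds below are illustrative values, not ones tuned for any particular scene:

# hardcoded parameters -- only safe when the capture environment
# (lighting, distance, noise) is controlled and consistent
import cv2

image = cv2.imread("frame.jpg")               # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # fixed Gaussian blur kernel size
edges = cv2.Canny(blurred, 50, 150)           # fixed Canny thresholds

cv2.imshow("Edges", edges)
cv2.waitKey(0)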

Essentially, with controlled environments, we can get away with basic image processing algorithms that are far easier to implement. The catch is that we need safe assumptions on our lighting conditions. Color correction and white balancing help us achieve that.

One way we can help control our environment, even if lighting conditions change a bit, is to apply color correction.

Color checking cards are a favorite tool of photographers:

Figure 3: An example of a color checking card often used by professional photographers (image source).

Photographers place these cards into scenes they are capturing. They then snap photos, adjusting their lighting (while still keeping the card in view of the camera), and then continue shooting until they are done.

After shooting, they go back to their computer, transfer the photos onto their system, and use a tool such as Adobe Lightroom to achieve color consistency across the entire shoot (here’s a tutorial on doing that process if you are interested).

Of course, as computer vision practitioners, we do not have the luxury of using Adobe Lightroom, nor would we want to start/stop our pipeline by manually adjusting color balancing — defeating the entire purpose of using software to automate real-world processes.

Instead, we can leverage these same color correction cards, and along with a bit of histogram matching, we can build a system capable of performing color correction.

In the rest of this guide, you will utilize histogram matching and a color correction card (from Pantone) to perform basic color correction.

Pantone’s color correction card

Figure 4: An example of Pantone’s color matching card (image source).

For this tutorial, we’ll be using Pantone’s Color Match card.

This card is similar to a color correction card that photographers use but is instead used by Pantone to help their consumers match perceived colors in a scene to a shade of paint (most similar to that color) that Pantone sells.

The general idea is that:

  1. You place the color correction card over the shade you want to match
  2. You open Pantone’s smartphone app on your phone
  3. You snap a photo of the card
  4. The app automatically detects the card, performs color matching, and then returns the most similar shades that Pantone sells

For our purposes, we’ll be using the card strictly for color correction (but you could easily extend it as you see fit).

Configuring your development environment

To learn how to perform automatic color correction, you need to have both OpenCV and scikit-image installed:

Both are pip-installable using the following commands:

$ pip install opencv-contrib-python
$ pip install scikit-image
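
A quick way to confirm that both packages are importable is to print their versions:

$ python -c "import cv2, skimage; print(cv2.__version__, skimage.__version__)"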

If you need help configuring your development environment for OpenCV, I highly recommend that you read my pip install OpenCV guide — it will have you up and running in a matter of minutes.

Having problems configuring your development environment?

Figure 5: Having trouble configuring your development environment? Want access to pre-configured Jupyter Notebooks running on Google Colab? Be sure to join PyImageSearch Plus — you’ll be up and running with this tutorial in a matter of minutes.

All that said, are you:

  • Short on time?
  • Learning on your employer’s administratively locked system?
  • Wanting to skip the hassle of fighting with the command line, package managers, and virtual environments?
  • Ready to run the code right now on your Windows, macOS, or Linux systems?

Then join PyImageSearch Plus today!

Gain access to Jupyter Notebooks for this tutorial and other PyImageSearch guides that are pre-configured to run on Google Colab’s ecosystem right in your web browser! No installation required.

And best of all, these Jupyter Notebooks will run on Windows, macOS, and Linux!

Project structure

While color matching and color correction may seem like a complicated process, as we’ll find out, we’ll be able to complete the entire project in just under 100 lines of code (including comments).

But before we start coding, let’s first review our project directory structure.

Start by accessing the “Downloads” section of this tutorial to retrieve the source code and example images — then take a look at the folder:

$ tree . --dirsfirst
.
├── examples
│   ├── 01.jpg
│   ├── 02.jpg
│   └── 03.jpg
├── color_correction.py
└── reference.jpg

1 directory, 5 files

We have a single Python script to review today, color_correction.py. This script will:

  1. Load our reference.jpg image (which contains our Pantone color correction card)
  2. Load one of the images in the examples directory (which we’ll color correct to match that of reference.jpg)
  3. Detect the color matching card via ArUco marker detection in both the reference and input image
  4. Apply histogram matching to round out the color correction process

Let’s get to work!

Implementing automatic color correction with OpenCV

We are now ready to implement color correction with OpenCV and Python.

Open the color_correction.py file in your project directory structure, and let’s get to work:

# import the necessary packages
from imutils.perspective import four_point_transform
from skimage import exposure
import numpy as np
import argparse
import imutils
import cv2
import sys

We start on Lines 2-8, importing our required Python packages. The notable ones include:

  • four_point_transform: Applies a perspective transform to obtain a top-down, bird’s-eye view of the input color matching card. See the following tutorial for an example of using this function.
  • exposure: Contains the histogram matching function from scikit-image.
  • imutils: My set of convenience functions for performing image processing with OpenCV.
  • cv2: Our OpenCV bindings.

With our imports taken care of, we can move on to defining the find_color_card function, the method responsible for locating the Pantone color matching card in an input image:

def find_color_card(image):
	# load the ArUCo dictionary, grab the ArUCo parameters, and
	# detect the markers in the input image
	arucoDict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_ARUCO_ORIGINAL)
	arucoParams = cv2.aruco.DetectorParameters_create()
	(corners, ids, rejected) = cv2.aruco.detectMarkers(image,
		arucoDict, parameters=arucoParams)

Our find_color_card function requires only a single parameter, image, which is the image that (presumably) contains our color matching card.

From there, Lines 13-16 perform ArUco marker detection to find the four ArUco markers on the color matching card itself.
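
One note on versions: the code above uses the classic cv2.aruco interface. In OpenCV 4.7+ the ArUco module was reworked, and Dictionary_get/DetectorParameters_create no longer exist. If you are on a newer build, a sketch of the equivalent detection code looks like this:

# equivalent ArUco detection on OpenCV 4.7+ (a sketch of a drop-in
# replacement for Lines 13-16; `image` is the input to find_color_card)
import cv2

arucoDict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_ARUCO_ORIGINAL)
arucoParams = cv2.aruco.DetectorParameters()
detector = cv2.aruco.ArucoDetector(arucoDict, arucoParams)
(corners, ids, rejected) = detector.detectMarkers(image)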

Next, let’s order the four ArUco markers in top-left, top-right, bottom-right, and bottom-left order (the required order for applying a top-down perspective transform):

	# try to extract the coordinates of the color correction card
	try:
		# otherwise, we've found the four ArUco markers, so we can
		# continue by flattening the ArUco IDs list
		ids = ids.flatten()

		# extract the top-left marker
		i = np.squeeze(np.where(ids == 923))
		topLeft = np.squeeze(corners[i])[0]

		# extract the top-right marker
		i = np.squeeze(np.where(ids == 1001))
		topRight = np.squeeze(corners[i])[1]

		# extract the bottom-right marker
		i = np.squeeze(np.where(ids == 241))
		bottomRight = np.squeeze(corners[i])[2]

		# extract the bottom-left marker
		i = np.squeeze(np.where(ids == 1007))
		bottomLeft = np.squeeze(corners[i])[3]

	# we could not find color correction card, so gracefully return
	except:
		return None

First, we wrap this entire code block in a try/except block. We do this in case not all four markers can be detected: if even a single np.where call fails to find its marker ID, the subsequent indexing will throw an error.

Our try/except block will catch the error and return None, implying that the color correction card could not be found.

Otherwise, Lines 25-38 extract each of the individual ArUco markers in top-left, top-right, bottom-right, and bottom-left order.

Note: You may be wondering how I knew the IDs for each of the markers were going to be 923, 1001, 241, and 1007. That is addressed in my previous set of tutorials on ArUco marker detection. Be sure to give those tutorials a read if you haven’t yet.
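
If you would rather avoid the broad try/except, an explicit check works just as well. The helper below is a hedged sketch (the function name is mine, not part of the tutorial’s code):

# an explicit alternative to the bare try/except: verify that all four
# expected marker IDs were detected before indexing into `corners`
import numpy as np

REQUIRED_IDS = (923, 1001, 241, 1007)  # top-left, top-right, bottom-right, bottom-left

def all_markers_found(ids):
	# detectMarkers returns ids=None when no markers are found at all
	if ids is None:
		return False
	detected = set(ids.flatten().tolist())
	return all(markerID in detected for markerID in REQUIRED_IDS)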

Provided we found all four ArUco markers, we can now apply the perspective transform:

	# build our list of reference points and apply a perspective
	# transform to obtain a top-down, bird’s-eye view of the color
	# matching card
	cardCoords = np.array([topLeft, topRight,
		bottomRight, bottomLeft])
	card = four_point_transform(image, cardCoords)

	# return the color matching card to the calling function
	return card

Lines 47-49 build a NumPy array from our ArUco marker coordinates and then apply the four_point_transform function to obtain a top-down, bird’s-eye view of the color correction card.

This top-down view of the card is returned to the calling function.
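
If you are curious what four_point_transform does internally, it essentially orders the four points and then calls cv2.getPerspectiveTransform and cv2.warpPerspective. Here is a condensed sketch of that idea, assuming pts is already a (4, 2) NumPy array ordered top-left, top-right, bottom-right, bottom-left:

# a condensed sketch of the warp behind four_point_transform, assuming the
# points are already ordered (tl, tr, br, bl)
import cv2
import numpy as np

def top_down_view(image, pts):
	pts = pts.astype("float32")
	(tl, tr, br, bl) = pts

	# size the output rectangle from the distances between the corners
	width = int(max(np.linalg.norm(br - bl), np.linalg.norm(tr - tl)))
	height = int(max(np.linalg.norm(tr - br), np.linalg.norm(tl - bl)))

	# map the source quadrilateral onto an upright rectangle
	dst = np.array([[0, 0], [width - 1, 0], [width - 1, height - 1],
		[0, height - 1]], dtype="float32")
	M = cv2.getPerspectiveTransform(pts, dst)
	return cv2.warpPerspective(image, M, (width, height))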

With our find_color_card function implemented, let’s move on to parsing command line arguments:

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-r", "--reference", required=True,
	help="path to the input reference image")
ap.add_argument("-i", "--input", required=True,
	help="path to the input image to apply color correction to")
args = vars(ap.parse_args())

To perform color matching, we need two images:

  1. The path to the --reference image, which contains the scene captured under the “ideal” conditions to which we want to correct any input image.
  2. The path to the --input image, which we assume has a different color distribution, presumably due to changes in lighting conditions.

Our goal is to take the --input image and perform color matching such that its distribution matches that of the --reference image.

But before we can do that, we need to load the reference and source images from disk:

# load the reference image and input images from disk
print("[INFO] loading images...")
ref = cv2.imread(args["reference"])
image = cv2.imread(args["input"])

# resize the reference and input images
ref = imutils.resize(ref, width=600)
image = imutils.resize(image, width=600)

# display the reference and input images to our screen
cv2.imshow("Reference", ref)
cv2.imshow("Input", image)

Lines 64 and 65 load our input images from disk, while Lines 68 and 69 preprocess them by resizing to a width of 600 pixels (to process the images faster).

Lines 72 and 73 then display the original ref and image to our screen.

With our images loaded, let’s now apply the find_color_card function to both images:

# find the color matching card in each image
print("[INFO] finding color matching cards...")
refCard = find_color_card(ref)
imageCard = find_color_card(image)

# if the color matching card is not found in either the reference
# image or the input image, gracefully exit
if refCard is None or imageCard is None:
	print("[INFO] could not find color matching card in both images")
	sys.exit(0)

Lines 77 and 78 attempt to locate the color matching card in both the ref and image.

If we cannot find the color matching card in either image, we gracefully exit the script (Lines 82-84).

Otherwise, we can safely assume we found the color matching card, so let’s apply color correction:

# show the color matching card in the reference image and input image,
# respectively
cv2.imshow("Reference Color Card", refCard)
cv2.imshow("Input Color Card", imageCard)

# apply histogram matching from the color matching card in the
# reference image to the color matching card in the input image
print("[INFO] matching images...")
imageCard = exposure.match_histograms(imageCard, refCard,
	multichannel=True)

# show our input color matching card after histogram matching
cv2.imshow("Input Color Card After Matching", imageCard)
cv2.waitKey(0)

Lines 88 and 89 display our refCard and imageCard to our screen.

We then apply the match_histograms function to transfer the color distribution from the refCard to the imageCard.

Finally, the output imageCard, after histogram matching, is displayed on our screen. This new imageCard now contains the color corrected version of the original imageCard.
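
One small compatibility note: in newer scikit-image releases (0.19 and later), the multichannel argument of match_histograms was deprecated in favor of channel_axis, so on a recent install the call above becomes:

# equivalent call on scikit-image >= 0.19, where multichannel was replaced
imageCard = exposure.match_histograms(imageCard, refCard, channel_axis=-1)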

Automatic color correction results

We are now ready to perform automatic color correction with OpenCV!

Be sure to access the “Downloads” section of this tutorial to retrieve the source code and example images.

From there, you can open a shell and execute the following command:

$ python color_correction.py --reference reference.jpg \
	--input examples/01.jpg
[INFO] loading images...
[INFO] finding color matching cards...
[INFO] matching images...
Figure 6: Left: Our reference image. Note the shade of teal placed in the center of the card. Right: Our input image. Here you can see that the shade of teal is brighter than the shade in the reference image. Our goal is to apply color matching/correction to resolve this discrepancy.

On the left, we have our reference image. Notice how we placed the color correction card over a shade of teal. Our goal here is to ensure that shade of teal is consistent across all input images, regardless of how lighting conditions change.

Now, examine the photo on the right. This is our example input image. You can see that due to lighting conditions, the shade of teal is slightly brighter than the shade of teal in the reference image.

How can we correct this appearance?

The answer is to apply color correction:

Figure 7: Left: Detecting the color matching card in the reference image. Middle: Extracting the color card from the input image. Right: Output after applying color matching. Notice how the shade of teal on the right more closely resembles the shade of teal in the reference image.

On the left, we have detected the color card in the reference image. The middle shows the color card from the input image. And finally, the right displays the input color card after color matching.

Notice how the shade of teal on the right more closely resembles the shade of teal in the reference image (i.e., the shade of teal on the right is darker than the one in the middle).

Let’s try another image:

$ python color_correction.py --reference reference.jpg \
	--input examples/02.jpg
[INFO] loading images...
[INFO] finding color matching cards...
[INFO] matching images...
Figure 8: Left: Our reference image. Right: The image to which we wish to apply color correction.

Again, we start with our reference image (left) and our input image (right), to which we seek to apply color correction.

Below is our output after applying color matching:

Figure 9: Left: The detected color matching card from the reference image. Middle: Detecting the color matching card from the input image. Right: Output of applying histogram matching.

The left contains the color matching card from the reference image, while the middle displays the color matching card from the input image (02.jpg). You can see that the shade of teal in the middle image is significantly brighter than the shade of teal on the left.

By applying color matching and correction, we can correct this disparity (right). Notice how the shades of teal on the left and right more similarly match each other.

Here is one final example:

$ python color_correction.py --reference reference.jpg \
	--input examples/03.jpg
[INFO] loading images...
[INFO] finding color matching cards...
[INFO] matching images...
Figure 10: Left: Our reference image for histogram matching. Right: Our input image. Notice there is additional shadowing in this image due to the lighting environment.

Here, the lighting conditions are significantly different from the previous two. The image on the left is our reference image (captured in my office), while the image on the right is the input image (captured in my bedroom).

Due to the windows in the bedroom and how the sun was entering the windows that day, there is significant shadowing on the right side of the color matching card, thereby making this more of a challenge (and demonstrating some of the limitations of this basic color correction method).

Below is the output of applying color correction via histogram matching:

Figure 11: Left: Detecting the color matching card in the reference image. Middle: The color matching card from the input image. Right: Output of applying color correction with OpenCV.

The left image is the color matching card from our reference image. We then have the detected color correction card from our input image (03.jpg).

Applying histogram matching yields the right image. While we still have shadowing, we can see that the brighter teal color from the middle has been corrected to more similarly match the original darker teal color from the reference image.

What’s next?

Figure 12: Regardless of your prior experience, PyImageSearch Plus will help you level-up your skills.

How was that? Are you feeling confident about performing basic color correction with OpenCV?

If you’re the sort of person who likes learning by doing, why don’t you join us inside PyImageSearch Plus?

For years, PyImageSearch readers have emailed to ask me, “Adrian, can’t you just teach me this stuff? I know I’d master computer vision, deep learning, and OpenCV a lot faster if I could just work with you.”

Now you can! The doors to PyImageSearch Plus are OPEN!

If you want to access centralized code repositories of high-quality source code for all 400+ PyImageSearch blog posts, run Jupyter Notebooks in pre-configured Google Colab instances, and watch video tutorials for every new weekly blog post, join us!

This kind of hands-on training simply isn’t available anywhere else online. And our members love it.

PyImageSearch Plus is really the best Computer Visions ‘Masters’ Degree that I wish I had when starting out.Being able to access all of Adrian’s blog posts in a single, indexed page, and being able to start playing around with the code without going through the nightmare of setting up everything is just amazing. 10/10! Recommend.

— Sanyam B. Machine Learning Engineer and Kaggle x2 Expert

If you’re ready to dive into computer vision and deep learning training, this is the resource you’ve been looking for. Fast, practical — and affordable for every budget, too. Find out more here.

Summary

In this tutorial, you learned how to perform basic color correction using OpenCV and Python.

We achieved this goal by:

  1. Placing a color correction card in the view of our camera
  2. Snapping a photo of the scene
  3. Detecting the color correction card with ArUco marker detection
  4. Applying histogram matching to transfer the color distribution of the card to another image

Taken together, we can think of this process as a color correction procedure (albeit quite basic).

Achieving pure color constancy, especially without markers/color correction cards, is still an active research area and will likely continue for many years to come. But in the meantime, we can leverage histogram matching and color matching cards to get us moving in the right direction.

To download the source code to this post (and be notified when future tutorials are published here on PyImageSearch), simply enter your email address in the form below!

Download the Source Code and FREE 17-page Resource Guide

Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!





