Improving background removal algorithm performance on overexposed images.
conda env create -f eden_transfer_learning.yml
Note: If you find any issues while executing the notebook, don't hesitate to open an issue on GitHub. We will reply to you as soon as possible.
In order to improve the performance of a deep-learning-based system, a pre-processing step known as background removal is often applied. This technique removes as much noise/background as possible from the image, so that the object of study appears as the main part of the picture. Background removal is useful not only for improving the performance of the deep neural network, but also for reducing overfitting.
In agriculture, several works have made use of background removal techniques to achieve better performance (Mohanty et al., 2016; McCool et al., 2017; Milioto et al., 2017; Espejo-Garcia et al., 2020).
Contrast is a measure of how bright and dark pixels are distributed in an image. Because of complex, dynamic lighting conditions or incorrect camera settings, the bright and dark areas of some images can blend together, producing images with a large number of either very dark or very bright pixels in which relevant features become significantly harder to distinguish. Consequently, this problem can reduce the effectiveness of background removal techniques such as the ones published on the Eden Platform.
When background removal algorithms are applied to overexposed images, performance may be suboptimal because foreground and background pixels lose their characteristic colors. As a result, regions of interest (foreground) are falsely classified as background and removed.
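As a quick, optional aside (not part of the original pipeline), one simple way to flag such overexposed images is to measure the fraction of near-saturated pixels in the brightness (V) channel; the helper below and its thresholds are illustrative assumptions only.

import cv2
import numpy as np

def is_overexposed(im_rgb, bright_thresh=240, fraction_thresh=0.25):
    # Hypothetical helper: flag an image as overexposed when a large share
    # of its brightness (V) channel is close to saturation.
    v = cv2.cvtColor(im_rgb, cv2.COLOR_RGB2HSV)[:, :, 2]
    bright_fraction = float(np.mean(v >= bright_thresh))
    return bright_fraction >= fraction_thresh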
In this notebook, we combine histogram equalization techniques with a background removal technique, more specifically the GrabCut algorithm. We keep this notebook self-contained and explain the most important steps, but studying the techniques covered in our previous notebooks, Histogram Equalization and Background Removal, is recommended.
Introduced by Rother et al. (2004), GrabCut is an algorithm for foreground extraction with minimal user interaction. The GrabCut algorithm works like this: the user provides an initial estimate of the foreground, typically a rectangle (or a mask) enclosing the object of interest, and everything outside it is treated as sure background. Gaussian Mixture Models are then fitted to the foreground and background color distributions, a graph is built over the pixels, and a min-cut optimization labels each pixel as foreground or background. These steps are repeated for a number of iterations, progressively refining the segmentation.
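For reference, this is roughly what the standard, rectangle-initialized call looks like in OpenCV. It is only a minimal sketch (the file name and rectangle are placeholders), and the implementation used later in this notebook initializes GrabCut with a custom mask instead.

import cv2
import numpy as np

image = cv2.imread('example.jpg')              # placeholder input image
mask = np.zeros(image.shape[:2], np.uint8)     # filled in by grabCut
bgm = np.zeros((1, 65), np.float64)            # background GMM model
fgm = np.zeros((1, 65), np.float64)            # foreground GMM model
rect = (10, 10, image.shape[1] - 20, image.shape[0] - 20)  # (x, y, w, h) around the object
cv2.grabCut(image, mask, rect, bgm, fgm, 5, cv2.GC_INIT_WITH_RECT)
# Keep the pixels labelled as sure or probable foreground, zero out the rest.
binary = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype('uint8')
result = image * binary[:, :, np.newaxis]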
Below, you can see an overexposed image on the left. On the right we have directly applied the background removal technique mentioned above.
The overexposure of the image degrades the performance of the GrabCut algorithm: it is clear that a large part of the object of interest (the plant) has been falsely classified as background and removed. Continue reading to see how this problem can be mitigated by pre-processing the image before applying the GrabCut algorithm.
import matplotlib.pyplot as plt
import numpy as np
import cv2
from tqdm import tqdm
from glob import glob
from pathlib import Path
%matplotlib inline
# Plot multiple numpy arrays at once
def plot_sample(X, title):
    nb_rows = 1  # number of rows
    nb_cols = 5  # number of columns
    # Setting up figure parameters
    fig, axs = plt.subplots(nb_rows, nb_cols, figsize=(18, 18))
    # Configuring title parameters
    plt.suptitle(title, verticalalignment='top', horizontalalignment='left', y=0.6, fontsize='x-large')
    index = 0
    for i in range(0, nb_rows):
        for j in range(0, nb_cols):
            axs[j].xaxis.set_ticklabels([])
            axs[j].yaxis.set_ticklabels([])
            axs[j].imshow(X[index])
            index += 1
# Reads data from the specified paths in path_list
def read_data(path_list, im_size=(128, 128)):
    X = []
    for path in path_list:
        for im_file in tqdm(glob(path + '*/*')):
            try:
                im = cv2.imread(im_file)
                # Resize to the appropriate dimensions. You can try different interpolation methods.
                im = cv2.resize(im, im_size, interpolation=cv2.INTER_LINEAR)
                # OpenCV reads images in BGR format by default; convert back to RGB.
                im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
                X.append(im)
            except Exception as e:
                # Skip files that are not images (e.g. annotations or metadata).
                print("Not a picture")
    X = np.array(X)  # Convert list to numpy array.
    return X
IM_SIZE = (256, 256)
# Datasets' paths we want to work on.
PATH_LIST = ['eden_data/Black nightsade-220519-Weed-zz-V1-20210225102034']
i = 0
for path in PATH_LIST:
    # Define paths in an OS-agnostic way.
    PATH_LIST[i] = str(Path(Path.cwd()).parents[0].joinpath(path))
    i += 1
X = read_data(PATH_LIST, IM_SIZE)
100%|██████████| 123/123 [00:29<00:00, 4.12it/s]
# Applies histogram equalization on RGB images
def equalize_rgb(im):
    NUM_CHANNELS = 3
    # Equalize each channel independently, keeping a trailing channel axis for merging.
    eqs = [cv2.equalizeHist(im[:, :, i])[:, :, np.newaxis] for i in range(NUM_CHANNELS)]
    equalized_image = cv2.merge((eqs[0], eqs[1], eqs[2]))
    return equalized_image
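If you want to see what global equalization actually does to the intensity distribution, the short sketch below (not part of the original notebook) plots the histogram of one channel before and after equalize_rgb; the image index is just an example.

# Sketch: compare the red-channel histogram before and after global equalization.
im = X[10]             # example image from the dataset loaded above
eq = equalize_rgb(im)
fig, axs = plt.subplots(1, 2, figsize=(10, 3))
axs[0].hist(im[:, :, 0].ravel(), bins=256, range=(0, 255))
axs[0].set_title('Red channel, original')
axs[1].hist(eq[:, :, 0].ravel(), bins=256, range=(0, 255))
axs[1].set_title('Red channel, equalized')
plt.show()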
For more information about the CLAHE algorithm, check our previous Eden notebook.
# Applies histogram equalization with the CLAHE technique on RGB images
def clahe_equal(image):
    # Work in HSV space and equalize only the brightness (V) channel.
    H, S, V = cv2.split(cv2.cvtColor(image, cv2.COLOR_RGB2HSV))
    clahe = cv2.createCLAHE(clipLimit=20.0, tileGridSize=(2, 2))
    eqV = clahe.apply(V)
    return cv2.cvtColor(cv2.merge([H, S, eqV]), cv2.COLOR_HSV2RGB)
# Grabcut Implementation
NUM_ITERS = 5  # Number of iterations for the GrabCut algorithm
# Find the contours of an object and draw them around it
def add_contours(image, mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if len(contours) != 0:
        # Draw all contours in red.
        cv2.drawContours(image, contours, -1, (255, 0, 0), 2)
        # Draw a green bounding box around the largest contour.
        c = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
# Apply the GrabCut algorithm
def remove_background(image):
    h, w = image.shape[:2]
    mask = init_grabcut_mask(h, w)
    bgm = np.zeros((1, 65), np.float64)
    fgm = np.zeros((1, 65), np.float64)
    # We initialize GrabCut with the mask above and let it run for NUM_ITERS iterations.
    # The mode must be cv2.GC_INIT_WITH_MASK since we initialize with a mask instead of a rectangle.
    cv2.grabCut(image, mask, None, bgm, fgm, NUM_ITERS, cv2.GC_INIT_WITH_MASK)
    # Keep only the pixels labelled as (probable) foreground.
    mask_binary = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
    result = cv2.bitwise_and(image, image, mask=mask_binary)
    add_contours(result, mask_binary)  # optional, adds visualizations
    return result
# Set up the initialization mask used by the GrabCut algorithm
def init_grabcut_mask(h, w):
    mask = np.ones((h, w), np.uint8) * cv2.GC_BGD       # A sure background pixel
    mask[h//10:9*h//10, w//10:9*w//10] = cv2.GC_PR_BGD  # A possible background pixel
    mask[h//4:3*h//4, w//4:3*w//4] = cv2.GC_PR_FGD      # A possible foreground pixel
    mask[2*h//5:3*h//5, 2*w//5:3*w//5] = cv2.GC_FGD     # A sure foreground pixel
    return mask
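To make the layout of this initialization mask easier to picture, here is a small optional snippet (not from the original notebook) that visualizes the nested regions it defines.

# Sketch: visualize the nested GrabCut initialization regions.
# GC_BGD = 0 (border), GC_FGD = 1 (centre), GC_PR_BGD = 2, GC_PR_FGD = 3.
init_mask = init_grabcut_mask(256, 256)
plt.imshow(init_mask)
plt.title('GrabCut initialization mask')
plt.colorbar()
plt.show()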
Histogram equalization is applied with two techniques: global histogram equalization on each RGB channel (equalize_rgb) and CLAHE on the brightness channel in HSV space (clahe_equal).
sub_X = X[10:15]  # Subset from the nightshade dataset.
filtered_images = []
filtered_images_Eq = []
filtered_images_CLAHE = []
for i in sub_X:
    # Apply background removal and append the result to the list
    filtered_images.append(remove_background(i))
    # Equalize the image, apply background removal and append the result to the list
    filtered_images_Eq.append(remove_background(equalize_rgb(i)))
    # Equalize the image with CLAHE, apply background removal and append the result to the list
    filtered_images_CLAHE.append(remove_background(clahe_equal(i)))
As you can see, there are cases where applying histogram equalization improves the accuracy of the background removal algorithm. In other cases the results remain pretty much the same, usually because they are already satisfactory.
In some rare cases, there might be a slight negative effect on the accuracy of the GrabCut algorithm after histogram equalization, but that usually happens when the algorithm does not converge well anyway.
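Since we have no ground-truth masks here, there is no exact accuracy figure to report. As a rough, purely illustrative proxy, the snippet below simply counts the non-black pixels retained by each pipeline for the images processed above (the contour drawings add a few pixels, so treat the numbers as indicative only).

# Sketch: rough proxy comparison - count the non-black (retained) pixels per result.
def retained_pixels(images):
    return [int(np.count_nonzero(im.sum(axis=2))) for im in images]

print('No equalization:', retained_pixels(filtered_images))
print('Global HE      :', retained_pixels(filtered_images_Eq))
print('CLAHE          :', retained_pixels(filtered_images_CLAHE))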
plot_sample(np.array(sub_X), "Original Images")
plot_sample(np.array(filtered_images), "Background Removal without Histogram Equalization")
plot_sample(np.array(filtered_images_Eq), "Background Removal with Histogram Equalization")
plot_sample(np.array(filtered_images_CLAHE), "Background Removal with CLAHE")
Rother, C., Kolmogorov, V., & Blake, A. (2004). "GrabCut": Interactive foreground extraction using iterated graph cuts. ACM SIGGRAPH 2004 Papers.
Mohanty, S. P., Hughes, D. P., & Salathé, M. (2016). Using deep learning for image-based plant disease detection. Frontiers in Plant Science, 7.
Espejo-García, B., Mylonas, N., Athanasakos, L., Fountas, S., & Vasilakoglou, I. (2020). Towards weeds identification assistance through transfer learning. Computers and Electronics in Agriculture, 171, 105306.
Wikipedia on Histogram Equalization: https://en.wikipedia.org/wiki/Histogram_equalization
Wikipedia on CLAHE: https://en.wikipedia.org/wiki/Adaptive_histogram_equalization#CLAHE
OpenCV on GrabCut: https://docs.opencv.org/3.4/d8/d83/tutorial_py_grabcut.html
OpenCV on Histogram Equalization: https://docs.opencv.org/3.4/d4/d1b/tutorial_histogram_equalization.html
OpenCV on CLAHE: https://docs.opencv.org/3.4/d6/db6/classcv_1_1CLAHE.html
Previous Notebooks: Histogram Equalization, Background Removal