conda env create -f eden_transfer_learning.yml
Note: If you find any issues while executing this notebook, don't hesitate to open an issue on GitHub. We will reply as soon as possible.
In this notebook, we are going to cover a technique called Transfer Learning, which generally refers to a process where a machine learning model is trained on one problem and afterwards reused in some way on a second, (probably) related problem (Bengio, 2012). In deep learning specifically, this is typically done by re-training only some layers of a pre-trained network. Its promise is that training will be more efficient and, in the best cases, performance will be better than that of a model trained from scratch.
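Before turning to Keras, the simplest "feature extraction" flavour of this idea can be illustrated in a few lines of plain NumPy (a toy sketch, not the workflow used later): the pre-trained weights stay frozen, and only a new head on top is trained on the target problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights come from a network pre-trained on another problem.
W_frozen = rng.normal(size=(4, 8))   # "Pre-trained" feature extractor (frozen).
W_head = np.zeros((8, 1))            # New head, trained on the target problem.

X = rng.normal(size=(32, 4))         # Toy target-task inputs.
y = (X[:, :1] > 0).astype(float)     # Toy binary labels.

def features(Z):
    # Frozen feature extractor: a fixed linear map followed by ReLU.
    return np.maximum(Z @ W_frozen, 0.0)

W_before = W_frozen.copy()
F = features(X)                      # Features are fixed; compute them once.
for _ in range(200):                 # Train ONLY the head (logistic regression).
    p = 1.0 / (1.0 + np.exp(-F @ W_head))
    grad = F.T @ (p - y) / len(X)    # Gradient of the cross-entropy loss.
    W_head -= 0.5 * grad             # Gradient step on the head weights only.

assert np.allclose(W_frozen, W_before)  # The frozen weights never changed.
```

In the Keras workflow below, `feature_extractor.trainable = False` plays the role of the frozen matrix, and the new `Dense` layers are the trainable head.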
In agriculture, weeds compete with crops for space, light and nutrients, making them an important problem that can lead to poorer harvests for farmers. To avoid this, weeds should be removed at every growth stage, but especially at the initial stages. For that reason, accurately identifying weeds with deep learning has become an important objective. Closely related is the disease detection problem, where transfer learning has also been used. Among the most relevant recent works, we can find:
Wang et al. (2017) used transfer learning to obtain the best neural-based method for disease detection in plants. They extended the apple black rot images in the PlantVillage dataset, which were further annotated by botanists with four severity stages as ground truth, and then evaluated the performance of shallow networks trained from scratch against deep models fine-tuned by transfer learning. Their best model was the VGG16 architecture trained with transfer learning, which yielded an overall accuracy of 90.4% on the hold-out test set.

In Mehdipour-Ghazi et al. (2017), the authors used the plant datasets of LifeCLEF 2015 and evaluated three popular deep learning architectures: GoogLeNet, AlexNet, and VGGNet. Their best combined system (GoogLeNet plus VGGNet) achieved an overall accuracy of 80% on the validation set and an overall inverse rank score of 0.752 on the official test set.

In Suh et al. (2018), the authors compared different transfer learning approaches to find one suitable for weed detection (volunteer potato). Their highest classification accuracy with AlexNet was 98.0%; comparing different networks, the highest accuracy was 98.7%, obtained with VGG-19. Additionally, all scenarios and pre-trained networks were feasible for real-time applications (classification time < 0.1 s).

Another relevant study was performed by Kounalakis et al. (2019), who evaluated transfer learning combining CNN-based feature extraction with linear classifiers to recognize rumex under real-world conditions. Their best system (Inception_v1 + L2-regularized logistic regression) achieved an accuracy of 96.13% with a false positive rate of 3.62%.

In Too et al. (2019), the authors used transfer learning for plant disease identification, achieving a performance of 99.75% with the DenseNet architecture.
Finally, in Espejo-Garcia et al. (2020), the authors pre-trained neural networks on agricultural datasets and afterwards fine-tuned them to classify four species extracted from the Eden Platform. Their maximum performance was 99.54%, obtained with the Xception architecture.
UPDATES
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import cv2
import os
from tqdm import tqdm
from glob import glob
from pathlib import Path
import tensorflow as tf
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications.xception import preprocess_input
from tensorflow.keras.layers import Flatten,Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping,ReduceLROnPlateau,ModelCheckpoint
import tensorflow.keras.backend as K
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
import random
import matplotlib.pyplot as plt
def denormalise(values):
    # Some functions need 1-d arrays.
    # This function transforms n-dimensional (one-hot) y to 1-d y.
    # Equivalent to np.argmax(values, axis=1).
    y_den = []
    for dist in values:
        y_den.append(np.argmax(dist))
    return y_den
# Function for plotting images.
def plot_sample(X):
    # Plot 9 sample images in a 3x3 grid.
    nb_rows = 3
    nb_cols = 3
    fig, axs = plt.subplots(nb_rows, nb_cols, figsize=(6, 6))
    for i in range(0, nb_rows):
        for j in range(0, nb_cols):
            axs[i, j].xaxis.set_ticklabels([])
            axs[i, j].yaxis.set_ticklabels([])
            axs[i, j].imshow(X[random.randint(0, X.shape[0] - 1)])
def read_data(path_list, im_size=(224, 224)):
    X = []
    y = []
    # Extract the folder names of the datasets we read and create a label dictionary.
    tag2idx = {tag.split(os.path.sep)[-1]: i for i, tag in enumerate(path_list)}
    for path in path_list:
        for im_file in tqdm(glob(path + '*/*')):  # Read all files in path.
            try:
                # os.path.sep is OS agnostic (either '/' or '\'); [-2] grabs the folder name.
                label = im_file.split(os.path.sep)[-2]
                im = cv2.imread(im_file)
                # Resize to the appropriate dimensions. You can try different interpolation methods.
                im = cv2.resize(im, im_size, interpolation=cv2.INTER_LINEAR)
                # By default OpenCV reads in BGR format; convert back to RGB.
                im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
                X.append(im)
                y.append(tag2idx[label])  # Append the encoded label to y.
            except Exception as e:
                # In case annotations or metadata are found.
                print("Not a picture")
    X = np.array(X)  # Convert list to numpy array.
    # One-hot encode the labels.
    y = np.eye(len(np.unique(y)))[y].astype(np.uint8)
    return X, y
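The `np.eye(...)[y]` indexing above is a compact way to one-hot encode integer labels: row `i` of the identity matrix is exactly the one-hot vector for class `i`. A small standalone example (the `to_categorical` utility imported from Keras produces the same matrix):

```python
import numpy as np

y = [0, 1, 1, 0, 1]                     # Integer class labels for two classes.
one_hot = np.eye(len(np.unique(y)))[y]  # Row i of the identity = one-hot vector for class i.
print(one_hot.astype(np.uint8))
# [[1 0]
#  [0 1]
#  [0 1]
#  [1 0]
#  [0 1]]
```

`denormalise` (i.e. `argmax` per row) inverts this encoding, which is why it is used before computing the F1 score at the end of the notebook.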
# Callbacks are used for saving the best weights and
# early stopping.
def get_callbacks(weights_file, patience, lr_factor):
    return [
        # Only save the weights that correspond to the maximum validation accuracy.
        ModelCheckpoint(filepath=weights_file,
                        monitor="val_accuracy",
                        mode="max",
                        save_best_only=True,
                        save_weights_only=True),
        # If val_loss doesn't improve for a number of epochs set with the 'patience' var,
        # training will stop to avoid overfitting.
        EarlyStopping(monitor="val_loss",
                      mode="min",
                      patience=patience,
                      verbose=1),
        # Learning rate is reduced by 'lr_factor' if val_loss stagnates
        # for a number of epochs set with the 'patience/2' var.
        ReduceLROnPlateau(monitor="val_loss", mode="min",
                          factor=lr_factor, min_lr=1e-6,
                          patience=patience // 2, verbose=1)]
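As a quick sanity check of the schedule this last callback produces (plain-Python arithmetic, assuming the starting learning rate of 0.001 used later in this notebook): each plateau multiplies the learning rate by `lr_factor` until `min_lr` is reached.

```python
# Learning-rate trajectory under repeated plateaus, for lr_factor = 0.25.
lr, lr_factor, min_lr = 0.001, 0.25, 1e-6
trajectory = [lr]
for _ in range(3):                    # Three successive plateaus.
    lr = max(lr * lr_factor, min_lr)  # ReduceLROnPlateau clips at min_lr.
    trajectory.append(lr)
print(trajectory)  # [0.001, 0.00025, 6.25e-05, 1.5625e-05]
```

The first two reductions (to 2.5e-4 and then 6.25e-05) are exactly what the training log below reports at epochs 7 and 12.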
# Plot learning curves for both training & validation accuracy and loss.
def plot_training_curves(history):
    # Define the metrics we will plot.
    train_acc = history.history['accuracy']
    val_acc = history.history['val_accuracy']
    train_loss = history.history['loss']
    val_loss = history.history['val_loss']
    # Range for the x axis.
    epochs = range(len(train_loss))
    fig, axs = plt.subplots(1, 2, figsize=(20, 10))  # Figure size (w, h) in inches.
    plt.rcParams.update({'font.size': 22})  # Configure font size.
    # Plot the loss figures.
    fig = plt.subplot(1, 2, 1)
    plt.plot(epochs, train_loss, c="red", label="Training Loss")
    plt.plot(epochs, val_loss, c="blue", label="Validation Loss")
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    plt.legend()
    # Plot the accuracy figures.
    fig = plt.subplot(1, 2, 2)
    plt.plot(epochs, train_acc, c="red", label="Training Acc")
    plt.plot(epochs, val_acc, c="blue", label="Validation Acc")
    plt.xlabel("Epochs")
    plt.ylabel("Accuracy")
    plt.legend()
INPUT_SHAPE = (224, 224, 3)
IM_SIZE = (224, 224)
EPOCHS = 50
BATCH_SIZE = 48
TEST_SPLIT = 0.15
VAL_SPLIT = 0.15
RANDOM_STATE = 2020
WEIGHTS_FILE = "weights.h5"  # File that stores the best weights.
# Datasets' paths we want to work on.
PATH_LIST = ['eden_data/Tomato-240519-Healthy-zz-V1-20210225103740',
'eden_data/Black nightsade-220519-Weed-zz-V1-20210225102034']
# Define paths in an OS-agnostic way.
for i, path in enumerate(PATH_LIST):
    PATH_LIST[i] = str(Path(Path.cwd()).parents[0].joinpath(path))
X, y = read_data(PATH_LIST, IM_SIZE)
100%|██████████| 200/200 [01:16<00:00, 2.63it/s]
100%|██████████| 123/123 [00:31<00:00, 3.92it/s]
# Class 0
plot_sample(X[:50])
# Class 1
plot_sample(X[-50:])
def get_architecture(y):
    feature_extractor = Xception(
        weights="imagenet",       # Load weights pre-trained on ImageNet.
        include_top=False,        # Do not include the ImageNet classifier at the top.
        input_shape=INPUT_SHAPE)
    # Freeze the base model; we don't want to update the pre-trained weights.
    feature_extractor.trainable = False
    # Create a new model on top.
    x = Flatten(name="flatten")(feature_extractor.output)  # Flattening layer.
    x = Dense(units=100, activation="relu")(x)  # Add a fully connected layer.
    x = Dropout(0.5)(x)  # Regularize with dropout.
    # Create a classifier with shape=number_of_training_classes.
    out = Dense(units=y.shape[1],
                activation="softmax")(x)
    # This is the final model.
    model = Model(feature_extractor.input, out)
    # Define a base learning rate for the Adam optimizer.
    base_learning_rate = 0.001
    model.compile(loss="categorical_crossentropy",
                  optimizer=Adam(learning_rate=base_learning_rate),
                  metrics=["accuracy"])
    return model
X_prep = preprocess_input(X)
X_train, X_test, y_train, y_test = train_test_split(X_prep, y,
                                                    test_size=TEST_SPLIT,
                                                    random_state=RANDOM_STATE)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train,
                                                  test_size=VAL_SPLIT,
                                                  random_state=RANDOM_STATE)
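Note that the second split takes the validation set out of the training portion, so the effective validation fraction is 0.15 × 0.85 ≈ 12.75% of all data. A back-of-the-envelope check of the resulting set sizes (plain Python, assuming the 323 images read above and scikit-learn's behaviour of rounding the test share up):

```python
import math

n = 323                       # 200 tomato + 123 black nightshade images.
n_test = math.ceil(n * 0.15)  # train_test_split rounds the test share up.
n_rest = n - n_test
n_val = math.ceil(n_rest * 0.15)  # Second split, taken from the training portion.
n_train = n_rest - n_val
print(n_train, n_val, n_test)  # 232 42 49
```

With a batch size of 48, 232 training samples give the 5 steps per epoch ("5/5") seen in the training log below.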
model = get_architecture(y)
%%time
history = model.fit(X_train,  # Training data.
                    y_train,  # Labels.
                    batch_size=BATCH_SIZE,
                    epochs=EPOCHS,
                    validation_data=(X_val, y_val),
                    callbacks=get_callbacks(WEIGHTS_FILE,
                                            EPOCHS // 5,
                                            0.25))
Epoch 1/50
5/5 [==============================] - 18s 4s/step - loss: 2.9088 - accuracy: 0.7241 - val_loss: 2.3405 - val_accuracy: 0.8333
Epoch 2/50
5/5 [==============================] - 17s 3s/step - loss: 1.5562 - accuracy: 0.8879 - val_loss: 2.0556e-05 - val_accuracy: 1.0000
Epoch 3/50
5/5 [==============================] - 17s 3s/step - loss: 0.1747 - accuracy: 0.9741 - val_loss: 0.6696 - val_accuracy: 0.9524
Epoch 4/50
5/5 [==============================] - 18s 4s/step - loss: 0.2033 - accuracy: 0.9828 - val_loss: 0.5602 - val_accuracy: 0.9524
Epoch 5/50
5/5 [==============================] - 19s 4s/step - loss: 0.2403 - accuracy: 0.9871 - val_loss: 0.3785 - val_accuracy: 0.9762
Epoch 6/50
5/5 [==============================] - 20s 4s/step - loss: 0.0508 - accuracy: 0.9914 - val_loss: 0.3851 - val_accuracy: 0.9762
Epoch 7/50
5/5 [==============================] - ETA: 0s - loss: 0.0376 - accuracy: 0.9914
Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.
5/5 [==============================] - 22s 4s/step - loss: 0.0376 - accuracy: 0.9914 - val_loss: 0.2917 - val_accuracy: 0.9762
Epoch 8/50
5/5 [==============================] - 20s 4s/step - loss: 3.5273e-05 - accuracy: 1.0000 - val_loss: 0.2370 - val_accuracy: 0.9762
Epoch 9/50
5/5 [==============================] - 21s 4s/step - loss: 1.0431e-07 - accuracy: 1.0000 - val_loss: 0.2031 - val_accuracy: 0.9762
Epoch 10/50
5/5 [==============================] - 21s 4s/step - loss: 5.2411e-08 - accuracy: 1.0000 - val_loss: 0.1820 - val_accuracy: 0.9762
Epoch 11/50
5/5 [==============================] - 22s 4s/step - loss: 0.0103 - accuracy: 0.9957 - val_loss: 0.1379 - val_accuracy: 0.9762
Epoch 12/50
5/5 [==============================] - ETA: 0s - loss: 2.5692e-09 - accuracy: 1.0000
Epoch 00012: ReduceLROnPlateau reducing learning rate to 6.25000029685907e-05.
5/5 [==============================] - 21s 4s/step - loss: 2.5692e-09 - accuracy: 1.0000 - val_loss: 0.0990 - val_accuracy: 0.9762
Epoch 00012: early stopping
Wall time: 4min 48s
# Plotting the learning curves.
plot_training_curves(history)
# Load optimal weights computed during training.
model.load_weights(WEIGHTS_FILE)
# Make predictions on the test set and print the model's micro-averaged F1 score.
f1_score(denormalise(y_test),
         denormalise(model.predict(X_test)),
         average='micro')
1.0
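For single-label classification, where each sample contributes exactly one prediction, micro-averaged F1 is identical to plain accuracy: every wrong prediction counts as one false positive (for the predicted class) and one false negative (for the true class). A small pure-Python check, with hypothetical labels rather than the test set above:

```python
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]

# Micro-averaging pools TP/FP/FN over all classes. In single-label problems
# each error is one FP and one FN, so micro precision = micro recall = accuracy.
tp = sum(t == p for t, p in zip(y_true, y_pred))
fp = fn = len(y_true) - tp
micro_f1 = 2 * tp / (2 * tp + fp + fn)
accuracy = tp / len(y_true)
print(micro_f1, accuracy)  # 0.6666666666666666 0.6666666666666666
```

So the 1.0 above simply means every test image was classified correctly.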
Bengio, Y. (2012). Deep Learning of Representations for Unsupervised and Transfer Learning. Journal of Machine Learning Research: Workshop and Conference Proceedings, 17–37.
Wang, G., Sun, Y., & Wang, J. (2017). Automatic Image-Based Plant Disease Severity Estimation Using Deep Learning. Computational Intelligence and Neuroscience, 2017, 8.
Mehdipour-Ghazi, M., Yanikoglu, B.A., & Aptoula, E. (2017). Plant identification using deep neural networks via optimization of transfer learning parameters. Neurocomputing, 235, 228–235.
Suh, H.K., IJsselmuiden, J., Hofstee, J.W., & van Henten, E.J. (2018). Transfer learning for the classification of sugar beet and volunteer potato under field conditions. Biosystems Engineering, 174, 50–65.
Kounalakis, T., Triantafyllidis, G.A., & Nalpantidis, L. (2019). Deep learning-based visual recognition of rumex for robotic precision farming. Computers and Electronics in Agriculture.
Too, E.C., Yujian, L., Njuki, S., & Ying-chun, L. (2019). A comparative study of fine-tuning deep learning models for plant disease identification. Computers and Electronics in Agriculture, 161, 272–279.
Espejo-Garcia, B., Mylonas, N., Athanasakos, L., & Fountas, S. (2020). Improving Weeds Identification with a Repository of Agricultural Pre-trained Deep Neural Networks. Computers and Electronics in Agriculture, 175.