To run any of Eden's notebooks, please check the guides on our Wiki page.
There you will find instructions on how to deploy the notebooks on your local system, on Google Colab, or on MyBinder, as well as other useful links, troubleshooting tips, and more.
For this notebook you will need to download the Cotton-100619-Healthy-zz-V1-20210225102300, Black nightsade-220519-Weed-zz-V1-20210225102034, Tomato-240519-Healthy-zz-V1-20210225103740 and Velvet leaf-220519-Weed-zz-V1-20210225104123 datasets from Eden Library, and you may want to use the eden_pytorch_transfer_learning.yml file to recreate a suitable conda environment.
Note: If you find any issues while executing the notebook, don't hesitate to open an issue on GitHub. We will try to reply as soon as possible.
Open Neural Network Exchange (ONNX) provides an open-source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types.
ONNX is widely supported and can be found in many frameworks, tools, and hardware. By enabling interoperability between different frameworks and streamlining the path from research to production, it helps increase the speed of innovation in the AI community.
ONNX Runtime is a performance-focused engine for ONNX models, which runs inference efficiently across multiple platforms and hardware (Windows, Linux, and macOS, on both CPUs and GPUs). ONNX Runtime has been shown to considerably increase performance over multiple models.
For this tutorial, you will need to install ONNX and ONNX Runtime. You can get binary builds of both with pip install onnx onnxruntime. Make sure the ONNX Runtime release you install supports your Python version (the environment used here runs Python 3.8).
In this notebook we are going to make use of the ONNX format and export our model from PyTorch to ONNX. Furthermore, we are going to use onnxruntime to run inference.
In this notebook, we are going to cover a technique called Transfer Learning, which generally refers to a process where a machine learning model is trained on one problem and afterwards reused in some way on a second, possibly related, problem (Bengio, 2012). Specifically, in deep learning, this technique is applied by training only some layers of a pre-trained network. Its promise is that training will be more efficient and, in the best case, performance will be better than that of a model trained from scratch. In this example we use the ResNet architecture and the PyTorch framework.
It is important to note that in this notebook, in addition to making use of ONNX, we are also using the PyTorch framework to design and train our neural networks. This represents an extension over the previous Eden notebooks:
# In case they are not installed on your system, run the pip installs below (Google Colab doesn't have onnx by default)
!pip install onnx
!pip install onnxruntime
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting onnx
  Downloading onnx-1.9.0-cp38-cp38-manylinux2010_x86_64.whl (12.2 MB)
Requirement already satisfied: numpy>=1.16.6 in /home/air/anaconda3/envs/eden_pytorch_transfer/lib/python3.8/site-packages (from onnx) (1.20.2)
Requirement already satisfied: typing-extensions>=3.6.2.1 in /home/air/anaconda3/envs/eden_pytorch_transfer/lib/python3.8/site-packages (from onnx) (3.7.4.3)
Collecting protobuf
  Downloading protobuf-3.17.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (1.0 MB)
Requirement already satisfied: six in /home/air/anaconda3/envs/eden_pytorch_transfer/lib/python3.8/site-packages (from onnx) (1.15.0)
Installing collected packages: protobuf, onnx
Successfully installed onnx-1.9.0 protobuf-3.17.3
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting onnxruntime
  Downloading onnxruntime-1.8.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.5 MB)
Requirement already satisfied: protobuf in /home/air/anaconda3/envs/eden_pytorch_transfer/lib/python3.8/site-packages (from onnxruntime) (3.17.3)
Collecting flatbuffers
  Downloading flatbuffers-2.0-py2.py3-none-any.whl (26 kB)
Requirement already satisfied: numpy>=1.16.6 in /home/air/anaconda3/envs/eden_pytorch_transfer/lib/python3.8/site-packages (from onnxruntime) (1.20.2)
Requirement already satisfied: six>=1.9 in /home/air/anaconda3/envs/eden_pytorch_transfer/lib/python3.8/site-packages (from protobuf->onnxruntime) (1.15.0)
Installing collected packages: flatbuffers, onnxruntime
Successfully installed flatbuffers-2.0 onnxruntime-1.8.0
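To verify the installation, a quick optional check of the installed versions and of the execution providers available to ONNX Runtime on your machine can look like this (a minimal sketch; versions and providers will differ from system to system):
# Optional: confirm that onnx and onnxruntime import correctly and inspect their versions
import onnx
import onnxruntime
print("onnx version:", onnx.__version__)
print("onnxruntime version:", onnxruntime.__version__)
# Execution providers determine where inference runs (e.g. CPUExecutionProvider, CUDAExecutionProvider)
print("Available providers:", onnxruntime.get_available_providers())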
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import torchvision
from torchvision import datasets, models, transforms
import numpy as np
import matplotlib.pyplot as plt
import time
import os
import copy
import random
import shutil
# onnx necessary packages
import torch.onnx
import onnx
import onnxruntime
plt.ion() # interactive mode
We are going to create a main data folder ('pytorch-onnx', inside the eden-library-datasets directory) that will contain the 4 different datasets. We will also split each dataset into train and validation subsets.
# Change this path to correspond to your system. It needs to point to your eden-library-datasets folder
DATA_PATH = '/home/air/Desktop/EDEN-REPO/eden_library_notebooks/eden-library-datasets'
## WARNING: this cell will MOVE the 4 datasets used here out of your eden-library-datasets directory and into a new folder created for this particular notebook.
# After running it, your original eden-library-datasets directory will no longer contain these 4 datasets.
# Change paths to suit your system (this was created for Google Colab)
if not os.path.exists(DATA_PATH):
    os.makedirs(DATA_PATH)
# Directory that will contain all of the data needed for training
notebook_dataset = os.path.join(DATA_PATH, 'pytorch-onnx')
# Create train and val folders that will host the data.
train_path = os.path.join(notebook_dataset, 'train')
if not os.path.exists(train_path):
    os.makedirs(train_path)
val_path = os.path.join(notebook_dataset, 'val')
if not os.path.exists(val_path):
    os.makedirs(val_path)
# names of the datasets we are going to use
classes = ["Black nightsade-220519-Weed-zz-V1-20210225102034", "Tomato-240519-Healthy-zz-V1-20210225103740",
"Cotton-100619-Healthy-zz-V1-20210225102300", "Velvet leaf-220519-Weed-zz-V1-20210225104123"]
num_classes = len(classes) # we will need this later
for class_name in classes:
    # Path to source folder of this class
    class_path = DATA_PATH + os.path.sep + class_name
    # Create a subfolder for each class in the validation folder
    class_val_path = val_path + os.path.sep + class_name
    os.mkdir(class_val_path)
    # Move the original folder to the train folder, created above
    class_train_path = train_path + os.path.sep + class_name
    shutil.move(class_path, train_path)
    # List of all files
    images = os.listdir(class_train_path)
    # Splitting randomly, choosing some files for validation
    valid_images = random.sample(
        images, int(round(len(images) * 0.2))
    )  # Change '0.2' to whatever train/validation split fraction you want
    # Move validation images to the validation folder
    for val_image in valid_images:
        shutil.move(
            class_train_path + os.path.sep + val_image,
            class_val_path + os.path.sep + val_image,
        )
        print("Moved ", val_image, " to validation images")
Moved DSC_0514.JPG to validation images
Moved DSC_0536.JPG to validation images
Moved DSC_0741.JPG to validation images
...
Moved DSC_0602.JPG to validation images
Moved DSC_0579.JPG to validation images
(one line per image moved to the validation set; about 100 images in total)
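As an optional sanity check (just a sketch built on the train_path, val_path and classes variables defined above), you can count how many images ended up in each split:
# Optional sanity check: count images per class in the train and val folders
for split_path in [train_path, val_path]:
    for class_name in classes:
        class_dir = os.path.join(split_path, class_name)
        print(os.path.basename(split_path), class_name, ":", len(os.listdir(class_dir)), "images")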
"""Training function. Train input model based on the parameters given.
Input:
model: model to train
criterion: loss function to be used for training
optimizer: optimizer
scheduler: learning rate scheduler
num_epochs: number of training epochs
Returns: Trained model
"""
def train_model(model, criterion, optimizer, scheduler, num_epochs=50):
    since = time.time()
    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0
    for epoch in range(num_epochs):
        print("Epoch {}/{}".format(epoch, num_epochs - 1))
        print("-" * 10)
        # Each epoch has a training and a validation phase
        for phase in ["train", "val"]:
            if phase == "train":
                model.train()  # Set model to training mode
            else:
                model.eval()  # Set model to evaluation mode
            # Reset running loss and correct-prediction count
            running_loss = 0.0
            running_corrects = 0
            # Iterate over data
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)
                # Zero the parameter gradients
                optimizer.zero_grad()
                # Forward pass; track history only in train
                with torch.set_grad_enabled(phase == "train"):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)
                    # Backward pass + optimize only if in training phase
                    if phase == "train":
                        loss.backward()
                        optimizer.step()
                # Statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
            if phase == "train":
                scheduler.step()
            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]
            print("{} Loss: {:.4f} Acc: {:.4f}".format(phase, epoch_loss, epoch_acc))
            # Deep copy the model whenever validation accuracy improves
            if phase == "val" and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
        print()
    time_elapsed = time.time() - since
    print(
        "Training complete in {:.0f}m {:.0f}s".format(
            time_elapsed // 60, time_elapsed % 60
        )
    )
    print("Best val Acc: {:4f}".format(best_acc))
    # Load best model weights
    model.load_state_dict(best_model_wts)
    return model
"""
Runs inference on a defined number of images with the specified model. Plots the images with the model's predictions.
Input:
model : model to run inference with
num_images : number of validation set images to make predictions on
Returns : Plotted images and predictions
"""
def visualize_predictions(model, num_images=6):
    was_training = model.training
    model.eval()
    images_so_far = 0
    fig = plt.figure()
    with torch.no_grad():
        for i, (inputs, labels) in enumerate(dataloaders["val"]):
            inputs = inputs.to(device)
            labels = labels.to(device)
            outputs = model(inputs)
            _, preds = torch.max(outputs, 1)
            for j in range(inputs.size()[0]):
                images_so_far += 1
                ax = plt.subplot(num_images // 2, 2, images_so_far)
                ax.axis("off")
                ax.set_title(
                    "Predicted: {}".format(class_names[preds[j]])
                    + " / Actual: {}".format(class_names[labels[j]])
                )
                imshow(inputs.cpu().data[j])
                if images_so_far == num_images:
                    model.train(mode=was_training)
                    return
        model.train(mode=was_training)
"""
Plot images
"""
def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated
First we need to load our data into our pipeline. Since our dataset is not very big we are going to apply some data augmentation in order to increase the generalization power of the network. Lastly, we are going to normalize our training and validation data for better performance and accuracy.
# Defining some augmentation techniques
data_transforms = {
    # We use Compose in order to chain together multiple transformations
    "train": transforms.Compose(
        [
            transforms.RandomResizedCrop((224, 224)),
            transforms.RandomHorizontalFlip(),
            # Converting images to tensors. PyTorch needs input in tensor form.
            transforms.ToTensor(),
            # Normalizing inputs with the ImageNet statistics expected by the pretrained torchvision models
            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ]
    ),
    "val": transforms.Compose(
        [
            transforms.Resize((224, 224)),
            transforms.CenterCrop(224),
            # Converting images to tensors. PyTorch needs input in tensor form.
            transforms.ToTensor(),
            # Normalizing inputs with the ImageNet statistics expected by the pretrained torchvision models
            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ]
    ),
}
# Loading the datasets
data_dir = notebook_dataset
image_datasets = {
    x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x])
    for x in ["train", "val"]
}
dataloaders = {
    x: torch.utils.data.DataLoader(
        image_datasets[x], batch_size=4, shuffle=True, num_workers=1
    )
    for x in ["train", "val"]
}
dataset_sizes = {x: len(image_datasets[x]) for x in ["train", "val"]}
class_names = image_datasets["train"].classes
print("Class names :", class_names)
print("Dataset_sizes : ", dataset_sizes)
# Setting up the device: CUDA GPU if available, otherwise CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
Class names : ['Black nightsade-220519-Weed-zz-V1-20210225102034', 'Cotton-100619-Healthy-zz-V1-20210225102300', 'Tomato-240519-Healthy-zz-V1-20210225103740', 'Velvet leaf-220519-Weed-zz-V1-20210225104123']
Dataset_sizes : {'train': 399, 'val': 100}
# Get a batch of training data
inputs, classes = next(iter(dataloaders["train"]))
print("Images' size: ", inputs.size())
# Make a grid from batch
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])
Images' size: torch.Size([4, 3, 224, 224])
# Loading ResNet through the torchvision API with pretrained weights
conv_net = torchvision.models.resnet18(pretrained=True)
# Freeze parameters so that their gradients are not computed in backward propagation
for param in conv_net.parameters():
    param.requires_grad = False
# Reshaping the last layers of the network
num_ftrs = conv_net.fc.in_features
print("Num of features :", num_ftrs)
# This is a manual process; check
# https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html for more details.
num_classes = 4  # same value as len(classes) computed above
# Replacing the final fully connected layer of the pretrained network
conv_net.fc = nn.Linear(num_ftrs, num_classes)
# Moving the model to the selected device (GPU if available)
conv_net = conv_net.to(device)
# Defining the loss function to be used for training
criterion = nn.CrossEntropyLoss()
# Optimize only the final layer
optimizer = optim.SGD(conv_net.fc.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 10 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1, verbose=True)
Num of features : 512
Adjusting learning rate of group 0 to 1.0000e-03.
# Avoid errors from truncated or corrupted images when loading them with PIL
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
conv_net = train_model(conv_net, criterion, optimizer, exp_lr_scheduler, num_epochs=20)
Epoch 0/19 ---------- Adjusting learning rate of group 0 to 1.0000e-03. train Loss: 1.2371 Acc: 0.5088 val Loss: 0.7453 Acc: 0.6800
Epoch 1/19 ---------- Adjusting learning rate of group 0 to 1.0000e-03. train Loss: 0.9323 Acc: 0.6190 val Loss: 0.5919 Acc: 0.7800
Epoch 2/19 ---------- Adjusting learning rate of group 0 to 1.0000e-03. train Loss: 0.6292 Acc: 0.7318 val Loss: 0.5564 Acc: 0.7700
Epoch 3/19 ---------- Adjusting learning rate of group 0 to 1.0000e-03. train Loss: 0.5477 Acc: 0.7895 val Loss: 0.5064 Acc: 0.8300
Epoch 4/19 ---------- Adjusting learning rate of group 0 to 1.0000e-03. train Loss: 0.6696 Acc: 0.7293 val Loss: 0.3128 Acc: 0.8500
Epoch 5/19 ---------- Adjusting learning rate of group 0 to 1.0000e-03. train Loss: 0.5818 Acc: 0.7644 val Loss: 0.4980 Acc: 0.7900
Epoch 6/19 ---------- Adjusting learning rate of group 0 to 1.0000e-03. train Loss: 0.5619 Acc: 0.7920 val Loss: 0.6248 Acc: 0.7500
Epoch 7/19 ---------- Adjusting learning rate of group 0 to 1.0000e-03. train Loss: 0.5491 Acc: 0.8120 val Loss: 0.5920 Acc: 0.8100
Epoch 8/19 ---------- Adjusting learning rate of group 0 to 1.0000e-03. train Loss: 0.6269 Acc: 0.7544 val Loss: 0.3171 Acc: 0.8500
Epoch 9/19 ---------- Adjusting learning rate of group 0 to 1.0000e-04. train Loss: 0.5201 Acc: 0.8095 val Loss: 0.3249 Acc: 0.8900
Epoch 10/19 ---------- Adjusting learning rate of group 0 to 1.0000e-04. train Loss: 0.5775 Acc: 0.7945 val Loss: 0.4130 Acc: 0.8300
Epoch 11/19 ---------- Adjusting learning rate of group 0 to 1.0000e-04. train Loss: 0.4967 Acc: 0.8195 val Loss: 0.3674 Acc: 0.8400
Epoch 12/19 ---------- Adjusting learning rate of group 0 to 1.0000e-04. train Loss: 0.5315 Acc: 0.8095 val Loss: 0.3660 Acc: 0.8700
Epoch 13/19 ---------- Adjusting learning rate of group 0 to 1.0000e-04. train Loss: 0.5024 Acc: 0.8070 val Loss: 0.2591 Acc: 0.9000
Epoch 14/19 ---------- Adjusting learning rate of group 0 to 1.0000e-04. train Loss: 0.4519 Acc: 0.8070 val Loss: 0.3814 Acc: 0.8500
Epoch 15/19 ---------- Adjusting learning rate of group 0 to 1.0000e-04. train Loss: 0.4382 Acc: 0.8396 val Loss: 0.3054 Acc: 0.8600
Epoch 16/19 ---------- Adjusting learning rate of group 0 to 1.0000e-04. train Loss: 0.4600 Acc: 0.8095 val Loss: 0.2740 Acc: 0.8800
Epoch 17/19 ---------- Adjusting learning rate of group 0 to 1.0000e-04. train Loss: 0.4654 Acc: 0.8070 val Loss: 0.3624 Acc: 0.8700
Epoch 18/19 ---------- Adjusting learning rate of group 0 to 1.0000e-04. train Loss: 0.4763 Acc: 0.8421 val Loss: 0.4194 Acc: 0.8500
Epoch 19/19 ---------- Adjusting learning rate of group 0 to 1.0000e-05. train Loss: 0.4027 Acc: 0.8672 val Loss: 0.3007 Acc: 0.9100
Training complete in 32m 15s
Best val Acc: 0.910000
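Before exporting to ONNX, you may also want to keep a native PyTorch checkpoint of the fine-tuned weights. A minimal sketch (the file name plant_classifier.pth is arbitrary):
# Optionally save the fine-tuned weights as a regular PyTorch checkpoint
torch.save(conv_net.state_dict(), "plant_classifier.pth")
# They can later be restored with:
# conv_net.load_state_dict(torch.load("plant_classifier.pth"))
# conv_net.eval()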
visualize_predictions(conv_net, 10)
plt.ioff()
plt.show()
We are now going to use the PyTorch ONNX exporter to export our model in the ONNX format. Then we are going to load the model in ONNX Runtime and run inference on some images.
# Running a forward pass with a dummy tensor; its shape fixes the input size of the exported network
x = torch.randn(1, 3, 224, 224, requires_grad=True)
x = x.to(device)
torch_out = conv_net(x)
# Export the model
torch.onnx.export(
    conv_net,                 # model being run
    x,                        # model input (or a tuple for multiple inputs)
    "Plant_classifier.onnx",  # where to save the model (can be a file or file-like object)
    export_params=True,       # store the trained parameter weights inside the model file
    opset_version=13,         # the ONNX opset version to export the model to
)
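The export above fixes the batch size to 1, i.e. the shape of the dummy input. If you want the exported model to accept variable batch sizes, torch.onnx.export also accepts input_names, output_names and dynamic_axes arguments; a hedged sketch (the tensor names "input" and "output" and the file name are our own choices):
# Optional: export with named inputs/outputs and a dynamic batch dimension
torch.onnx.export(
    conv_net,
    x,
    "Plant_classifier_dynamic.onnx",  # separate file, keeping the original export intact
    export_params=True,
    opset_version=13,
    input_names=["input"],    # arbitrary name for the input tensor
    output_names=["output"],  # arbitrary name for the output tensor
    dynamic_axes={"input": {0: "batch_size"}, "output": {0: "batch_size"}},  # dim 0 is variable
)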
# load the model with the onnx API
onnx_model = onnx.load("Plant_classifier.onnx")
onnx.checker.check_model(onnx_model)
# Print a human readable representation of the onnx graph
print(onnx.helper.printable_graph(onnx_model.graph))
graph torch-jit-export ( %input.1[FLOAT, 1x3x224x224] ) initializers ( %fc.weight[FLOAT, 4x512] %fc.bias[FLOAT, 4] %193[FLOAT, 64x3x7x7] %194[FLOAT, 64] %196[FLOAT, 64x64x3x3] %197[FLOAT, 64] %199[FLOAT, 64x64x3x3] %200[FLOAT, 64] %202[FLOAT, 64x64x3x3] %203[FLOAT, 64] %205[FLOAT, 64x64x3x3] %206[FLOAT, 64] %208[FLOAT, 128x64x3x3] %209[FLOAT, 128] %211[FLOAT, 128x128x3x3] %212[FLOAT, 128] %214[FLOAT, 128x64x1x1] %215[FLOAT, 128] %217[FLOAT, 128x128x3x3] %218[FLOAT, 128] %220[FLOAT, 128x128x3x3] %221[FLOAT, 128] %223[FLOAT, 256x128x3x3] %224[FLOAT, 256] %226[FLOAT, 256x256x3x3] %227[FLOAT, 256] %229[FLOAT, 256x128x1x1] %230[FLOAT, 256] %232[FLOAT, 256x256x3x3] %233[FLOAT, 256] %235[FLOAT, 256x256x3x3] %236[FLOAT, 256] %238[FLOAT, 512x256x3x3] %239[FLOAT, 512] %241[FLOAT, 512x512x3x3] %242[FLOAT, 512] %244[FLOAT, 512x256x1x1] %245[FLOAT, 512] %247[FLOAT, 512x512x3x3] %248[FLOAT, 512] %250[FLOAT, 512x512x3x3] %251[FLOAT, 512] ) { %192 = Conv[dilations = [1, 1], group = 1, kernel_shape = [7, 7], pads = [3, 3, 3, 3], strides = [2, 2]](%input.1, %193, %194) %125 = Relu(%192) %126 = MaxPool[ceil_mode = 0, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [2, 2]](%125) %195 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%126, %196, %197) %129 = Relu(%195) %198 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%129, %199, %200) %132 = Add(%198, %126) %133 = Relu(%132) %201 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%133, %202, %203) %136 = Relu(%201) %204 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%136, %205, %206) %139 = Add(%204, %133) %140 = Relu(%139) %207 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [2, 2]](%140, %208, %209) %143 = Relu(%207) %210 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%143, %211, %212) %213 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [2, 2]](%140, %214, %215) %148 = Add(%210, %213) %149 = Relu(%148) %216 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%149, %217, %218) %152 = Relu(%216) %219 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%152, %220, %221) %155 = Add(%219, %149) %156 = Relu(%155) %222 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [2, 2]](%156, %223, %224) %159 = Relu(%222) %225 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%159, %226, %227) %228 = Conv[dilations = [1, 1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [2, 2]](%156, %229, %230) %164 = Add(%225, %228) %165 = Relu(%164) %231 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%165, %232, %233) %168 = Relu(%231) %234 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%168, %235, %236) %171 = Add(%234, %165) %172 = Relu(%171) %237 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [2, 2]](%172, %238, %239) %175 = Relu(%237) %240 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%175, %241, %242) %243 = Conv[dilations = [1, 
1], group = 1, kernel_shape = [1, 1], pads = [0, 0, 0, 0], strides = [2, 2]](%172, %244, %245) %180 = Add(%240, %243) %181 = Relu(%180) %246 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%181, %247, %248) %184 = Relu(%246) %249 = Conv[dilations = [1, 1], group = 1, kernel_shape = [3, 3], pads = [1, 1, 1, 1], strides = [1, 1]](%184, %250, %251) %187 = Add(%249, %181) %188 = Relu(%187) %189 = GlobalAveragePool(%188) %190 = Flatten[axis = 1](%189) %191 = Gemm[alpha = 1, beta = 1, transB = 1](%190, %fc.weight, %fc.bias) return %191 }
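As a sanity check, you can verify that ONNX Runtime reproduces the PyTorch output on the dummy input used for the export. This is a minimal sketch following common practice; small numerical differences between the two runtimes are expected, hence the tolerances:
# Compare PyTorch and ONNX Runtime outputs on the dummy input x
ort_session = onnxruntime.InferenceSession("Plant_classifier.onnx")
ort_inputs = {ort_session.get_inputs()[0].name: x.detach().cpu().numpy()}
ort_outs = ort_session.run(None, ort_inputs)
np.testing.assert_allclose(
    torch_out.detach().cpu().numpy(), ort_outs[0], rtol=1e-03, atol=1e-05
)
print("PyTorch and ONNX Runtime outputs match within tolerance")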
# Pre-processing an image to feed it as input to the model.
# Point the path to one of the images on your system
from PIL import Image
IMAGE_PATH = "/home/air/Desktop/EDEN-REPO/eden_library_notebooks/eden-library-datasets/pytorch-onnx/val/Cotton-100619-Healthy-zz-V1-20210225102300/DSC_0653.JPG"
img = Image.open(IMAGE_PATH)  # Define an image path for inference
resize = transforms.Resize([224, 224])
img = resize(img)
to_tensor = transforms.ToTensor()
img = to_tensor(img)
img.unsqueeze_(0)
tensor([[[[0.5255, 0.5216, 0.5216, ..., 0.8314, 0.8275, 0.7961], [0.5020, 0.5216, 0.5529, ..., 0.8353, 0.8431, 0.8392], [0.4980, 0.5490, 0.5922, ..., 0.7882, 0.8157, 0.8353], ..., [0.6588, 0.6627, 0.6471, ..., 0.7529, 0.7490, 0.7961], [0.6745, 0.6784, 0.6627, ..., 0.7843, 0.7843, 0.8196], [0.6784, 0.6824, 0.6745, ..., 0.7961, 0.7882, 0.8118]], [[0.4824, 0.4902, 0.4902, ..., 0.7765, 0.7686, 0.7412], [0.4706, 0.4902, 0.5137, ..., 0.7804, 0.7882, 0.7765], [0.4706, 0.5098, 0.5569, ..., 0.7373, 0.7569, 0.7686], ..., [0.6235, 0.6275, 0.6157, ..., 0.7098, 0.6980, 0.7333], [0.6314, 0.6314, 0.6235, ..., 0.7333, 0.7255, 0.7569], [0.6353, 0.6314, 0.6275, ..., 0.7451, 0.7373, 0.7569]], [[0.4157, 0.4275, 0.4196, ..., 0.6824, 0.6745, 0.6549], [0.3922, 0.4078, 0.4275, ..., 0.6824, 0.6941, 0.6863], [0.3804, 0.4196, 0.4627, ..., 0.6549, 0.6784, 0.6902], ..., [0.5686, 0.5725, 0.5569, ..., 0.6353, 0.6275, 0.6667], [0.5725, 0.5725, 0.5569, ..., 0.6627, 0.6588, 0.6941], [0.5765, 0.5725, 0.5686, ..., 0.6745, 0.6627, 0.6941]]]])
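Note that the training and validation transforms normalized the images with ImageNet statistics, whereas the pre-processing above does not. For consistency with how the model was trained, you may prefer to apply the same normalization before inference; a minimal sketch (the variable name img_normalized is ours, and it can be fed to the ONNX Runtime session below in place of img):
# Optional: apply the same normalization used during training/validation
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img_normalized = preprocess(Image.open(IMAGE_PATH)).unsqueeze(0)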
# Executing inference on the model
sess = onnxruntime.InferenceSession("Plant_classifier.onnx")
def to_numpy(tensor):
    return (
        tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
    )
# compute ONNX Runtime output prediction
sess_inputs = {sess.get_inputs()[0].name: to_numpy(img)}
sess_outs = sess.run(None, sess_inputs)
final_pred = np.argmax(np.array(sess_outs))
print(class_names)
print(sess_outs, "\n")
print("Model's prediction : {}".format(class_names[final_pred]))
['Black nightsade-220519-Weed-zz-V1-20210225102034', 'Cotton-100619-Healthy-zz-V1-20210225102300', 'Tomato-240519-Healthy-zz-V1-20210225103740', 'Velvet leaf-220519-Weed-zz-V1-20210225104123']
[array([[ 1.3752831, 1.7617769, -2.3463416, -1.4205171]], dtype=float32)]
Model's prediction : Cotton-100619-Healthy-zz-V1-20210225102300
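The session output above contains raw logits. If you prefer class probabilities, you can apply a softmax to the scores; a small sketch (the helper softmax is our own):
# Convert the raw logits returned by ONNX Runtime into class probabilities
def softmax(logits):
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()
probabilities = softmax(np.array(sess_outs[0][0]))
for name, prob in zip(class_names, probabilities):
    print("{}: {:.3f}".format(name, prob))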
Too, E.C., Yujian, L., Njuki, S., & Ying-chun, L. (2019). A comparative study of fine-tuning deep learning models for plant disease identification. Computers and Electronics in Agriculture, 161, 272-279.
Suh, H.K., IJsselmuiden, J., Hofstee, J.W., & van Henten, E.J. (2018). Transfer learning for the classification of sugar beet and volunteer potato under field conditions. Biosystems Engineering, 174, 50-65.
Espejo-Garcia, B., Mylonas, N., Athanasakos, L., & Fountas, S. (2020). Improving Weeds Identification with a Repository of Agricultural Pre-trained Deep Neural Networks. Computers and Electronics in Agriculture, 175.
https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html
https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html
https://www.oreilly.com/library/view/programming-pytorch-for/9781492045342/ch04.html