PP-YOLO and YOLO-NAS are both advancements in the field of object detection that build upon the original YOLO (You Only Look Once) framework, which is known for its speed and efficiency in detecting objects in images or video streams.
PP-YOLO, or PaddlePaddle YOLO, is an enhanced version of the YOLO model developed by Baidu and implemented in its PaddlePaddle framework. It aims to improve the original YOLOv3 model in both speed and accuracy, not by introducing a new architecture but by combining a collection of existing training and inference tricks, such as:

- a larger training batch size and an exponential moving average (EMA) of model weights;
- DropBlock regularization and an IoU loss in addition to the standard losses;
- an IoU-aware prediction branch and Grid Sensitive bounding-box decoding;
- Matrix NMS, CoordConv, and Spatial Pyramid Pooling (SPP);
- a ResNet50-vd backbone with deformable convolutions in place of DarkNet-53.

These improvements make PP-YOLO a competitive choice for real-time object detection, offering a good balance between speed and accuracy.
YOLO-NAS applies neural architecture search (NAS) to the YOLO framework to automatically discover network architectures that are well suited to object detection. NAS is a machine-learning technique that uses algorithms (often reinforcement learning or evolutionary methods) to automate the design of neural network architectures; the YOLO-NAS family released by Deci was produced with their AutoNAC search engine.
YOLO-NAS optimizes several parts of the YOLO architecture: the backbone network, the feature pyramid network (FPN) neck, and the detection head that predicts bounding boxes and class probabilities. By searching for the most effective configurations and structures, it enhances the performance of YOLO models without significantly increasing computational complexity. The result is more efficient models that maintain high accuracy while running faster or requiring fewer resources, making them suitable for deployment in environments with limited computational capacity.
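To make the idea of architecture search concrete, here is a toy sketch: sample candidate architectures from a small search space and keep the one that maximizes a proxy objective. The search space, the `proxy_score` function, and plain random search are all illustrative assumptions; real NAS systems (including the one behind YOLO-NAS) use far more sophisticated search strategies and evaluate candidates on real training data.

```python
import random

# Purely hypothetical search space over a YOLO-like detector.
SEARCH_SPACE = {
    "backbone_depth": [3, 4, 5],
    "fpn_channels": [64, 128, 256],
    "head_layers": [2, 3],
}

def proxy_score(arch):
    # Stand-in for an accuracy-vs-latency trade-off: reward capacity,
    # penalize cost. A real NAS objective would measure mAP and latency.
    capacity = arch["backbone_depth"] * arch["fpn_channels"] * arch["head_layers"]
    cost = arch["fpn_channels"] / 64 + arch["backbone_depth"]
    return capacity / 100 - cost

def random_search(n_trials=20, seed=0):
    # Simplest possible NAS strategy: sample, score, keep the best.
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        s = proxy_score(arch)
        if s > best_score:
            best_arch, best_score = arch, s
    return best_arch, best_score

best, score = random_search()
```

Swapping `random_search` for reinforcement learning or an evolutionary loop changes the search strategy, not the overall structure: propose, evaluate, keep the best.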
Both PP-YOLO and YOLO-NAS illustrate the ongoing efforts to improve object detection models by making them faster, more accurate, and more efficient, catering to the increasing demands of applications in surveillance, autonomous vehicles, and many other areas.
!pip install -q super-gradients
!pip install -q supervision
Cell to work around some transient problems on Google Colab
import os
import locale
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
locale.getpreferredencoding = lambda: "UTF-8"
import torch
from super_gradients.training import models
from super_gradients.common.object_names import Models
from super_gradients.training import dataloaders
from super_gradients.training.dataloaders.dataloaders import coco_detection_yolo_format_train, coco_detection_yolo_format_val
from super_gradients.training import Trainer
from super_gradients.training.losses import YoloXDetectionLoss
from super_gradients.training.losses import PPYoloELoss
from super_gradients.training.metrics import DetectionMetrics_050
from super_gradients.training.models.detection_models.pp_yolo_e import PPYoloEPostPredictionCallback
import supervision as sv
from tqdm import tqdm
import cv2
import random
import shutil
from IPython.display import clear_output
random.seed(2024)
device = 'cuda' if torch.cuda.is_available() else "cpu"
Mount Google Drive (the dataset archive is stored there)
from google.colab import drive
drive.mount("/content/gdrive/")
Mounted at /content/gdrive/
Unzip your own dataset folder in YOLO format. The dataset itself is not included in the shared folder.
%%capture
!unzip "/content/gdrive/MyDrive/Colab Notebooks/adata/detection/smart-droplets-rumex-yolo.zip"
!rm -rf __MACOSX
Detection Hyper-parameters
NUM_EXPERIMENTS = 1
CONFIDENCE_TRESHOLD = 0.6
MAX_IMAGE_COUNT = 10
BATCH_SIZE = 16
NUM_EPOCHS = 30
Filesystem Metadata
dataset_params = {
    'data_dir': '/content/smart-droplets-rumex-yolo',
    'train_images_dir': 'train/images',
    'train_labels_dir': 'train/labels',
    'val_images_dir': 'valid/images',
    'val_labels_dir': 'valid/labels',
    'test_images_dir': 'test/images',
    'test_labels_dir': 'test/labels',
    'classes': ['Rumex']
}
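Before building the dataloaders, it can be worth verifying that the directories named in `dataset_params` actually exist, since a typo here only surfaces later as a confusing dataloader error. `check_yolo_layout` is a hypothetical helper, not part of super-gradients:

```python
import os

def check_yolo_layout(params, split_keys=('train_images_dir', 'train_labels_dir',
                                          'val_images_dir', 'val_labels_dir',
                                          'test_images_dir', 'test_labels_dir')):
    """Return the list of expected directories that are missing on disk."""
    missing = []
    for key in split_keys:
        path = os.path.join(params['data_dir'], params[key])
        if not os.path.isdir(path):
            missing.append(path)
    return missing

# Example usage (an empty list means the layout is complete):
# missing = check_yolo_layout(dataset_params)
```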
Dataset and DataLoaders Metadata
train_data = coco_detection_yolo_format_train(
    dataset_params={
        'data_dir': dataset_params['data_dir'],
        'images_dir': dataset_params['train_images_dir'],
        'labels_dir': dataset_params['train_labels_dir'],
        'classes': dataset_params['classes']
    },
    dataloader_params={
        'batch_size': BATCH_SIZE,
        'num_workers': 2
    }
)
val_data = coco_detection_yolo_format_val(
    dataset_params={
        'data_dir': dataset_params['data_dir'],
        'images_dir': dataset_params['val_images_dir'],
        'labels_dir': dataset_params['val_labels_dir'],
        'classes': dataset_params['classes']
    },
    dataloader_params={
        'batch_size': BATCH_SIZE,
        'num_workers': 2
    }
)
test_data = coco_detection_yolo_format_val(
    dataset_params={
        'data_dir': dataset_params['data_dir'],
        'images_dir': dataset_params['test_images_dir'],
        'labels_dir': dataset_params['test_labels_dir'],
        'classes': dataset_params['classes']
    },
    dataloader_params={
        'batch_size': BATCH_SIZE,
        'num_workers': 2
    }
)
clear_output()
#train_data.dataset.dataset_params['transforms'][1]['DetectionRandomAffine']['degrees'] = 10.42
Training Hyper-parameters
train_params = {
    # ENABLING SILENT MODE
    'silent_mode': True,
    "average_best_models": True,
    "warmup_mode": "linear_epoch_step",
    "warmup_initial_lr": 1e-5,  # 1e-6,
    "lr_warmup_epochs": 2,
    "initial_lr": 1e-3,
    "lr_mode": "cosine",
    "cosine_final_lr_ratio": 0.1,
    "optimizer": "Adam",
    "optimizer_params": {"weight_decay": 0.0001},
    "zero_weight_decay_on_bias_and_bn": True,
    "ema": True,
    "ema_params": {"decay": 0.9, "decay_type": "threshold"},
    "max_epochs": NUM_EPOCHS,
    "mixed_precision": True,
    "loss": PPYoloELoss(
        use_static_assigner=False,
        num_classes=len(dataset_params['classes']),
        reg_max=16
    ),
    "valid_metrics_list": [
        DetectionMetrics_050(
            score_thres=0.1,
            top_k_predictions=300,
            # NOTE: num_classes needs to be defined here
            num_cls=len(dataset_params['classes']),
            normalize_targets=True,
            post_prediction_callback=PPYoloEPostPredictionCallback(
                score_threshold=0.01,
                nms_top_k=1000,
                max_predictions=300,
                nms_threshold=0.5
            )
        )
    ],
    "metric_to_watch": 'mAP@0.50'
}
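The learning-rate settings above describe a linear warmup from `warmup_initial_lr` to `initial_lr` over `lr_warmup_epochs`, followed by cosine decay down to `initial_lr * cosine_final_lr_ratio`. The sketch below mirrors the intent of those hyper-parameters, not super-gradients' exact internal implementation:

```python
import math

def lr_at_epoch(epoch, max_epochs=30, initial_lr=1e-3,
                warmup_initial_lr=1e-5, warmup_epochs=2,
                final_lr_ratio=0.1):
    """Approximate LR for a linear-warmup + cosine-decay schedule."""
    if epoch < warmup_epochs:
        # Linear interpolation during warmup.
        t = epoch / warmup_epochs
        return warmup_initial_lr + t * (initial_lr - warmup_initial_lr)
    # Cosine decay over the remaining epochs.
    t = (epoch - warmup_epochs) / max(1, max_epochs - warmup_epochs)
    final_lr = initial_lr * final_lr_ratio
    return final_lr + 0.5 * (initial_lr - final_lr) * (1 + math.cos(math.pi * t))
```

With the values used here, the rate starts at 1e-5, peaks at 1e-3 after epoch 2, and decays smoothly toward 1e-4 by epoch 30.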
Available Detectors
#Models.YOLOX_L
#Models.YOLO_NAS_L
#Models.PP_YOLOE_L
Auxiliary Functions
def get_predictions(ds, best_model):
    predictions = {}
    for image_name, image in tqdm(ds.images.items()):
        # super-gradients models expect RGB input; OpenCV loads images as BGR
        result = list(best_model.predict(cv2.cvtColor(image, cv2.COLOR_BGR2RGB),
                                         conf=CONFIDENCE_TRESHOLD))[0]
        detections = sv.Detections(
            xyxy=result.prediction.bboxes_xyxy,
            confidence=result.prediction.confidence,
            class_id=result.prediction.labels.astype(int)
        )
        predictions[image_name] = detections
    return predictions
def get_images_titles(ds, predictions):
    n = min(MAX_IMAGE_COUNT, len(ds.images))
    keys = random.sample(list(ds.images.keys()), n)
    box_annotator = sv.BoxAnnotator(thickness=20,
                                    color=sv.ColorPalette.from_hex(['#ff0000', '#00ff00', '#0000ff']),
                                    text_thickness=5,
                                    text_scale=5)
    images = []
    titles = []
    for key in keys:
        # Ground-truth annotations (left column of the grid)
        frame_with_annotations = box_annotator.annotate(
            scene=ds.images[key].copy(),
            detections=ds.annotations[key],
            skip_label=True
        )
        images.append(frame_with_annotations)
        titles.append('annotations')
        # Model predictions labelled with confidence (right column)
        labels = [f"{confidence:0.2f}" for _, mask, confidence, class_id, _ in predictions[key]]
        frame_with_predictions = box_annotator.annotate(
            scene=ds.images[key].copy(),
            detections=predictions[key],
            labels=labels
        )
        images.append(frame_with_predictions)
        titles.append('predictions')
    return images, titles, n
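Beyond the mAP reported by the trainer, a quick per-image precision/recall can be computed by greedily matching predicted boxes to ground-truth boxes by IoU. This sketch works on plain `(N, 4)` xyxy arrays (the same format carried by `sv.Detections.xyxy`); the greedy matching and the 0.5 threshold are simplifying assumptions, not the COCO evaluation protocol:

```python
import numpy as np

def iou_matrix(a, b):
    """Pairwise IoU between two sets of xyxy boxes, shapes (N, 4) and (M, 4)."""
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = np.maximum(a[:, None, :2], b[None, :, :2])   # top-left of intersection
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])   # bottom-right of intersection
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def precision_recall(pred, gt, iou_thr=0.5):
    """Greedy one-to-one matching: each GT box may be claimed at most once."""
    if len(pred) == 0:
        return 0.0, 0.0
    ious = iou_matrix(pred, gt) if len(gt) else np.zeros((len(pred), 0))
    matched_gt = set()
    tp = 0
    for i in range(len(pred)):
        if ious.shape[1] == 0:
            break
        j = int(np.argmax(ious[i]))
        if ious[i, j] >= iou_thr and j not in matched_gt:
            matched_gt.add(j)
            tp += 1
    return tp / len(pred), tp / max(1, len(gt))
```

Applied to the dictionaries produced above, `precision_recall(predictions[key].xyxy, ds.annotations[key].xyxy)` would score a single image.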
Fine-tuning the YOLO-based model
CHECKPOINT_DIR = '0_checkpoints'
EXPERIMENT_NAME = 'pp-yolo-l_run'
trainer = Trainer(experiment_name=EXPERIMENT_NAME, ckpt_root_dir=CHECKPOINT_DIR)
model = models.get(Models.PP_YOLOE_L,
                   num_classes=len(dataset_params['classes']),
                   pretrained_weights="coco")
trainer.train(model=model,
              training_params=train_params,
              train_loader=train_data,
              valid_loader=val_data)
Training log (abridged): COCO-pretrained weights for ppyoloe_l were downloaded and loaded, and checkpoints were written to 0_checkpoints/pp-yolo-l_run/RUN_20240117_131136_044001. The best validation mAP@0.50 improved steadily from 0.0002 after the first epoch to 0.6717 near the end of the 30-epoch run.
Evaluating the Detector
# Load the best checkpoint from the most recent run directory
latest_run = sorted(os.listdir(os.path.join(CHECKPOINT_DIR, EXPERIMENT_NAME)))[-1]
best_model = models.get(Models.PP_YOLOE_L,
                        num_classes=len(dataset_params['classes']),
                        checkpoint_path=os.path.join(CHECKPOINT_DIR, EXPERIMENT_NAME, latest_run, 'ckpt_best.pth'))
ds = sv.DetectionDataset.from_yolo(
    images_directory_path=f"{dataset_params['data_dir']}/test/images",
    annotations_directory_path=f"{dataset_params['data_dir']}/test/labels",
    data_yaml_path=f"{dataset_params['data_dir']}/data.yaml",
    force_masks=False
)
predictions = get_predictions(ds, best_model)
images, titles, n = get_images_titles(ds, predictions)
%matplotlib inline
sv.plot_images_grid(images=images, titles=titles, grid_size=(n, 2), size=(2 * 8, n * 8))
Inference log (abridged): the best EMA checkpoint was loaded from 0_checkpoints/pp-yolo-l_run/RUN_20240117_131136_044001/ckpt_best.pth, and prediction over the 49 test images completed in about 46 seconds. (Layer fusing can be disabled with `fuse_model=False` if it uses too much memory.)