Finally, we also release our models and datasets.
| Model | Description | Link |
|---|---|---|
| 11_vs_11_selfplay_last | EDG agent | https://storage.googleapis.com/narya-bucket-1/models/11_vs_11_selfplay_last |
| deep_homo_model.h5 | Direct homography estimation architecture | https://storage.googleapis.com/narya-bucket-1/models/deep_homo_model.h5 |
| deep_homo_model_1.h5 | Direct homography estimation weights | https://storage.googleapis.com/narya-bucket-1/models/deep_homo_model_1.h5 |
| keypoint_detector.h5 | Keypoint detection weights | https://storage.googleapis.com/narya-bucket-1/models/keypoint_detector.h5 |
| player_reid.pth | Player embedding weights | https://storage.googleapis.com/narya-bucket-1/models/player_reid.pth |
| player_tracker.params | Player & ball detection weights | https://storage.googleapis.com/narya-bucket-1/models/player_tracker.params |
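Each file can be fetched directly from the bucket, for example with urllib (a minimal sketch; the local target path is up to you):

```python
import urllib.request

# Download the keypoint detection weights from the release bucket
url = "https://storage.googleapis.com/narya-bucket-1/models/keypoint_detector.h5"
urllib.request.urlretrieve(url, "keypoint_detector.h5")
```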
The datasets can be downloaded at:
| Dataset | Description | Link |
|---|---|---|
| homography_dataset.zip | Homography dataset (image, homography pairs) | https://storage.googleapis.com/narya-bucket-1/dataset/homography_dataset.zip |
| keypoints_dataset.zip | Keypoint dataset (image, list of masks) | https://storage.googleapis.com/narya-bucket-1/dataset/keypoints_dataset.zip |
| tracking_dataset.zip | Tracking dataset in VOC format (image, bounding boxes for players/ball) | https://storage.googleapis.com/narya-bucket-1/dataset/tracking_dataset.zip |
Homography Dataset: The homography dataset consists of (image, homography) pairs stored as .jpg and .npy files, where each matrix is the homography associated with its image. The matrices are normalized, meaning that homography[2,2] == 1.
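For example, loading and checking one pair could look like this (a minimal sketch; the file names are placeholders for any pair in the archive):

```python
import numpy as np
from PIL import Image

# Load one (image, homography) pair; the file names are placeholders
image = np.array(Image.open("frame_0001.jpg"))
homography = np.load("frame_0001.npy")  # shape (3, 3)

# The matrices are normalized so that the bottom-right entry is 1
assert np.isclose(homography[2, 2], 1.0)
```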
Keypoints Dataset: Pairs of image and .xml files. Each .xml file contains the coordinates of every keypoint available in the image. We provide utility functions to read these files, and our Dataset class does so automatically.
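As an illustration only, such a file can be read with xml.etree; the tag names below are assumptions, since the actual schema is handled by our utility functions:

```python
import xml.etree.ElementTree as ET

# Illustrative sketch: the tag names ("keypoint", "id", "x", "y") are
# assumptions; the repo's utility functions handle the real schema.
keypoints = {}
for kp in ET.parse("frame_0001.xml").getroot().iter("keypoint"):
    kp_id = int(kp.find("id").text)
    keypoints[kp_id] = (float(kp.find("x").text), float(kp.find("y").text))
```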
Tracking Dataset: Pairs of image and .xml files in the VOC format.
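Since VOC is a standard format, each annotation file can be parsed in a few lines (a minimal sketch; the file name is a placeholder):

```python
import xml.etree.ElementTree as ET

# Parse one VOC annotation into (label, [xmin, ymin, xmax, ymax]) pairs
boxes = []
for obj in ET.parse("frame_0001.xml").getroot().iter("object"):
    label = obj.find("name").text  # e.g. a player or ball class
    bb = obj.find("bndbox")
    coords = [int(float(bb.find(k).text)) for k in ("xmin", "ymin", "xmax", "ymax")]
    boxes.append((label, coords))
```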
To conclude, we give a quick tour of our training scripts, using the keypoint detection model as an example.
We start by creating a model:
```python
# opt holds the parsed command-line arguments of the training script
full_model = KeypointDetectorModel(
    backbone=opt.backbone, num_classes=29, input_shape=(320, 320),
)

if opt.weights is not None:
    full_model.load_weights(opt.weights)
```
We then create a loss function and an optimizer:
```python
# define optimizer
optim = keras.optimizers.Adam(opt.lr)

# define loss function: Dice loss combined with categorical Focal loss
dice_loss = sm.losses.DiceLoss()
focal_loss = sm.losses.CategoricalFocalLoss()
total_loss = dice_loss + (1 * focal_loss)

metrics = [sm.metrics.IOUScore(threshold=0.5), sm.metrics.FScore(threshold=0.5)]

# compile the underlying Keras model with the defined optimizer, loss and metrics
model = full_model.model
model.compile(optim, total_loss, metrics)

callbacks = [
    # name_model is the path where the best checkpoint is saved
    keras.callbacks.ModelCheckpoint(
        name_model, save_weights_only=True, save_best_only=True, mode="min"
    ),
    # reduce the learning rate when the validation loss plateaus
    keras.callbacks.ReduceLROnPlateau(
        patience=10, verbose=1, cooldown=10, min_lr=0.00000001
    ),
]

model.summary()
```
We can easily build a Dataset and a Dataloader (handling batches):
```python
x_train_dir = os.path.join(opt.data_dir, opt.x_train_dir)
kp_train_dir = os.path.join(opt.data_dir, opt.y_train_dir)
x_test_dir = os.path.join(opt.data_dir, opt.x_test_dir)
kp_test_dir = os.path.join(opt.data_dir, opt.y_test_dir)

# preprocessing_fn applies the input preprocessing matching the chosen backbone
full_dataset = KeyPointDatasetBuilder(
    img_train_dir=x_train_dir,
    img_test_dir=x_test_dir,
    mask_train_dir=kp_train_dir,
    mask_test_dir=kp_test_dir,
    batch_size=opt.batch_size,
    preprocess_input=preprocessing_fn,
)

train_dataloader, valid_dataloader = full_dataset._get_dataloader()
```
Finally, we can launch the training with:
```python
model.fit_generator(
    train_dataloader,
    steps_per_epoch=len(train_dataloader),
    epochs=opt.epochs,
    callbacks=callbacks,
    validation_data=valid_dataloader,
    validation_steps=len(valid_dataloader),
)
```
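Note that in recent versions of Keras, fit_generator is deprecated and Model.fit accepts generators directly, so the same call can also be written as:

```python
model.fit(
    train_dataloader,
    steps_per_epoch=len(train_dataloader),
    epochs=opt.epochs,
    callbacks=callbacks,
    validation_data=valid_dataloader,
    validation_steps=len(valid_dataloader),
)
```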