%matplotlib inline
import IPython
import openpifpaf
openpifpaf.show.Canvas.show = True
We provide two pretrained models for predicting the 133 keypoints of the COCO WholeBody dataset. The models can be selected with --checkpoint=shufflenetv2k16-wholebody or --checkpoint=shufflenetv2k30-wholebody. An example prediction is shown below.
%%bash
python -m openpifpaf.predict wholebody/soccer.jpeg \
--checkpoint=shufflenetv2k30-wholebody --line-width=2 --image-output
IPython.display.Image('wholebody/soccer.jpeg.predictions.jpeg')
Image credit: Photo by Lokomotive74, licensed under CC-BY-4.0.
Original MS COCO skeleton / COCO WholeBody skeleton
# HIDE CODE
# first make an annotation
ann_coco = openpifpaf.Annotation.from_cif_meta(
    openpifpaf.plugins.coco.CocoKp().head_metas[0])
ann_wholebody = openpifpaf.Annotation.from_cif_meta(
    openpifpaf.plugins.wholebody.Wholebody().head_metas[0])

# visualize the annotation
openpifpaf.show.KeypointPainter.show_joint_scales = False
openpifpaf.show.KeypointPainter.line_width = 3
keypoint_painter = openpifpaf.show.KeypointPainter()
with openpifpaf.show.Canvas.annotation(ann_wholebody, ncols=2) as (ax1, ax2):
    keypoint_painter.annotation(ax1, ann_coco)
    keypoint_painter.annotation(ax2, ann_wholebody)
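For orientation, the 133 WholeBody keypoints extend the original 17 COCO body keypoints with foot, face and hand keypoints. A quick sanity check of that split (this breakdown is standard COCO WholeBody, not plugin-specific code):

```python
# Keypoint groups of COCO WholeBody:
# the 17 original COCO body keypoints, plus 6 foot, 68 face,
# 21 left-hand and 21 right-hand keypoints.
groups = {"body": 17, "foot": 6, "face": 68, "left_hand": 21, "right_hand": 21}
total = sum(groups.values())
print(total)  # 133
```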
If you don't want to use the pretrained models, you can train a model from scratch. To train, you first need to download the WholeBody annotations into your MS COCO dataset folder:
wget https://github.com/DuncanZauss/openpifpaf_assets/releases/download/v0.1.0/person_keypoints_train2017_wholebody_pifpaf_style.json -P /<PathToYourMSCOCO>/data-mscoco/annotations
wget https://github.com/DuncanZauss/openpifpaf_assets/releases/download/v0.1.0/person_keypoints_val2017_wholebody_pifpaf_style.json -P /<PathToYourMSCOCO>/data-mscoco/annotations
Note: The pifpaf-style annotation files were created with Get_annotations_from_coco_wholebody.py. If you want to create your own annotation files from COCO WholeBody, you need to download the original files from the COCO WholeBody page and then create the pifpaf-readable json files with Get_annotations_from_coco_wholebody.py. This can be useful if, for example, you only want to use a subset of images for training.
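After downloading, a quick sanity check of the annotation files can catch a truncated download. This is a hypothetical sketch, assuming the pifpaf-style files follow the standard COCO annotation layout, where each person's "keypoints" list holds x, y, visibility triplets for all 133 keypoints (133 * 3 = 399 numbers); the helper name and the synthetic example are ours, not part of OpenPifPaf:

```python
# Hypothetical sanity check for a COCO-style annotation dict with
# 133 keypoints per person (399 numbers per "keypoints" list).
def check_annotations(coco_dict, n_keypoints=133):
    for ann in coco_dict["annotations"]:
        assert len(ann["keypoints"]) == 3 * n_keypoints, ann["id"]
    return len(coco_dict["annotations"])

# minimal synthetic example instead of the real (large) downloaded file;
# for the real file, load it first with json.load(open(path)):
fake = {"annotations": [{"id": 1, "keypoints": [0.0] * (3 * 133)}]}
print(check_annotations(fake))  # 1
```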
Finally you can train the model (note: this can take several days, even on good GPUs):
python3 -m openpifpaf.train --lr=0.0001 --momentum=0.95 --b-scale=3.0 \
  --epochs=150 --lr-decay 130 140 --lr-decay-epochs=10 \
  --batch-size=16 --weight-decay=1e-5 \
  --dataset=wholebody --wholebody-upsample=2 \
  --basenet=shufflenetv2k16 --loader-workers=16 \
  --wholebody-train-annotations=<PathToYourMSCOCO>/data-mscoco/annotations/person_keypoints_train2017_wholebody_pifpaf_style.json \
  --wholebody-val-annotations=<PathToYourMSCOCO>/data-mscoco/annotations/person_keypoints_val2017_wholebody_pifpaf_style.json \
  --wholebody-train-image-dir=<COCO_train_image_dir> \
  --wholebody-val-image-dir=<COCO_val_image_dir>
To evaluate your network, you can use the following command. Important note: for evaluation you will need the original validation annotation file from COCO WholeBody, which has a different format than the files used for training. This format is required because evaluation uses the extended pycocotools proposed by the authors of COCO WholeBody. You can run the evaluation with:
python3 -m openpifpaf.eval --dataset=wholebody \
  --checkpoint=shufflenetv2k16-wholebody \
  --force-complete-pose --seed-threshold=0.2 --loader-workers=16 \
  --wholebody-val-annotations=<PathToTheOriginalCOCOWholeBodyAnnotations>/coco_wholebody_val_v1.0.json \
  --wholebody-val-image-dir=<COCO_val_image_dir>
If you only want to train on a subset of keypoints, e.g. if you do not need the facial keypoints and only want to train on the body, foot and hand keypoints, it should be fairly easy to train on this subset. You will need to set ann_types in Get_annotations_from_coco_wholebody.py to the keypoints that you would like to use and create the train and val json files. You can use Visualize_annotations.py to verify that the json files were created correctly.
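The filtering itself amounts to slicing out the unwanted keypoint groups from each annotation's flat keypoints list. The following is a hypothetical sketch of that idea (the function and the stand-in data are ours, not from Get_annotations_from_coco_wholebody.py); it assumes the WholeBody keypoint order of 17 body, 6 foot, 68 face, 21 left-hand and 21 right-hand keypoints, stored as flat x, y, visibility triplets:

```python
# Hypothetical sketch: drop the 68 facial keypoints from a flat
# COCO-style keypoints list (x, y, v triplets). Assumes WholeBody
# ordering: 17 body, 6 foot, 68 face, 21 left hand, 21 right hand.
FACE_START, FACE_END = 17 + 6, 17 + 6 + 68  # keypoint indices 23..90

def drop_face(keypoints):
    triplets = [keypoints[i:i + 3] for i in range(0, len(keypoints), 3)]
    kept = triplets[:FACE_START] + triplets[FACE_END:]
    return [v for t in kept for v in t]

full = [0.0] * (133 * 3)           # stand-in for a real annotation
reduced = drop_face(full)
print(len(reduced) // 3)           # 65 keypoints: body + foot + hands
```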