Programmatically use OpenPifPaf to run multi-person pose estimation on an image.
The API is for more advanced use cases. Please read {doc}`predict_cli` as well.
import io
import numpy as np
import openpifpaf
import PIL
import requests
import torch
%matplotlib inline
openpifpaf.show.Canvas.show = True
device = torch.device('cpu')
# device = torch.device('cuda') # if cuda is available
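# a hedged alternative (not in the original): pick the device automatically
# device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')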
print(openpifpaf.__version__)
print(torch.__version__)
Image credit: "Learning to surf" by fotologic, which is licensed under CC-BY-2.0.
image_response = requests.get('https://raw.githubusercontent.com/vita-epfl/openpifpaf/master/docs/coco/000000081988.jpg')
pil_im = PIL.Image.open(io.BytesIO(image_response.content)).convert('RGB')
im = np.asarray(pil_im)
with openpifpaf.show.image_canvas(im) as ax:
    pass
net_cpu, _ = openpifpaf.network.Factory(checkpoint='shufflenetv2k16', download_progress=False).factory()
net = net_cpu.to(device)
# decoder configuration: only start new poses from CIF seeds above this confidence
openpifpaf.decoder.utils.CifSeeds.threshold = 0.5
# suppress individual keypoints below this confidence
openpifpaf.decoder.utils.nms.Keypoints.keypoint_threshold = 0.2
# drop pose instances whose score is below this threshold
openpifpaf.decoder.utils.nms.Keypoints.instance_threshold = 0.2
processor = openpifpaf.decoder.factory([hn.meta for hn in net_cpu.head_nets])
Specify the image preprocessing. Beyond the default transforms, we also use `CenterPadTight(16)`, which adds padding to the image such that both the height and width are multiples of 16 plus 1. With this padding, the feature map covers the entire image. Without it, there would be a gap on the right and bottom of the image that the feature map does not cover.
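As a quick illustration of that rule (my own arithmetic based on the statement above, not code from the library), a hypothetical 640x427 image would be padded to 641x433:
import math

def padded_size(side, base=16):
    # smallest value >= side that is a multiple of `base` plus 1
    return math.ceil((side - 1) / base) * base + 1

print(padded_size(640), padded_size(427))  # 641 433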
preprocess = openpifpaf.transforms.Compose([
openpifpaf.transforms.NormalizeAnnotations(),
openpifpaf.transforms.CenterPadTight(16),
openpifpaf.transforms.EVAL_TRANSFORM,
])
data = openpifpaf.datasets.PilImageList([pil_im], preprocess=preprocess)
loader = torch.utils.data.DataLoader(
data, batch_size=1, pin_memory=True,
collate_fn=openpifpaf.datasets.collate_images_anns_meta)
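The same pattern scales to multiple images: pass more PIL images to PilImageList and raise the batch size. A minimal sketch (reusing the same image twice purely for illustration); processor.batch() then returns one list of predictions per image in the batch, which is why the loop below indexes [0] in the single-image case:
data_multi = openpifpaf.datasets.PilImageList([pil_im, pil_im], preprocess=preprocess)
loader_multi = torch.utils.data.DataLoader(
    data_multi, batch_size=2, pin_memory=True,
    collate_fn=openpifpaf.datasets.collate_images_anns_meta)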
annotation_painter = openpifpaf.show.AnnotationPainter()
for images_batch, _, __ in loader:
    # processor.batch() returns one list of predictions per image; [0] selects the first (and only) image
    predictions = processor.batch(net, images_batch, device=device)[0]
    with openpifpaf.show.image_canvas(im) as ax:
        annotation_painter.annotations(ax, predictions)
Each prediction in the `predictions` list above is of type `Annotation`. You can access the joint coordinates in the `data` attribute. It is a numpy array that contains the $x$ and $y$ coordinates and the confidence for every joint:
predictions[0].data
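To work with these values directly, you can pair each row with its joint name. A minimal sketch, assuming the `Annotation`'s `keypoints` attribute lists the joint names in the same order as the rows of `data` (verify against your openpifpaf version):
for ann in predictions:
    for name, (x, y, c) in zip(ann.keypoints, ann.data):
        if c > 0.0:  # skip joints that were not detected
            print(f'{name}: x={x:.1f}, y={y:.1f}, confidence={c:.2f}')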
Below are visualizations of the fields. When using the API here, the visualization types are enabled individually, and then the index of every field to visualize must be specified. In the example below, the fifth CIF (left shoulder) and the fifth CAF (left shoulder to left hip) are activated. These plots are also accessible from the command line: use `--debug-indices cif:5 caf:5` to select which joints and connections to visualize.
openpifpaf.visualizer.Base.set_all_indices(['cif,caf:5:confidence'])
for images_batch, _, __ in loader:
    predictions = processor.batch(net, images_batch, device=device)[0]
openpifpaf.visualizer.Base.set_all_indices(['cif,caf:5:regression'])
for images_batch, _, __ in loader:
    predictions = processor.batch(net, images_batch, device=device)[0]
From the CIF field, a high-resolution accumulation (in the code it's called `CifHr`) is generated. This is also the basis for the seeds. Both are shown below.
openpifpaf.visualizer.Base.set_all_indices(['cif:5:hr', 'seeds'])
for images_batch, _, __ in loader:
    predictions = processor.batch(net, images_batch, device=device)[0]
Starting from a seed, the poses are constructed. At every joint position, an occupancy map marks whether a previous pose has already been constructed there. This reduces the number of poses that are constructed from multiple seeds for the same person. The final occupancy map is shown below:
openpifpaf.visualizer.Base.set_all_indices(['occupancy:5'])
for images_batch, _, __ in loader:
    predictions = processor.batch(net, images_batch, device=device)[0]
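Finally, a hedged sketch for exporting the predictions, e.g. for downstream processing, assuming the `Annotation.json_data()` method is available in your openpifpaf version:
import json

with open('predictions.json', 'w') as f:
    json.dump([ann.json_data() for ann in predictions], f)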