In this notebook we test the interoperability of PyTorch tensors with clesperanto.
import torch
import numpy as np
import pyclesperanto_prototype as cle
tensor = torch.zeros((10, 10))
tensor[1:3, 1:3] = 1
tensor[5:7, 5:7] = 1
cle_tensor = cle.push(tensor)
cle_tensor
cle._ image
Calling cle.push() turns the tensor into an OpenCL array:
type(cle_tensor)
pyclesperanto_prototype._tier0._pycl.OCLArray
You can also pass a tensor directly as an argument to clesperanto functions; the tensor will be pushed to the GPU implicitly.
cle_labels = cle.label(tensor)
cle_labels
cle._ image
To turn the OpenCL image into a tensor, call its get() function. Furthermore, in the case of label images, you need to convert them to a pixel type that PyTorch accepts, for example signed 32-bit integers.
labels_tensor = torch.tensor(cle_labels.astype(np.int32).get())
type(labels_tensor)
torch.Tensor
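The dtype conversion step above can be sketched without a GPU: the small array below is a hypothetical stand-in for the result of cle_labels.get(), assuming clesperanto returns label images with an unsigned integer pixel type.

```python
import numpy as np
import torch

# hypothetical label image, standing in for cle_labels.get();
# clesperanto label images typically use unsigned integer pixels
labels = np.array([[0, 1, 1],
                   [0, 0, 2]], dtype=np.uint32)

# convert to signed 32-bit integers, a pixel type PyTorch handles well
labels_tensor = torch.tensor(labels.astype(np.int32))
print(labels_tensor.dtype)  # torch.int32
```

The same pattern applies to any unsigned label image coming out of clesperanto.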
Tensors that are stored on the GPU and managed by PyTorch need to be transferred back to CPU memory before they can be pushed to OpenCL/GPU memory. This happens transparently under the hood, but it may cost performance due to memory transfer times.
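The explicit form of that round trip looks like this; it is a minimal sketch that falls back to the CPU when no CUDA device is available, so the .cpu() call is then a no-op.

```python
import torch

t = torch.zeros((4, 4))
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
t_dev = t.to(device)

# .cpu() copies device memory back to host memory (a no-op for CPU
# tensors) - this is the transfer cle.push() would trigger implicitly
t_host = t_dev.cpu().numpy()
print(t_host.shape)
```

Keeping data on one device as long as possible avoids paying this transfer cost repeatedly.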
torch.cuda.is_available()
True
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
cuda:0
tensor.is_cuda
False
cuda_tensor = tensor.to(device)
cuda_tensor.is_cuda
True
cle.push(cuda_tensor)
cle._ image
cle.label(cuda_tensor)
cle._ image