In PyTorch, we use the torch.Tensor object to represent a data matrix. It is a lot like a numpy array, but not quite the same. torch provides APIs to easily convert data between a numpy array and a torch.Tensor. Let's play a little bit.
from __future__ import print_function
import numpy as np
import torch
Converting a numpy array into a torch.Tensor can be done with the constructor, and bringing it back into a numpy array is a simple numpy() function call. The torch.Tensor constructor takes any Python array-like object (with elements of a uniform type), so we can also construct one from a list of integers.
# Create numpy array
data_np = np.zeros([10,10],dtype=np.float32)
# Fill something
np.fill_diagonal(data_np,1.)
print('Numpy data\n',data_np)
# Create torch.Tensor
data_torch = torch.Tensor(data_np)
print('\ntorch.Tensor data\n',data_torch)
# Bringing back into numpy array
data_np = data_torch.numpy()
print('\nNumpy data (converted back from torch.Tensor)\n',data_np)
# One can make also from a list
data_list = [1,2,3]
data_list_torch = torch.Tensor(data_list)
print('\nPython list :',data_list)
print('torch.Tensor:',data_list_torch)
Numpy data
 [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]

torch.Tensor data
 tensor([[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0., 1.]])

Numpy data (converted back from torch.Tensor)
 [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]

Python list : [1, 2, 3]
torch.Tensor: tensor([1., 2., 3.])
Ordinary array operations also exist, much like in numpy. A single scalar value can be extracted using the item() function.
# mean & std
print('mean',data_torch.mean().item(),'std',data_torch.std().item(),'sum',data_torch.sum())
mean 0.10000000149 std 0.301511347294 sum tensor(10.)
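Note that item() only works on a scalar (zero-dimensional) tensor, which is why sum() above prints as tensor(10.) without it. Reductions like mean() and sum() also accept a dim argument to reduce along a single axis, in which case they return a tensor rather than a scalar. A small sketch:

```python
import torch

t = torch.Tensor([[1., 2.], [3., 4.]])
print(t.mean().item())  # 2.5 -- scalar result, so item() works
print(t.mean(dim=0))    # tensor([2., 3.]) -- per-column mean
print(t.sum(dim=1))     # tensor([3., 7.]) -- per-row sum
```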
Common operations include element-wise multiplication, matrix multiplication, and reshaping. Read the documentation to find the right function for what you want to do!
# Two matrices
data_a = np.zeros([3,3],dtype=np.float32)
data_b = np.zeros([3,3],dtype=np.float32)
np.fill_diagonal(data_a,1.)
data_b[0,:]=1.
# print them
print('Two numpy matrices')
print(data_a)
print(data_b,'\n')
# Make torch.Tensor
torch_a = torch.Tensor(data_a)
torch_b = torch.Tensor(data_b)
print('torch.Tensor element-wise multiplication:')
print(torch_a*torch_b)
print('\ntorch.Tensor matrix multiplication:')
print(torch_a.matmul(torch_b))
Two numpy matrices
[[1. 0. 0.]
 [0. 1. 0.]
 [0. 0. 1.]]
[[1. 1. 1.]
 [0. 0. 0.]
 [0. 0. 0.]]

torch.Tensor element-wise multiplication:
tensor([[1., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])

torch.Tensor matrix multiplication:
tensor([[1., 1., 1.],
        [0., 0., 0.],
        [0., 0., 0.]])
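Reshaping, mentioned above but not demonstrated, can be done with view(). The total number of elements must stay the same, and passing -1 for one dimension lets torch infer it. A quick sketch:

```python
import torch

t = torch.arange(6.)  # tensor([0., 1., 2., 3., 4., 5.])
m = t.view(2, 3)      # reshape the flat vector into a 2x3 matrix
print(m)
print(m.view(-1))     # -1 infers the size: back to a flat vector of 6
```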
Putting a torch.Tensor on the GPU is as easy as calling the .cuda() function (and if you want to bring it back to the CPU, call .cpu() on the CUDA tensor). Let's do a simple speed comparison!
# Create 10000x10000 matrix
data_np=np.zeros([10000,10000],dtype=np.float32)
data_cpu = torch.Tensor(data_np).cpu()
data_gpu = torch.Tensor(data_np).cuda()
# Compute time in CPU
import time
t0=time.time()
mean = (data_cpu * data_cpu).mean().item()
print('Using CPU:',time.time()-t0,'[s]')
# Compute time in GPU
t0=time.time()
mean = (data_gpu * data_gpu).mean().item()
print('Using GPU:',time.time()-t0,'[s]')
Using CPU: 0.0393888950348 [s]
Using GPU: 0.00360798835754 [s]
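One caveat when timing GPU code: CUDA operations run asynchronously, so a fair measurement should call torch.cuda.synchronize() (or read a value back with item(), which forces a sync, as the cell above does) before stopping the clock. A device-agnostic sketch that falls back to the CPU when no GPU is present:

```python
import time
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
data = torch.zeros(1000, 1000, device=device)

t0 = time.time()
mean = (data * data).mean()
if device == 'cuda':
    torch.cuda.synchronize()  # wait for the GPU to finish before timing
print('Using %s: %f [s]' % (device, time.time() - t0))
print('mean =', mean.item())
```

Writing code against a device variable like this keeps the same script runnable on machines with and without a GPU.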
Using the GPU, we get more than a 10x speed-up :) Play with torch.Tensor. It will be very useful when you want to build a custom network architecture! If you have questions, contact me.