This module contains all the basic functions we need in other modules of the fastai library (split from core, which contains the functions that don't require PyTorch). Its documentation can easily be skipped on a first read, unless you want to know what a given function does.
from fastai.imports import *
from fastai.gen_doc.nbdoc import *
from fastai.layers import *
from fastai.torch_core import *
AdamW = partial(optim.Adam, betas=(0.9,0.99))
bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
defaults.device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
If you are trying to make fastai run on the CPU, simply change the default device: defaults.device = 'cpu'.
Alternatively, if not using wildcard imports: fastai.torch_core.defaults.device = 'cpu'.
show_doc(batch_to_half)
show_doc(flatten_model, full_name='flatten_model')
flatten_model(m)
Flattens all the layers of m into an array. This allows for easy access to the layers of the model and lets you manipulate the model as if it were an array.
m = simple_cnn([3,6,12])
m
Sequential(
  (0): Sequential(
    (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (1): ReLU(inplace)
  )
  (1): Sequential(
    (0): Conv2d(6, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (1): ReLU(inplace)
  )
  (2): Sequential(
    (0): AdaptiveAvgPool2d(output_size=1)
    (1): Flatten()
  )
)
flatten_model(m)
[Conv2d(3, 6, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)), ReLU(inplace), Conv2d(6, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)), ReLU(inplace), AdaptiveAvgPool2d(output_size=1), Flatten()]
show_doc(model2half)
Converting model parameters to half precision allows us to leverage fast FP16 arithmetic, which can speed up the computations by 2-8 times. It also reduces memory consumption, allowing us to train deeper models.
Note: Batchnorm layers are not converted to half precision as that may lead to instability in training.
m = simple_cnn([3,6,12], bn=True)
def show_params_dtype(state_dict):
    """Simple function to pretty print the dtype of the model params"""
    for wt_name, param in state_dict.items():
        print("{:<30}: {}".format(wt_name, str(param.dtype)))
    print()
print("dtypes of model parameters before model2half: ")
show_params_dtype(m.state_dict())
# Converting model to half precision
m_half = model2half(m)
print("dtypes of model parameters after model2half: ")
show_params_dtype(m_half.state_dict())
dtypes of model parameters before model2half:
0.0.weight : torch.float32
0.2.weight : torch.float32
0.2.bias : torch.float32
0.2.running_mean : torch.float32
0.2.running_var : torch.float32
0.2.num_batches_tracked : torch.int64
1.0.weight : torch.float32
1.0.bias : torch.float32

dtypes of model parameters after model2half:
0.0.weight : torch.float16
0.2.weight : torch.float32
0.2.bias : torch.float32
0.2.running_mean : torch.float32
0.2.running_var : torch.float32
0.2.num_batches_tracked : torch.int64
1.0.weight : torch.float16
1.0.bias : torch.float16
show_doc(np2model_tensor)
It is a wrapper on top of Pytorch's torch.as_tensor which converts a numpy array to a torch tensor, and additionally attempts to map all floats to torch.float32 and all integers to torch.int64 for consistency in model data. Below is an example demonstrating its functionality for floating-point numbers; the same applies to integers as well.
a1 = np.ones((2, 3)).astype(np.float16)
a2 = np.ones((2, 3)).astype(np.float32)
a3 = np.ones((2, 3)).astype(np.float64)
b1 = np2model_tensor(a1) # Maps to torch.float32
b2 = np2model_tensor(a2) # Maps to torch.float32
b3 = np2model_tensor(a3) # Maps to torch.float32
print(f"Datatype of as': {a1.dtype}, {a2.dtype}, {a3.dtype}")
print(f"Datatype of bs': {b1.dtype}, {b2.dtype}, {b3.dtype}")
Datatype of as': float16, float32, float64
Datatype of bs': torch.float32, torch.float32, torch.float32
show_doc(requires_grad)
requires_grad(m:Module, b:Optional[bool]=None) → Optional[bool]
If b is not set, return requires_grad of the first param; else set requires_grad on all params to b.
Performs both getting and setting of the requires_grad attribute of the model's tensors, which decides whether gradients are accumulated or not.
If b is None: the function gets requires_grad for the model's parameters; to be more specific, it returns the requires_grad of the first parameter in the model.
Else, if b is passed (a boolean value): requires_grad of all parameters of the model is set to b.
# Any Pytorch model
m = simple_cnn([3, 6, 12], bn=True)
# Get the requires_grad of model
print("requires_grad of model: {}".format(requires_grad(m)))
# Set requires_grad of all params in model to false
requires_grad(m, False)
# Get the requires_grad of model
print("requires_grad of model: {}".format(requires_grad(m)))
requires_grad of model: True
requires_grad of model: False
show_doc(tensor)
tensor(x:Any, *rest) → Tensor
Tests found for tensor: pytest -sv tests/test_torch_core.py::test_tensor_with_list, test_tensor_with_ndarray, test_tensor_with_tensor. Direct tests: test_np2model_tensor, test_tensor_array_monkey_patch.
Like torch.as_tensor, but handles lists too, and can take multiple vector elements directly.
A handy function when you want to convert any list-like object to a tensor, initialize your weights manually, and other similar cases.
NB: When passing multiple vectors, all vectors must have the same dimensions. (Obvious, but easy to forget.)
# Conversion from any numpy array
b = tensor(np.array([1, 2, 3]))
print(b, type(b))
# Passing as multiple parameters
b = tensor(1, 2, 3)
print(b, type(b))
# Passing a single list
b = tensor([1, 2, 3])
print(b, type(b))
# Can work with multiple vectors / lists
b = tensor([1, 2], [3, 4])
print(b, type(b))
tensor([1, 2, 3]) <class 'torch.Tensor'>
tensor([1, 2, 3]) <class 'torch.Tensor'>
tensor([1, 2, 3]) <class 'torch.Tensor'>
tensor([[1, 2], [3, 4]]) <class 'torch.Tensor'>
show_doc(to_cpu)
to_cpu(b:ItemsList)
Recursively map lists of tensors in b to the CPU.
A wrapper on top of Pytorch's torch.Tensor.cpu() function, which creates and returns a copy of a tensor (or a list of tensors) on the CPU. As described in Pytorch's docs, if the tensor is already on the CPU no copy is made and the original data is returned.
Useful to move all the parameters of a model to the CPU in a single call.
if torch.cuda.is_available():
    a = [torch.randn((1, 1)).cuda() for i in range(3)]
    print(a)
    print("Id of tensors in a: ")
    for i in a: print(id(i))
    # Getting a CPU version of the tensors in GPU
    b = to_cpu(a)
    print(b)
    print("Id of tensors in b:")
    for i in b: print(id(i))
    # Trying to perform to_cpu on a list of tensors already on the CPU
    c = to_cpu(b)
    print(c)
    # The tensors in c have the same ids as those in b. No copy was performed.
    print("Id of tensors in c:")
    for i in c: print(id(i))
[tensor([[-0.5932]], device='cuda:0'), tensor([[-0.2867]], device='cuda:0'), tensor([[-1.0616]], device='cuda:0')]
Id of tensors in a:
139974954993416
139977016149120
139974955521008
[tensor([[-0.5932]]), tensor([[-0.2867]]), tensor([[-1.0616]])]
Id of tensors in b:
139974954963016
139974955458280
139974955521152
[tensor([[-0.5932]]), tensor([[-0.2867]]), tensor([[-1.0616]])]
Id of tensors in c:
139974954963016
139974955458280
139974955521152
show_doc(to_data)
to_data(b:ItemsList)
Recursively map lists of items in b to their wrapped data.
Returns the data attribute of an object (or collection of objects) that inherits from the ItemBase class. Useful to examine the exact values of the data; it can also be used to work with the data outside of fastai classes.
# Default example examined
from fastai import *
from fastai.vision import *
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
# Examine the labels
ys = list(data.y)
print("Category display names: ", [ys[0], ys[-1]])
print("Unique classes internally represented as: ", to_data([ys[0], ys[-1]]))
Category display names:  [Category 7, Category 3]
Unique classes internally represented as:  [1, 0]
show_doc(to_detach)
to_detach(b:Tensors, cpu:bool=True)
Recursively detach lists of tensors in b; put them on the CPU if cpu=True.
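This is handy when you want to store intermediate results (activations gathered in a callback, for instance) without keeping the computation graph, and therefore the gradient bookkeeping, alive. A small illustrative sketch:
# A tensor that is part of a computation graph
a = torch.randn(2, 2, requires_grad=True) * 2
print(a.requires_grad)              # True: gradients are tracked through a
b = to_detach(a)                    # detached copy, moved to the CPU by default
print(b.requires_grad, b.device)
c = to_detach(a, cpu=False)         # detached, but left on its current device
print(c.requires_grad, c.device)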
show_doc(to_device)
to_device(b:Tensors, device:device)
Recursively put b on device.
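For example, moving a batch of tensors to the default device (a minimal sketch; this is a no-op when everything is already on that device):
x, y = torch.randn(4, 3), torch.zeros(4)
# to_device works recursively on lists, so both tensors are moved in one call
x, y = to_device([x, y], defaults.device)
print(x.device, y.device)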
show_doc(to_half)
Converts a tensor, or a list of tensors, to FP16, resulting in less memory consumption and faster computations with the tensor. It does not convert torch.int types to half precision.
a1 = torch.tensor([1, 2], dtype=torch.int64)
a2 = torch.tensor([1, 2], dtype=torch.int32)
a3 = torch.tensor([1, 2], dtype=torch.int16)
a4 = torch.tensor([1, 2], dtype=torch.float64)
a5 = torch.tensor([1, 2], dtype=torch.float32)
a6 = torch.tensor([1, 2], dtype=torch.float16)
print("dtype of as: ", a1.dtype, a2.dtype, a3.dtype, a4.dtype, a5.dtype, a6.dtype, sep="\t")
b1, b2, b3, b4, b5, b6 = to_half([a1, a2, a3, a4, a5, a6])
print("dtype of bs: ", b1.dtype, b2.dtype, b3.dtype, b4.dtype, b5.dtype, b6.dtype, sep="\t")
dtype of as:  torch.int64  torch.int32  torch.int16  torch.float64  torch.float32  torch.float16
dtype of bs:  torch.int64  torch.int32  torch.int16  torch.float16  torch.float16  torch.float16
show_doc(to_np)
to_np(x)
Convert a tensor to a numpy array.
Internally moves the data to the CPU and converts it to the numpy.ndarray equivalent of the torch.Tensor by calling torch.Tensor.numpy().
a = torch.tensor([1, 2], dtype=torch.float64)
if torch.cuda.is_available():
    a = a.cuda()
print(a, type(a), a.device)
b = to_np(a)
print(b, type(b))
tensor([1., 2.], dtype=torch.float64) <class 'torch.Tensor'> cpu
[1. 2.] <class 'numpy.ndarray'>
show_doc(try_int)
try_int(o:Any) → Any
Try to convert o to int, default to o if not possible.
# Converts floating point numbers to integer
print(try_int(12.5), type(try_int(12.5)))
# This is a Rank-1 ndarray, which ideally should not be converted to int
print(try_int(np.array([1.5])), try_int(np.array([1.5])).dtype)
# Numpy arrays with a single element are converted to int
print(try_int(np.array(1.5)), type(try_int(np.array(1.5))))
print(try_int(torch.tensor(2.5)), type(try_int(torch.tensor(2.5))))
# Strings are not converted to int (of course)
print(try_int("12.5"), type(try_int("12.5")))
12 <class 'int'>
[1.5] float64
1 <class 'int'>
2 <class 'int'>
12.5 <class 'str'>
show_doc(apply_init)
show_doc(apply_leaf)
show_doc(cond_init)
cond_init(m:Module, init_func:LayerFunc)
Initialize the non-batchnorm layers of m with init_func.
show_doc(in_channels)
show_doc(init_default)
init_default(m:Module, func:LayerFunc='kaiming_normal_')
Initialize m weights with func and set bias to 0.
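A minimal sketch of initializing a single layer with nn.init.kaiming_normal_ (which is also the default func):
conv = nn.Conv2d(3, 6, kernel_size=3)
conv = init_default(conv, nn.init.kaiming_normal_)
print(conv.bias.data)   # the bias has been zeroed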
show_doc(children)
show_doc(children_and_parameters)
children_and_parameters(m:Module)
Return the children of m and its direct parameters not registered in modules.
show_doc(first_layer)
first_layer(m)
Retrieve the first layer in a module m.
show_doc(last_layer)
last_layer(m)
Retrieve the last layer in a module m.
show_doc(num_children)
num_children(m:Module) → int
Get the number of children modules in m.
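For example, on the simple CNN used above (a quick sketch; the exact count depends on how the model was built):
m = simple_cnn([3, 6, 12])
print(num_children(m))   # number of direct child modules of the top-level Sequential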
show_doc(one_param)
one_param(m:Module) → Tensor
Return the first parameter of m.
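This is often used to find out which device or dtype a model lives on, since all of its parameters normally share them. A quick sketch:
m = simple_cnn([3, 6, 12])
p = one_param(m)
print(p.shape, p.device, p.dtype)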
show_doc(range_children)
show_doc(trainable_params)
trainable_params(m:Module) → ParamList
Return a list of the trainable params in m.
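Combined with requires_grad, this shows which parameters an optimizer would actually update. A small sketch:
m = simple_cnn([3, 6, 12], bn=True)
print(len(trainable_params(m)))   # all parameters are trainable by default
requires_grad(m, False)           # freeze the whole model
print(len(trainable_params(m)))   # now 0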
show_doc(bn2float)
bn2float(module)
If module is batchnorm, don't use half precision.
show_doc(set_bn_eval)
show_doc(split_no_wd_params)
This is used by the optimizer to determine which params should have weight decay applied to them when the option bn_wd=False is used in a Learner.
show_doc(log_uniform)
log_uniform(low, high, size:Optional[List[int]]=None) → FloatOrTensor
Draw 1 or shape=size random floats from a uniform dist: min=log(low), max=log(high).
log_uniform(0.5,2,(8,))
tensor([0.5775, 0.7902, 0.6087, 0.5730, 0.8057, 0.8845, 0.8975, 0.5585])
show_doc(rand_bool)
rand_bool(p:float, size:Optional[List[int]]=None) → BoolOrTensor
Draw 1 or shape=size random booleans (True occurring with probability p).
rand_bool(0.5, 8)
tensor([1, 1, 0, 1, 0, 0, 1, 0], dtype=torch.uint8)
show_doc(uniform)
uniform(low:Number, high:Number=None, size:Optional[List[int]]=None) → FloatOrTensor
Draw 1 or shape=size random floats from a uniform dist: min=low, max=high.
uniform(0,1,(8,))
tensor([0.6432, 0.3110, 0.7588, 0.7058, 0.7121, 0.8552, 0.3352, 0.2620])
show_doc(uniform_int)
uniform_int(low:int, high:int, size:Optional[List[int]]=None) → IntOrTensor
Generate an int, or a tensor of size ints, between low and high (included).
uniform_int(0,2,(8,))
tensor([0, 1, 1, 2, 1, 1, 1, 2])
show_doc(ModelOnCPU, title_level=3)
class ModelOnCPU(model:Module)
A context manager to evaluate model on the CPU inside.
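A sketch of typical usage: evaluate on the CPU inside the with block, and let the context manager move the model back to its original device when the block exits.
m = simple_cnn([3, 6, 12]).to(defaults.device)
x = torch.randn(1, 3, 8, 8)       # input stays on the CPU
with ModelOnCPU(m) as cpu_model:
    out = cpu_model(x)            # model and input are both on the CPU here
print(one_param(m).device)        # model is back on its original device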
show_doc(NoneReduceOnCPU, title_level=3)
show_doc(ParameterModule, title_level=3)
class ParameterModule
Register a lone parameter p in a module.
show_doc(data_collate)
data_collate(batch:ItemsList) → Tensor
Convert batch items to tensor data.
show_doc(get_model)
get_model(model:Module)
Return the underlying model if it is wrapped inside model (e.g. in nn.DataParallel); otherwise return model itself.
show_doc(grab_idx)
grab_idx(x, i, batch_first:bool=True)
Grab the i-th batch in x, with batch_first stating the batch dimension.
show_doc(logit)
logit(x:Tensor) → Tensor
Logit of x, clamped to avoid inf.
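The logit is the inverse of the sigmoid; the clamping keeps inputs of exactly 0 or 1 from producing infinities. A quick sketch:
p = torch.tensor([0., 0.25, 0.5, 0.75, 1.])
z = logit(p)
print(z)                  # finite values, even at 0 and 1
print(torch.sigmoid(z))   # approximately recovers p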
show_doc(logit_)
logit_(x:Tensor) → Tensor
In-place logit of x, clamped to avoid inf.
show_doc(model_type)
model_type(dtype)
Return the torch type corresponding to dtype.
show_doc(np_address)
show_doc(split_model)
If splits are layers, the model is split at those (not included) sequentially. If want_idxs is True, the corresponding indexes are returned. If splits are lists of layers, the model is split according to those.
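A small sketch using the layers-as-splits behaviour described above, splitting the simple CNN from earlier into two layer groups just before its second block:
m = simple_cnn([3, 6, 12])
groups = split_model(m, [m[1]])   # split just before the first layer of m[1]
print(len(groups))                # 2 groups: the layers before m[1], then the rest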
show_doc(split_model_idx)
split_model_idx(model:Module, idxs:Collection[int]) → ModuleList
Split model according to the indexes in idxs.
show_doc(trange_of)
trange_of(x)
Create a tensor from range_of(x).
show_doc(tensor__array__)
tensor__array__(dtype=None)
show_doc(ParameterModule.forward)
forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
show_doc(to_float)
to_float(b:Collection[Tensor]) → Collection[Tensor]
Recursively map lists of tensors in b to full precision (FP32).
show_doc(flatten_check)
flatten_check(out:Tensor, targ:Tensor) → Tensor
Check that out and targ have the same number of elements and flatten them.
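This is mostly used inside loss functions and metrics to guard against silent shape mismatches between predictions and targets. A minimal sketch:
out  = torch.randn(4, 1)
targ = torch.randn(4)
out_flat, targ_flat = flatten_check(out, targ)   # would raise if the element counts differed
print(out_flat.shape, targ_flat.shape)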