This module contains the basic PyTorch functions needed by the other modules of the fastai library (it is split from core, which contains the functions not requiring PyTorch). Its documentation can safely be skipped on a first read, unless you want to know what a given function does.
from fastai.gen_doc.nbdoc import *
from fastai.torch_core import *
AdamW = partial(optim.Adam, betas=(0.9,0.99))  # Adam configured with the betas fastai uses by default
bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)  # batchnorm layers that get special treatment
default_device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')  # GPU if available, else CPU
show_doc(flatten_model, full_name='flatten')
flatten [source]
flatten(m)
Flattens all the layers of m.
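For example, a minimal sketch (the model layout is illustrative; nn comes from the star import above): flatten_model turns a nested module into a flat list of its leaf layers.
net = nn.Sequential(nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()), nn.Linear(8, 2))
layers = flatten_model(net)
print([type(l).__name__ for l in layers])  # ['Conv2d', 'ReLU', 'Linear'] -- nesting removed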
show_doc(model2half)
show_doc(np2model_tensor)
show_doc(requires_grad, doc_string=False)
If b is None, returns the requires_grad state of the first layer of m. Otherwise, sets requires_grad=b in all children of m.
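A quick sketch of both behaviors (the toy model is illustrative):
m = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))
print(requires_grad(m))  # True: the requires_grad state of the first layer
requires_grad(m, False)  # freeze every parameter of m
requires_grad(m, True)   # unfreeze them again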
show_doc(tensor)
tensor [source]
tensor(x:Any, *rest) → Tensor
Like torch.as_tensor, but handles lists too, and can take multiple vector elements directly.
Ensures x is a torch Tensor.
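A few examples of inputs it accepts (np comes from the star import above):
tensor([1, 2, 3])     # from a list
tensor(1, 2, 3)       # elements passed directly, same result
tensor(np.arange(4))  # numpy arrays work as with torch.as_tensor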
show_doc(to_data)
show_doc(to_detach)
to_detach [source]
to_detach(b:Tensors, cpu:bool=True)
Recursively detaches the tensors in b and puts them on the CPU if cpu=True.
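A minimal sketch (the nested batch structure is illustrative):
batch = [torch.randn(2, 3), [torch.randn(2), torch.randn(2)]]
detached = to_detach(batch)                 # same structure, no grad history, moved to CPU
detached_gpu = to_detach(batch, cpu=False)  # detached but left on its original device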
show_doc(to_device)
show_doc(to_half, doc_string=False)
to_half [source]
to_half(b:Collection[Tensor]) → Collection[Tensor]
Put the input of the batch b in half precision.
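A minimal sketch, assuming b is an (input, target) pair and only the input is converted, as the docstring suggests:
x, y = torch.randn(2, 3), torch.tensor([0, 1])
xb, yb = to_half((x, y))
print(xb.dtype, yb.dtype)  # torch.float16 torch.int64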
show_doc(to_np)
to_np [source]
to_np(x)
Convert x to a numpy array.
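For instance:
a = to_np(torch.randn(2, 2))
print(type(a))  # <class 'numpy.ndarray'>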
show_doc(apply_init)
apply_init [source]
apply_init(m, init_func:LayerFunc)
Initialize all non-batchnorm layers of m with init_func.
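A minimal sketch (the model and init function are illustrative):
m = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.Linear(8, 2))
apply_init(m, nn.init.kaiming_normal_)  # conv/linear weights re-initialized; batchnorm left alone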
show_doc(apply_leaf)
show_doc(cond_init)
show_doc(in_channels)
show_doc(first_layer)
show_doc(last_layer)
show_doc(num_children)
show_doc(range_children)
show_doc(trainable_params)
show_doc(bn2float)
show_doc(set_bn_eval)
show_doc(split_bn_bias)
split_bn_bias [source]
split_bn_bias(layer_groups:ModuleList) → ModuleList
Sort each layer in layer_groups into batchnorm (bn_types) and non-batchnorm groups.
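A minimal sketch (the layer group is illustrative), assuming each incoming group is divided in two so the batchnorm parameters can be treated separately, e.g. excluded from weight decay:
groups = [nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))]
split = split_bn_bias(groups)
print(len(split))  # 2: one group without the batchnorm layers, one with them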
show_doc(calc_loss)
calc_loss [source]
calc_loss(y_pred:Tensor, y_true:Tensor, loss_func:LossFunction)
Calculate loss between y_pred and y_true using loss_func.
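A minimal usage sketch (shapes and loss function are illustrative):
preds, targs = torch.randn(4, 10), torch.randint(0, 10, (4,))
losses = calc_loss(preds, targs, loss_func=nn.CrossEntropyLoss())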
show_doc(data_collate)
show_doc(model_type)
show_doc(np_address)
show_doc(split_model, doc_string=False)
Splits the model according to the layers in splits. If splits are layers, the model is split at those layers (not included), sequentially. If want_idxs is True, the corresponding indexes are also returned. If splits are lists of layers, the model is split according to those.
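A minimal sketch (the model is illustrative), splitting just before the Linear layer:
m = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Linear(8, 2))
groups = split_model(m, splits=[m[2]])  # [Sequential(Conv2d, ReLU), Sequential(Linear)]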
show_doc(split_model_idx)
show_doc(trange_of)
trange_of [source]
trange_of(x)
Return a tensor from a range that has the same length as x.
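For instance:
trange_of([10, 20, 30])  # tensor([0, 1, 2]) -- same length as the input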
show_doc(tensor__array__)
tensor__array__ [source]
tensor__array__(dtype=None)
show_doc(init_default)
show_doc(log_uniform)
log_uniform [source]
log_uniform(low, high, size:Optional[List[int]]=None) → FloatOrTensor
Draw 1 or shape=size floats from a log-uniform distribution between low and high (uniform between log(low) and log(high), then exponentiated).
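An example (the bounds are illustrative), handy for sampling hyperparameters such as learning rates:
log_uniform(1e-4, 1e-1)            # a single float between 1e-4 and 1e-1, log-uniform
log_uniform(1e-4, 1e-1, size=[8])  # a tensor of 8 such draws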
show_doc(grab_idx)
grab_idx [source]
grab_idx(x, i, batch_first:bool=True)
show_doc(uniform_int)
uniform_int [source]
uniform_int(low:int, high:int, size:Optional[List[int]]=None) → IntOrTensor
Generate an int, or a tensor of shape size of ints, between low and high (inclusive).
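For instance:
uniform_int(0, 5)            # a single int in [0, 5]
uniform_int(0, 5, size=[4])  # a tensor of 4 ints in [0, 5]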
show_doc(to_cpu)
show_doc(logit)
show_doc(FloatItem)
show_doc(logit_)
show_doc(rand_bool)
rand_bool [source]
rand_bool(p:float, size:Optional[List[int]]=None) → BoolOrTensor
Draw 1 or shape=size random booleans, each True with probability p.
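For instance:
rand_bool(0.5)            # a single random boolean
rand_bool(0.5, size=[4])  # a tensor of 4 draws, each True with probability 0.5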
show_doc(uniform)
uniform [source]
uniform(low:Number, high:Number=None, size:Optional[List[int]]=None) → FloatOrTensor
Draw 1 or shape=size random floats from a uniform dist: min=low, max=high.
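For instance:
uniform(0., 1.)              # a single float in [0, 1]
uniform(0., 1., size=[2, 2]) # a 2x2 tensor of uniform draws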