from fastai.gen_doc.nbdoc import *
from fastai.train import *
from fastai.vision import *
from fastai import *
These methods are automatically added to all Learner objects created after importing this module. They provide convenient access to a number of callbacks, without requiring them to be manually created.
show_doc(fit_one_cycle)
fit_one_cycle [source]
fit_one_cycle(learn:Learner, cyc_len:int, max_lr:Union[float, Collection[float], slice]=slice(None, 0.003, None), moms:Point=(0.95, 0.85), div_factor:float=25.0, pct_start:float=0.3, wd:float=None, callbacks:Optional[Collection[Callback]]=None, **kwargs)
Fit a model following the 1cycle policy.
Fit a model with 1cycle training. See OneCycleScheduler for details.
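For instance, a minimal call might look like this (a sketch assuming a Learner named learn, such as the one built from the MNIST sample below):

learn.fit_one_cycle(3, max_lr=1e-2)  # 3 epochs; max_lr can be a float, a collection, or a slice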
show_doc(lr_find)
See LRFinder for details.
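A typical usage sketch (again assuming a Learner named learn):

learn.lr_find()        # run the LR range test (a short mock training run)
learn.recorder.plot()  # plot loss against learning rate to pick a good max_lr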
show_doc(to_fp16)
See MixedPrecision for details.
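to_fp16 returns the Learner, so it can be chained at creation time; a sketch assuming the data and models used later on this page (and a GPU that supports mixed precision):

learn = create_cnn(data, models.resnet18, metrics=accuracy).to_fp16()  # train in mixed precision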
show_doc(mixup)
mixup [source]
mixup(learn:Learner, alpha:float=0.4, stack_x:bool=False, stack_y:bool=True) → Learner
Add mixup https://arxiv.org/abs/1710.09412 to learn. See MixUpCallback for more details.
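Like to_fp16, mixup returns the Learner, so the call chains; a sketch with the default alpha, assuming the data and models used below:

learn = create_cnn(data, models.resnet18, metrics=accuracy).mixup(alpha=0.4)
learn.fit(1)  # each batch now trains on convex combinations of inputs and targets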
A last extension method comes from the module tta.
show_doc(Learner.TTA, full_name='TTA')
TTA [source]
TTA(learn:Learner, beta:float=0.4, scale:float=1.35, ds_type:DatasetType=<DatasetType.Valid: 2>, with_loss:bool=False) → Tensors
Applies Test Time Augmentation to learn on the dataset ds_type. We take the average of our regular predictions (with a weight beta) with the average of predictions obtained through augmented versions of the training set (with a weight 1-beta). The transforms decided for the training set are applied with a few changes: scale controls the scale for zoom (which isn't random), and the cropping isn't random but we make sure to get the four corners of the image. Flipping isn't random either, but is applied once to each of those corner images (which makes 8 augmented versions in total).
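A usage sketch, assuming a trained Learner named learn like the ones created below:

preds, targets = learn.TTA(ds_type=DatasetType.Valid)  # averaged regular + augmented predictions
accuracy(preds, targets)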
We'll show examples below using our MNIST sample.
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
show_doc(ShowGraph)
class ShowGraph [source]
ShowGraph(learn:Learner) :: LearnerCallback
Update a graph of learner stats and metrics after each epoch.
learn = create_cnn(data, models.resnet18, metrics=accuracy, callback_fns=ShowGraph)
learn.fit(3)
show_doc(ShowGraph.on_epoch_end, doc_string=False)
on_epoch_end [source]
on_epoch_end(n_epochs:int, last_metrics:MetricsList, **kwargs) → bool
If we have last_metrics, plot them in self.pbar. Set the size of the graph with n_epochs.
show_doc(GradientClipping)
class GradientClipping [source]
GradientClipping(learn:Learner, clip:float) :: LearnerCallback
Gradient clipping during training.
Clips gradients to a maximum absolute value of clip during training. For instance:
learn = create_cnn(data, models.resnet18, metrics=accuracy,
callback_fns=partial(GradientClipping, clip=0.1))
learn.fit(1)
Total time: 00:11
epoch  train loss  valid loss  accuracy  time
0      0.086958    0.038721    0.989696  (00:11)
show_doc(GradientClipping.on_backward_end, doc_string=False)
on_backward_end [source]
on_backward_end(**kwargs)
Clip the gradients after they are computed but before the optimizer step.
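The underlying idea can be sketched in plain PyTorch (illustrative only, not fastai's exact code; clip_after_backward is a hypothetical helper):

import torch.nn as nn

def clip_after_backward(model:nn.Module, clip:float):
    # After loss.backward() has filled in .grad, clamp each gradient's
    # absolute value to `clip`; the optimizer step then uses the clipped grads.
    nn.utils.clip_grad_value_(model.parameters(), clip)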
show_doc(BnFreeze)
class BnFreeze [source]
BnFreeze(learn:Learner) :: LearnerCallback
Freeze moving average statistics in all non-trainable batchnorm layers.
For batchnorm layers where requires_grad==False, you generally don't want to update their moving average statistics, in order to avoid the model's statistics getting out of sync with its pre-trained weights. You can add this callback to automate this freezing of statistics (internally, it calls eval on these layers).
learn = create_cnn(data, models.resnet18, metrics=accuracy, callback_fns=BnFreeze)
learn.fit(1)
Total time: 00:07
epoch  train loss  valid loss  accuracy  time
0      0.079278    0.041832    0.985280  (00:07)
show_doc(BnFreeze.on_epoch_begin, doc_string=False)
on_epoch_begin [source]
on_epoch_begin(**kwargs:Any)
Put the batchnorm layers back in eval mode after the model has been set to train mode.
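As an illustration, the freezing amounts to something like the following sketch (set_bn_eval is a hypothetical helper, not necessarily fastai's exact code):

import torch.nn as nn

bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)

def set_bn_eval(module:nn.Module):
    # Recursively put frozen batchnorm children in eval mode so their
    # running mean/variance stop updating while the rest of the model trains.
    for child in module.children():
        grads = [p.requires_grad for p in child.parameters()]
        if isinstance(child, bn_types) and grads and not any(grads):
            child.eval()
        set_bn_eval(child)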
show_doc(one_cycle_scheduler)
one_cycle_scheduler [source]
one_cycle_scheduler(lr_max:float, **kwargs:Any) → OneCycleScheduler
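A usage sketch, assuming one_cycle_scheduler returns a callable that builds a OneCycleScheduler for a given Learner (which is how fit_one_cycle wires it up):

sched = one_cycle_scheduler(lr_max=1e-2, moms=(0.95, 0.85))
learn.fit(1, callbacks=[sched(learn)])  # roughly equivalent to learn.fit_one_cycle(1, max_lr=1e-2)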