from fastai.gen_doc.nbdoc import *
from fastai.train import *
from fastai.vision import *
These methods are automatically added to all Learner objects created after importing this module. They provide convenient access to a number of callbacks without requiring them to be manually created.
show_doc(fit_one_cycle)
fit_one_cycle [source]

fit_one_cycle(learn:Learner, cyc_len:int, max_lr:Union[float, Collection[float], slice]=slice(None, 0.003, None), moms:Point=(0.95, 0.85), div_factor:float=25.0, pct_start:float=0.3, wd:float=None, callbacks:Optional[Collection[Callback]]=None, tot_epochs:int=None, start_epoch:int=1)
Fit a model following the 1cycle policy.
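To make the schedule concrete, here is a dependency-free sketch of the learning-rate shape that the 1cycle policy follows: warm up from `max_lr/div_factor` to `max_lr` over the first `pct_start` of training, then anneal back down. It assumes cosine interpolation in both phases; the function name `one_cycle_lr` is ours for illustration, not part of the library.

```python
import math

def one_cycle_lr(pct, lr_max=0.003, div_factor=25.0, pct_start=0.3, final_div=None):
    """Sketch of the 1cycle LR shape at training progress pct in [0, 1]:
    cosine warm-up from lr_max/div_factor to lr_max over the first pct_start
    of training, then cosine anneal down to a much smaller final value."""
    if final_div is None:
        final_div = div_factor * 1e4
    def cos_anneal(start, end, p):
        # cosine interpolation from start (p=0) to end (p=1)
        return end + (start - end) / 2 * (math.cos(math.pi * p) + 1)
    if pct < pct_start:
        return cos_anneal(lr_max / div_factor, lr_max, pct / pct_start)
    return cos_anneal(lr_max, lr_max / final_div, (pct - pct_start) / (1 - pct_start))

# The LR peaks at lr_max exactly when pct == pct_start
peak = one_cycle_lr(0.3)
```

Sampling this function over a training run reproduces the characteristic one-peak curve that `learn.recorder.plot_lr()` displays after `fit_one_cycle`.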
show_doc(one_cycle_scheduler)
one_cycle_scheduler [source]

one_cycle_scheduler(lr_max:float, **kwargs:Any) → OneCycleScheduler

Instantiate a OneCycleScheduler with lr_max.
See OneCycleScheduler for details.
show_doc(lr_find)
See LRFinder for details.
show_doc(to_fp16)
See MixedPrecision for details.
show_doc(to_fp32)
show_doc(mixup)
mixup [source]

mixup(learn:Learner, alpha:float=0.4, stack_x:bool=False, stack_y:bool=True) → Learner

Add mixup (https://arxiv.org/abs/1710.09412) to learn.
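As a concrete illustration of what the callback does to a batch, here is a plain-Python sketch of mixup on a single pair of samples: inputs and one-hot targets are blended with a coefficient drawn from a Beta(alpha, alpha) distribution, per the paper above. `mixup_pair` is a name of our own for this sketch, not a library function.

```python
import random

def mixup_pair(x1, y1, x2, y2, alpha=0.4):
    """Sketch of mixup on one pair of samples: blend the inputs and their
    one-hot targets with a Beta(alpha, alpha)-drawn coefficient lam."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

# Blend a "class 0" sample with a "class 1" sample
mixed_x, mixed_y = mixup_pair([1.0, 0.0], [1, 0], [0.0, 1.0], [0, 1])
```

With `alpha=0.4` the Beta distribution is U-shaped, so `lam` is usually close to 0 or 1 and most mixed samples stay near one of the originals.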
See MixUpCallback for more details.

show_doc(ClassificationInterpretation)

class ClassificationInterpretation [source]

ClassificationInterpretation(learn:Learner, probs:Tensor, y_true:Tensor, losses:Tensor, ds_type:DatasetType=<DatasetType.Valid: 2>)

Interpretation methods for classification models.
We'll show examples below using our MNIST sample. As usual, the on_something methods are called directly by the fastai library; you don't need to call them yourself.
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
show_doc(ShowGraph, title_level=3)
class ShowGraph [source]

ShowGraph(learn) :: LearnerCallback
Update a graph of learner stats and metrics after each epoch.
learn = create_cnn(data, models.resnet18, metrics=accuracy, callback_fns=ShowGraph)
learn.fit(3)
show_doc(ShowGraph.on_epoch_end)
on_epoch_end [source]

on_epoch_end(n_epochs:int, last_metrics:MetricsList, **kwargs) → bool

If we have last_metrics, plot them in our pbar graph.
show_doc(GradientClipping)
class GradientClipping [source]

GradientClipping(learn:Learner, clip:float=0.0) :: LearnerCallback
Gradient clipping during training.
learn = create_cnn(data, models.resnet18, metrics=accuracy,
callback_fns=partial(GradientClipping, clip=0.1))
learn.fit(1)
epoch | train_loss | valid_loss | accuracy |
---|---|---|---|
1 | 0.131133 | 0.078190 | 0.973013 |
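To show what clipping by a global norm means, here is a stdlib-only sketch on a flat list of floats standing in for the model's gradients; if their L2 norm exceeds `clip`, they are all rescaled so the norm equals `clip`. The function name `clip_grad_norm` is ours for this sketch.

```python
import math

def clip_grad_norm(grads, clip=0.1):
    """Sketch of gradient clipping by global L2 norm: if the norm of all
    gradients exceeds `clip`, scale every gradient down by the same factor
    so the total norm becomes exactly `clip`."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > clip:
        scale = clip / total_norm
        return [g * scale for g in grads]
    return grads
```

Because every gradient is scaled by the same factor, clipping preserves the update's direction and only bounds its magnitude.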
show_doc(GradientClipping.on_backward_end)
show_doc(BnFreeze)
class BnFreeze [source]

BnFreeze(learn) :: LearnerCallback
Freeze moving average statistics in all non-trainable batchnorm layers.
For batchnorm layers where requires_grad==False, you generally don't want to update their moving average statistics, in order to avoid the model's statistics getting out of sync with its pre-trained weights. You can add this callback to automate this freezing of statistics (internally, it calls eval on these layers).
learn = create_cnn(data, models.resnet18, metrics=accuracy, callback_fns=BnFreeze)
learn.fit(1)
epoch | train_loss | valid_loss | accuracy |
---|---|---|---|
1 | 0.132564 | 0.078910 | 0.972031 |
show_doc(BnFreeze.on_epoch_begin)
on_epoch_begin [source]

on_epoch_begin(**kwargs:Any)

Put bn layers in eval mode just after model.train().
show_doc(ClassificationInterpretation.plot_top_losses)
_cl_int_plot_top_losses [source]

_cl_int_plot_top_losses(k, largest=True, figsize=(12, 12), heatmap:bool=True)

Show images in top_losses along with their prediction, actual, loss, and probability of predicted class.
show_doc(ClassificationInterpretation.from_learner)
_cl_int_from_learner [source]

_cl_int_from_learner(learn:Learner, ds_type:DatasetType=<DatasetType.Valid: 2>, tta=False)

Create an instance of ClassificationInterpretation. tta indicates if we want to use Test Time Augmentation.
show_doc(ClassificationInterpretation.top_losses)
top_losses [source]

top_losses(k:int=None, largest=True)

k largest (or smallest) losses and indexes, defaulting to all losses (sorted by largest).
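The selection logic amounts to a sort over per-sample losses; here is a stdlib-only sketch with plain lists standing in for the loss tensor (the function name shadows the method purely for illustration).

```python
def top_losses(losses, k=None, largest=True):
    """Sketch: return the k largest (or smallest) losses paired with their
    indexes, defaulting to all losses sorted descending."""
    order = sorted(range(len(losses)), key=lambda i: losses[i], reverse=largest)
    if k is not None:
        order = order[:k]
    return [(losses[i], i) for i in order]

result = top_losses([0.1, 2.3, 0.7], k=2)  # [(2.3, 1), (0.7, 2)]
```

The returned indexes are what plot_top_losses uses to look the corresponding images back up in the dataset.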
show_doc(ClassificationInterpretation.confusion_matrix)
show_doc(ClassificationInterpretation.most_confused)
most_confused [source]

most_confused(min_val:int=0, slice_size:int=1) → Collection[Tuple[str, str, int]]

Sorted descending list of largest non-diagonal entries of confusion matrix, presented as actual, predicted, number of occurrences.
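In other words, it scans the off-diagonal cells of the confusion matrix and reports the most frequent misclassifications. A stdlib-only sketch (the nested-list matrix and class names are made up for illustration):

```python
def most_confused(cm, classes, min_val=0):
    """Sketch: collect non-diagonal confusion-matrix entries above min_val
    and sort them descending, as (actual, predicted, count) tuples."""
    entries = [(classes[i], classes[j], cm[i][j])
               for i in range(len(cm)) for j in range(len(cm))
               if i != j and cm[i][j] > min_val]
    return sorted(entries, key=lambda t: t[2], reverse=True)

# Rows are actual classes, columns are predicted classes
cm = [[50, 2, 0],
      [5, 45, 1],
      [0, 3, 48]]
worst = most_confused(cm, ['cat', 'dog', 'bird'])  # ('dog', 'cat', 5) first
```

The diagonal (correct predictions) is excluded, so the list directly highlights which class pairs the model mixes up most.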
show_doc(ClassificationInterpretation.plot_confusion_matrix)
plot_confusion_matrix [source]

plot_confusion_matrix(normalize:bool=False, title:str='Confusion matrix', cmap:Any='Blues', slice_size:int=1, norm_dec:int=2, **kwargs)

Plot the confusion matrix, with title and using cmap.
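A note on the normalize and norm_dec parameters: with normalize=True each row is divided by its total, so cells become per-actual-class fractions rounded to norm_dec decimals. A sketch of that transformation, assuming row-wise normalization (the helper name `normalize_cm` is ours):

```python
def normalize_cm(cm, norm_dec=2):
    """Sketch of normalize=True: divide each row of the confusion matrix by
    its row total, so entries become per-class fractions, rounded to
    norm_dec decimal places. Empty rows are left as zeros."""
    return [[round(v / sum(row), norm_dec) if sum(row) else 0.0 for v in row]
            for row in cm]
```

Row normalization makes classes with very different sample counts comparable in the plot.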
show_doc(ClassificationInterpretation.plot_multi_top_losses)
_cl_int_plot_multi_top_losses [source]

_cl_int_plot_multi_top_losses(samples:int=3, figsz:Tuple[int, int]=(8, 8), save_misclassified:bool=False)

Show images in top_losses along with their prediction, actual, loss, and probability of predicted class in a multilabeled dataset.