from fastai.gen_doc.nbdoc import *
from fastai.vision import *
from fastai.callbacks import *
This module contains the callbacks that track one of the metrics computed at the end of each epoch, in order to make decisions about training. To show examples of use, we'll use our sample of MNIST and a simple CNN model.
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
show_doc(TerminateOnNaNCallback)
Sometimes, training diverges and the loss goes to nan. In that case, there's no point continuing, so this callback stops the training.
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy])
learn.fit_one_cycle(1,1e4)
epoch | train_loss | valid_loss | accuracy | time |
---|---|---|---|---|
1 | nan | nan | 0.495584 | 00:02 |
Using this callback prevents that situation from happening.
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy], callbacks=[TerminateOnNaNCallback()])
learn.fit(2,1e4)
epoch | train_loss | valid_loss | accuracy | time |
---|---|---|---|---|
Epoch/Batch (0/5): Invalid loss, terminating training.
You don't call these yourself - they're called by fastai's Callback system automatically to enable the class's functionality.
show_doc(TerminateOnNaNCallback.on_batch_end)
on_batch_end(last_loss, epoch, num_batch, **kwargs:Any) [source]

Test if last_loss is NaN and interrupt training.
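The check itself is simple; a plain-Python analogue (an illustrative sketch with a hypothetical name, not the fastai implementation, which operates on a tensor loss) might look like:

```python
import math

def loss_is_invalid(last_loss):
    # Stop training when the loss has diverged to NaN or infinity.
    loss = float(last_loss)
    return math.isnan(loss) or math.isinf(loss)
```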
show_doc(TerminateOnNaNCallback.on_epoch_end)
show_doc(EarlyStoppingCallback)
class EarlyStoppingCallback(learn:Learner, monitor:str='val_loss', mode:str='auto', min_delta:int=0, patience:int=0) :: TrackerCallback [source]

A TrackerCallback that terminates training when the monitored quantity stops improving.
This callback tracks the quantity in monitor during the training of learn. mode can be forced to 'min' or 'max', but by default it will try to determine automatically whether the quantity should be the lowest possible (validation loss) or the highest possible (accuracy). Training stops after patience epochs if the quantity hasn't improved by at least min_delta.
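The stopping rule can be sketched in plain Python (an illustration of the logic just described, with hypothetical names, not the fastai class itself):

```python
class EarlyStopper:
    "Stop after `patience` epochs without an improvement of at least `min_delta`."
    def __init__(self, min_delta=0.0, patience=0, mode='min'):
        self.min_delta, self.patience, self.mode = min_delta, patience, mode
        self.best = float('inf') if mode == 'min' else float('-inf')
        self.wait = 0

    def step(self, value):
        "Record one epoch's monitored value; return True when training should stop."
        improved = (value < self.best - self.min_delta) if self.mode == 'min' else (value > self.best + self.min_delta)
        if improved:
            self.best, self.wait = value, 0
            return False
        self.wait += 1
        return self.wait > self.patience
```

With a maximized metric, min_delta=0.01 and patience=3, a run whose value never moves would be stopped on the fifth step.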
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy],
callback_fns=[partial(EarlyStoppingCallback, monitor='accuracy', min_delta=0.01, patience=3)])
learn.fit(50,1e-42)
epoch | train_loss | valid_loss | accuracy | time |
---|---|---|---|---|
1 | 0.696629 | 0.696266 | 0.362610 | 00:02 |
2 | 0.696333 | 0.696266 | 0.362610 | 00:02 |
3 | 0.696307 | 0.696266 | 0.362610 | 00:02 |
4 | 0.696467 | 0.696266 | 0.362610 | 00:02 |
Epoch 5: early stopping
show_doc(EarlyStoppingCallback.on_train_begin)
show_doc(EarlyStoppingCallback.on_epoch_end)
on_epoch_end(epoch, **kwargs:Any) [source]
Compare the value monitored to its best score and maybe stop training.
show_doc(SaveModelCallback)
class SaveModelCallback(learn:Learner, monitor:str='val_loss', mode:str='auto', every:str='improvement', name:str='bestmodel') :: TrackerCallback [source]

A TrackerCallback that saves the model when the monitored quantity is best.
This callback tracks the quantity in monitor during the training of learn. mode can be forced to 'min' or 'max', but by default it will try to determine automatically whether the quantity should be the lowest possible (validation loss) or the highest possible (accuracy). The model is saved under name whenever determined by every ('improvement' or 'epoch'). The best model is loaded back at the end of training if every='improvement'.
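The per-epoch save decision can be illustrated with a small plain-Python helper (hypothetical names, a sketch of the behaviour described above rather than the actual implementation):

```python
def save_name(every, name, epoch, improved):
    # every='epoch': save a separate file per epoch, e.g. 'model_3'.
    # every='improvement': overwrite a single file, only when the metric improved.
    if every == 'epoch':
        return f'{name}_{epoch}'
    return name if improved else None
```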
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy])
learn.fit_one_cycle(5,1e-4, callbacks=[SaveModelCallback(learn, every='epoch', monitor='accuracy', name='model')])
epoch | train_loss | valid_loss | accuracy | time |
---|---|---|---|---|
1 | 0.678338 | 0.666926 | 0.659470 | 00:02 |
2 | 0.563476 | 0.515598 | 0.907753 | 00:02 |
3 | 0.370079 | 0.337353 | 0.933268 | 00:02 |
4 | 0.281564 | 0.272560 | 0.936212 | 00:02 |
5 | 0.260385 | 0.263720 | 0.936703 | 00:02 |
Choosing every='epoch' saves an individual model at the end of each epoch.
!ls ~/.fastai/data/mnist_sample/models
bestmodel_1.pth model_1.pth model_4.pth stage-1.pth bestmodel_2.pth model_2.pth model_5.pth tmp.pth bestmodel_3.pth model_3.pth one_epoch.pth trained_model.pth
learn.fit_one_cycle(5,1e-4, callbacks=[SaveModelCallback(learn, every='improvement', monitor='accuracy', name='best')])
epoch | train_loss | valid_loss | accuracy | time |
---|---|---|---|---|
1 | 0.238711 | 0.226684 | 0.939156 | 00:02 |
2 | 0.181980 | 0.176078 | 0.940628 | 00:02 |
3 | 0.159314 | 0.163088 | 0.942100 | 00:02 |
4 | 0.160453 | 0.159423 | 0.943081 | 00:02 |
5 | 0.159717 | 0.159017 | 0.943081 | 00:02 |
Better model found at epoch 1 with accuracy value: 0.9391560554504395.
Better model found at epoch 2 with accuracy value: 0.9406280517578125.
Better model found at epoch 3 with accuracy value: 0.9421001076698303.
Better model found at epoch 4 with accuracy value: 0.9430814385414124.
Choosing every='improvement' saves only the single best model out of all epochs during training.
!ls ~/.fastai/data/mnist_sample/models
best.pth bestmodel_3.pth model_3.pth one_epoch.pth trained_model.pth bestmodel_1.pth model_1.pth model_4.pth stage-1.pth bestmodel_2.pth model_2.pth model_5.pth tmp.pth
show_doc(SaveModelCallback.on_epoch_end)
on_epoch_end(epoch, **kwargs:Any) [source]
Compare the value monitored to its best score and maybe save the model.
show_doc(SaveModelCallback.on_train_end)
show_doc(ReduceLROnPlateauCallback)
class ReduceLROnPlateauCallback(learn:Learner, monitor:str='val_loss', mode:str='auto', patience:int=0, factor:float=0.2, min_delta:int=0) :: TrackerCallback [source]

A TrackerCallback that reduces learning rate when a metric has stopped improving.
This callback tracks the quantity in monitor during the training of learn. mode can be forced to 'min' or 'max', but by default it will try to determine automatically whether the quantity should be the lowest possible (validation loss) or the highest possible (accuracy). The learning rate is multiplied by factor after patience epochs if the quantity hasn't improved by min_delta.
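A plain-Python sketch of this schedule (illustrative only, with hypothetical names; the real callback adjusts the learner's optimizer in place):

```python
class PlateauLR:
    "Multiply lr by `factor` after `patience` epochs without improvement of `min_delta`."
    def __init__(self, lr, factor=0.2, patience=0, min_delta=0.0):
        self.lr, self.factor = lr, factor
        self.patience, self.min_delta = patience, min_delta
        self.best, self.wait = float('inf'), 0

    def step(self, val_loss):
        "Record one epoch's validation loss; return the (possibly reduced) lr."
        if val_loss < self.best - self.min_delta:
            self.best, self.wait = val_loss, 0
        else:
            self.wait += 1
            if self.wait > self.patience:
                self.lr *= self.factor
                self.wait = 0
        return self.lr
```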
show_doc(ReduceLROnPlateauCallback.on_train_begin)
show_doc(ReduceLROnPlateauCallback.on_epoch_end)
on_epoch_end(epoch, **kwargs:Any) [source]
Compare the value monitored to its best and maybe reduce lr.
show_doc(TrackerCallback)
class TrackerCallback(learn:Learner, monitor:str='val_loss', mode:str='auto') :: LearnerCallback [source]

A LearnerCallback that keeps track of the best value in monitor.
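The 'auto' mode resolution can be illustrated with a sketch (an assumption about the rule: monitor names containing 'loss' or 'error' are minimized, anything else maximized; check the fastai source for the exact heuristic):

```python
def resolve_mode(monitor, mode='auto'):
    # An explicit 'min' or 'max' always wins; otherwise guess from the name.
    if mode in ('min', 'max'):
        return mode
    return 'min' if ('loss' in monitor or 'error' in monitor) else 'max'
```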
show_doc(TrackerCallback.get_monitor_value)
show_doc(TrackerCallback.on_train_begin)