Creates a scheduler that lets you train a model following different TrainingPhases.
from fastai.gen_doc.nbdoc import *
from fastai.callbacks.general_sched import *
from fastai.vision import *
show_doc(TrainingPhase, doc_string=False)
class TrainingPhase [source][test]

TrainingPhase(length:int, lrs:Floats, moms:Floats, lr_anneal:AnnealFunc=None, mom_anneal:AnnealFunc=None)

No tests found for TrainingPhase. To contribute a test please refer to this guide and this discussion.
Create a phase for training a model during length iterations, following a schedule given by lrs and lr_anneal, moms and mom_anneal. More specifically, the phase will make the learning rate (or momentum) vary from the first value of lrs (or moms) to the second, following lr_anneal (or mom_anneal). If an annealing function is specified but lrs or moms is a float, it will decay to 0. If no annealing function is specified, the default is linear annealing if lrs (or moms) is a tuple, and a constant parameter if it's a float.
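To make that behavior concrete, here is a minimal pure-Python sketch (assumed reimplementations for illustration, not the library code) of the two interpolation schemes an annealing function implements:

```python
import math

def annealing_linear(start, end, pct):
    # Linearly interpolate from `start` to `end` as `pct` goes 0 -> 1.
    return start + pct * (end - start)

def annealing_cos(start, end, pct):
    # Cosine interpolation: changes slowly near the endpoints,
    # quickly in the middle.
    return end + (start - end) / 2 * (math.cos(math.pi * pct) + 1)

# Both return `start` at pct=0 and `end` at pct=1; they only differ
# in the path taken between the two values.
```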
jekyll_note("""If you want to use discriminative learning rates, you can pass a numpy array of learning rates (or a tuple of them for start and stop).""")
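Since annealing is applied elementwise, each layer group follows its own schedule between its start and stop value. A small sketch (hypothetical values, linear anneal assumed for simplicity):

```python
import numpy as np

# Hypothetical per-layer-group learning rates: earlier groups train
# with smaller rates than later ones.
lr_start = np.array([1e-4, 1e-3, 1e-2])
lr_stop = lr_start / 100

def anneal(start, end, pct):
    # Linear anneal works elementwise on arrays, so every group
    # is interpolated between its own start and stop values.
    return start + pct * (end - start)

halfway = anneal(lr_start, lr_stop, 0.5)  # one lr per group at mid-phase
```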
Let's make an example by using this to code SGD with warm restarts.
def fit_sgd_warm(learn, n_cycles, lr, mom, cycle_len, cycle_mult):
    n = len(learn.data.train_dl)
    phases = [TrainingPhase(n * (cycle_len * cycle_mult**i), lr, mom, lr_anneal=annealing_cos) for i in range(n_cycles)]
    sched = GeneralScheduler(learn, phases)
    learn.callbacks.append(sched)
    if cycle_mult != 1:
        total_epochs = int(cycle_len * (1 - cycle_mult**n_cycles)/(1 - cycle_mult))
    else: total_epochs = n_cycles * cycle_len
    learn.fit(total_epochs)
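The total_epochs computation above sums the lengths of the cycles: cycle i lasts cycle_len * cycle_mult**i epochs, a geometric series. As a standalone sketch of just that arithmetic:

```python
def total_epochs(n_cycles, cycle_len, cycle_mult):
    # Cycle i lasts cycle_len * cycle_mult**i epochs; the closed-form
    # sum of the geometric series gives the total training length.
    if cycle_mult != 1:
        return int(cycle_len * (1 - cycle_mult**n_cycles) / (1 - cycle_mult))
    return n_cycles * cycle_len

# With n_cycles=3, cycle_len=1, cycle_mult=2: 1 + 2 + 4 = 7 epochs.
```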
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
learn = Learner(data, simple_cnn((3,16,16,2)), metrics=accuracy)
fit_sgd_warm(learn, 3, 1e-3, 0.9, 1, 2)
epoch | train_loss | valid_loss | accuracy |
---|---|---|---|
1 | 0.185262 | 0.164344 | 0.945044 |
2 | 0.140157 | 0.129574 | 0.954367 |
3 | 0.124761 | 0.123591 | 0.958292 |
4 | 0.109466 | 0.107876 | 0.964671 |
5 | 0.099668 | 0.091696 | 0.966143 |
6 | 0.087345 | 0.085187 | 0.970069 |
7 | 0.085803 | 0.084836 | 0.971050 |
learn.recorder.plot_lr()
show_doc(GeneralScheduler)
class GeneralScheduler [source][test]

GeneralScheduler(learn:Learner, phases:Collection[TrainingPhase], start_epoch:int=None) :: LearnerCallback

No tests found for GeneralScheduler. To contribute a test please refer to this guide and this discussion.

Schedule multiple TrainingPhase for a Learner.
You don't call these yourself - they're called by fastai's Callback
system automatically to enable the class's functionality.
show_doc(GeneralScheduler.on_batch_end, doc_string=False)
on_batch_end [source][test]

on_batch_end(train, **kwargs:Any)

No tests found for on_batch_end. To contribute a test please refer to this guide and this discussion.
Takes a step in the current phase and prepares the hyperparameters for the next batch.
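Conceptually, each batch the scheduler advances the current phase's annealer by one step and moves on to the next phase once the current one is exhausted. A simplified sketch of that control flow (assumed names, not the fastai internals), using a linear anneal over an lr-only schedule:

```python
def linear(start, end, pct):
    # Linearly interpolate from `start` to `end` as `pct` goes 0 -> 1.
    return start + pct * (end - start)

def run_schedule(phases):
    """Each phase is (length, lr_start, lr_end). Yield the lr used on
    each iteration, switching to the next phase when one finishes."""
    for length, lr_start, lr_end in phases:
        for i in range(length):
            yield linear(lr_start, lr_end, i / max(length - 1, 1))

# A 3-iteration decay from 1.0 to 0.0, then 2 iterations held at 0.5.
lrs = list(run_schedule([(3, 1.0, 0.0), (2, 0.5, 0.5)]))
```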
show_doc(GeneralScheduler.on_train_begin, doc_string=False)
on_train_begin [source][test]

on_train_begin(epoch:int, **kwargs:Any)

No tests found for on_train_begin. To contribute a test please refer to this guide and this discussion.
Initializes the hyperparameters to the start values of the first phase.