In [1]:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import sys
import ktrain
from ktrain import vision as vis
Using TensorFlow backend.

Download the Dogs vs. Cats dataset (available on Kaggle) and set DATADIR to the extracted folder.
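vis.images_from_folder expects one subfolder per dataset split (named as in the train_test_names argument below), and inside each split one subfolder per class. A minimal sketch of the expected layout, built as a dummy tree (folder names here are assumptions based on the call below, not part of the dataset download):

```python
import os
import tempfile

# Expected layout for vis.images_from_folder with train_test_names=['train', 'valid']:
#   DATADIR/train/cats/*.jpg   DATADIR/train/dogs/*.jpg
#   DATADIR/valid/cats/*.jpg   DATADIR/valid/dogs/*.jpg
root = tempfile.mkdtemp()
for split in ('train', 'valid'):
    for label in ('cats', 'dogs'):
        os.makedirs(os.path.join(root, split, label))

# Each class label is inferred from its subfolder name.
classes = sorted(os.listdir(os.path.join(root, 'train')))
print(classes)  # ['cats', 'dogs']
```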

In [2]:
DATADIR = 'data/dogscats'
(train_data, val_data, preproc) = vis.images_from_folder(
                                              datadir=DATADIR,
                                              data_aug=vis.get_data_aug(horizontal_flip=True),
                                              train_test_names=['train', 'valid'],
                                              target_size=(224, 224), color_mode='rgb')
model = vis.image_classifier('pretrained_resnet50', train_data, val_data, freeze_layers=15)
Found 23000 images belonging to 2 classes.
Found 23000 images belonging to 2 classes.
Found 2000 images belonging to 2 classes.
The normalization scheme has been changed for use with a pretrained_resnet50 model. If you decide to use a different model, please reload your dataset with a ktrain.vision.data.images_from* function.

Is Multi-Label? False
pretrained_resnet50 model created.
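The freeze_layers=15 argument keeps the first 15 layers of the pretrained base fixed during fine-tuning, so only the later layers adapt to the new task. A toy illustration of the idea (a stand-in Layer class, not ktrain's or Keras's internals):

```python
# Toy stand-in for a network layer: just a name and a trainable flag.
class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True

def freeze_layers(layers, n):
    """Mark the first n layers untrainable, mirroring what
    freeze_layers=15 does to the pretrained base (sketch only)."""
    for layer in layers[:n]:
        layer.trainable = False
    return layers

model_layers = [Layer(f'layer_{i}') for i in range(20)]
freeze_layers(model_layers, 15)
print(sum(l.trainable for l in model_layers))  # 5 layers remain trainable
```

Frozen layers keep their ImageNet weights, which both speeds up training and guards against destroying generic low-level features early on.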
In [3]:
learner = ktrain.get_learner(model=model, train_data=train_data, val_data=val_data, 
                             workers=8, use_multiprocessing=False, batch_size=64)
In [4]:
learner.fit_onecycle(1e-4, 3)

begin training using onecycle policy with max lr of 0.0001...
Epoch 1/3
  2/359 [..............................] - ETA: 29:33 - loss: 1.9746 - acc: 0.5000
/usr/local/lib/python3.6/dist-packages/keras/callbacks.py:122: UserWarning: Method on_batch_end() is slow compared to the batch update (0.436140). Check your callbacks.
  % delta_t_median)
359/359 [==============================] - 133s 371ms/step - loss: 0.3377 - acc: 0.9052 - val_loss: 0.0678 - val_acc: 0.9820
Epoch 2/3
359/359 [==============================] - 124s 345ms/step - loss: 0.1049 - acc: 0.9723 - val_loss: 0.0361 - val_acc: 0.9865
Epoch 3/3
359/359 [==============================] - 124s 346ms/step - loss: 0.0578 - acc: 0.9819 - val_loss: 0.0253 - val_acc: 0.9920
Out[4]:
<keras.callbacks.History at 0x7f77fc6c5400>
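fit_onecycle implements Leslie Smith's 1cycle policy: the learning rate ramps up to the given maximum over roughly the first half of training, then anneals back down. A simplified triangular sketch of that shape (ktrain's exact schedule and its momentum handling may differ; start_div is an assumed name):

```python
def onecycle_lr(step, total_steps, max_lr, start_div=10.0):
    """Simplified 1cycle schedule: rise linearly from max_lr/start_div
    to max_lr at the midpoint, then fall back down (sketch only)."""
    base_lr = max_lr / start_div
    half = total_steps / 2.0
    if step <= half:
        frac = step / half                   # rising phase
    else:
        frac = (total_steps - step) / half   # falling phase
    return base_lr + (max_lr - base_lr) * frac

total = 3 * 359  # 3 epochs of 359 batches, as in the run above
print(onecycle_lr(0, total, 1e-4))           # starts low
print(onecycle_lr(total, total, 1e-4))       # ends low again
```

The brief high-lr phase in the middle acts as a regularizer, while the low rates at the start and end let the network settle into a good minimum.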
In [5]:
learner.fit_onecycle(1e-4, 3)

begin training using onecycle policy with max lr of 0.0001...
Epoch 1/3
359/359 [==============================] - 124s 347ms/step - loss: 0.0449 - acc: 0.9866 - val_loss: 0.0256 - val_acc: 0.9925
Epoch 2/3
359/359 [==============================] - 125s 348ms/step - loss: 0.0551 - acc: 0.9816 - val_loss: 0.0260 - val_acc: 0.9890
Epoch 3/3
359/359 [==============================] - 124s 345ms/step - loss: 0.0365 - acc: 0.9881 - val_loss: 0.0179 - val_acc: 0.9935
Out[5]:
<keras.callbacks.History at 0x7f77f6ea7da0>
In [6]:
learner.fit_onecycle(1e-4, 1)

begin training using onecycle policy with max lr of 0.0001...
Epoch 1/1
359/359 [==============================] - 125s 347ms/step - loss: 0.0353 - acc: 0.9879 - val_loss: 0.0275 - val_acc: 0.9925
Out[6]:
<keras.callbacks.History at 0x7f77f6e24240>
In [7]:
learner.fit_onecycle(1e-4/5, 1)

begin training using onecycle policy with max lr of 2e-05...
Epoch 1/1
359/359 [==============================] - 124s 345ms/step - loss: 0.0223 - acc: 0.9916 - val_loss: 0.0170 - val_acc: 0.9955
Out[7]:
<keras.callbacks.History at 0x7f77fce04940>
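The final round drops the max learning rate to 1e-4/5 = 2e-05 because validation loss had stopped improving (0.0179 in cell 5 vs. 0.0275 in cell 6), and indeed val_loss recovers to 0.0170. That manual decision can be sketched as a simple heuristic (next_max_lr is a hypothetical helper, not a ktrain API):

```python
def next_max_lr(val_losses, max_lr, factor=5.0):
    """If the latest val_loss failed to improve on the best so far,
    divide the max lr for the next fit_onecycle round (heuristic
    mirroring the manual 1e-4 -> 1e-4/5 step above; sketch only)."""
    if len(val_losses) >= 2 and val_losses[-1] >= min(val_losses[:-1]):
        return max_lr / factor
    return max_lr

# Final val_loss after each round above: 0.0253, 0.0179, 0.0275 -> reduce
print(next_max_lr([0.0253, 0.0179, 0.0275], 1e-4))  # 2e-05
```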