%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import ktrain
from ktrain import vision as vis
Using TensorFlow backend.
DATADIR = 'data/planet'
ORIGINAL_DATA = DATADIR+'/train_v2.csv'
CONVERTED_DATA = DATADIR+'/train_v2-CONVERTED.csv'
labels = vis.preprocess_csv(ORIGINAL_DATA,
                            CONVERTED_DATA,
                            x_col='image_name', y_col='tags', suffix='.jpg')
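Under the hood, `vis.preprocess_csv` expands the space-delimited `tags` column of `train_v2.csv` into one binary column per label, which `images_from_csv` then consumes via `label_columns`. A minimal sketch of that transformation in plain Python (the helper name and sample rows are illustrative, not ktrain's internals):

```python
# Sketch: expand a space-delimited multi-label column into one-hot columns.
# This mimics what vis.preprocess_csv does conceptually; names are illustrative.
def one_hot_tags(rows, tag_key="tags"):
    """rows: list of dicts, each holding a space-delimited tag string."""
    # Collect the full label vocabulary in sorted order for stable columns.
    labels = sorted({t for r in rows for t in r[tag_key].split()})
    converted = []
    for r in rows:
        tags = set(r[tag_key].split())
        out = {k: v for k, v in r.items() if k != tag_key}
        # One binary column per label, as images_from_csv expects.
        out.update({lab: int(lab in tags) for lab in labels})
        converted.append(out)
    return converted, labels

rows = [{"image_name": "train_0.jpg", "tags": "haze primary"},
        {"image_name": "train_1.jpg", "tags": "agriculture clear primary water"}]
converted, labels = one_hot_tags(rows)
```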
trn, val, preproc = vis.images_from_csv(
    CONVERTED_DATA,
    'image_name',
    directory=DATADIR+'/train-jpg',
    val_filepath=None,
    label_columns=labels,
    data_aug=vis.get_data_aug(horizontal_flip=True, vertical_flip=True))
Found 40479 images belonging to 1 classes.
Found 36357 validated image filenames.
Found 4122 validated image filenames.
model = vis.image_classifier('pretrained_resnet50', trn, val_data=val)
The normalization scheme has been changed for use with a pretrained_resnet50 model. If you decide to use a different model, please reload your dataset with a ktrain.vision.data.images_from* function.
Is Multi-Label? True
pretrained_resnet50 model created.
learner = ktrain.get_learner(model, train_data=trn, val_data=val,
                             batch_size=64, workers=8, use_multiprocessing=False)
learner.freeze(2)
learner.lr_find()
learner.lr_plot()
simulating training for different learning rates... this may take a few moments...
Epoch 1/5
568/568 [==============================] - 213s 375ms/step - loss: 0.6386 - acc: 0.7609
Epoch 2/5
568/568 [==============================] - 201s 353ms/step - loss: 0.2260 - acc: 0.9303
Epoch 3/5
380/568 [===================>..........] - ETA: 1:05 - loss: 0.3707 - acc: 0.9044
done. Please invoke the Learner.lr_plot() method to visually inspect the loss plot to help identify the maximal learning rate associated with falling loss.
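`lr_find` implements the learning-rate range test: the learning rate is grown exponentially batch by batch while the loss is recorded, and `lr_plot` then helps pick a rate from the region where the loss is still falling. A sketch of such an exponential schedule (the bounds and step count here are illustrative assumptions, not ktrain's exact defaults):

```python
# Sketch of the LR range test schedule behind lr_find(): grow the learning
# rate geometrically each batch from start_lr to end_lr; during the real test,
# the loss is recorded at each step and plotted against these rates.
def lr_schedule(start_lr=1e-7, end_lr=1e-2, num_steps=100):
    ratio = (end_lr / start_lr) ** (1.0 / (num_steps - 1))
    return [start_lr * ratio**i for i in range(num_steps)]
```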
learner.fit_onecycle(1e-4, 20)
begin training using onecycle policy with max lr of 0.0001...
Epoch 1/20
568/568 [==============================] - 206s 363ms/step - loss: 0.1980 - acc: 0.9299 - val_loss: 0.6752 - val_acc: 0.6485
Epoch 2/20
568/568 [==============================] - 206s 363ms/step - loss: 0.1426 - acc: 0.9492 - val_loss: 0.1381 - val_acc: 0.9508
Epoch 3/20
568/568 [==============================] - 206s 362ms/step - loss: 0.1248 - acc: 0.9547 - val_loss: 0.1074 - val_acc: 0.9616
Epoch 4/20
568/568 [==============================] - 206s 363ms/step - loss: 0.1159 - acc: 0.9577 - val_loss: 0.0996 - val_acc: 0.9637
Epoch 5/20
568/568 [==============================] - 207s 364ms/step - loss: 0.1090 - acc: 0.9601 - val_loss: 0.0977 - val_acc: 0.9643
Epoch 6/20
568/568 [==============================] - 206s 363ms/step - loss: 0.1054 - acc: 0.9609 - val_loss: 0.0987 - val_acc: 0.9629
Epoch 7/20
568/568 [==============================] - 206s 362ms/step - loss: 0.1038 - acc: 0.9616 - val_loss: 0.1052 - val_acc: 0.9622
Epoch 8/20
568/568 [==============================] - 206s 363ms/step - loss: 0.1030 - acc: 0.9617 - val_loss: 0.0936 - val_acc: 0.9650
Epoch 9/20
568/568 [==============================] - 206s 363ms/step - loss: 0.1003 - acc: 0.9627 - val_loss: 0.0967 - val_acc: 0.9651
Epoch 10/20
568/568 [==============================] - 207s 365ms/step - loss: 0.0998 - acc: 0.9628 - val_loss: 0.0957 - val_acc: 0.9651
Epoch 11/20
568/568 [==============================] - 207s 364ms/step - loss: 0.0977 - acc: 0.9635 - val_loss: 0.0918 - val_acc: 0.9660
Epoch 12/20
568/568 [==============================] - 207s 364ms/step - loss: 0.0961 - acc: 0.9638 - val_loss: 0.0953 - val_acc: 0.9655
Epoch 13/20
568/568 [==============================] - 207s 365ms/step - loss: 0.0938 - acc: 0.9646 - val_loss: 0.0961 - val_acc: 0.9649
Epoch 14/20
568/568 [==============================] - 207s 365ms/step - loss: 0.0920 - acc: 0.9654 - val_loss: 0.0908 - val_acc: 0.9660
Epoch 15/20
568/568 [==============================] - 207s 365ms/step - loss: 0.0904 - acc: 0.9659 - val_loss: 0.0916 - val_acc: 0.9664
Epoch 16/20
568/568 [==============================] - 207s 364ms/step - loss: 0.0885 - acc: 0.9664 - val_loss: 0.0904 - val_acc: 0.9662
Epoch 17/20
568/568 [==============================] - 207s 364ms/step - loss: 0.0867 - acc: 0.9672 - val_loss: 0.0901 - val_acc: 0.9668
Epoch 18/20
568/568 [==============================] - 207s 364ms/step - loss: 0.0856 - acc: 0.9674 - val_loss: 0.0902 - val_acc: 0.9666
Epoch 19/20
568/568 [==============================] - 207s 364ms/step - loss: 0.0833 - acc: 0.9683 - val_loss: 0.0912 - val_acc: 0.9666
Epoch 20/20
568/568 [==============================] - 207s 364ms/step - loss: 0.0813 - acc: 0.9688 - val_loss: 0.0901 - val_acc: 0.9670
<keras.callbacks.History at 0x7f445b61ef28>
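`fit_onecycle` follows Leslie Smith's 1cycle policy: the learning rate ramps up toward the specified maximum over the first part of the run and then anneals back down. A simplified sketch with linear ramps (the endpoint ratio and shape are illustrative assumptions; ktrain's internal schedule may differ):

```python
# Sketch of a 1cycle learning-rate schedule: linear ramp up to max_lr for the
# first half of training, then linear decay back to the starting rate.
# The start_div ratio is an illustrative assumption, not ktrain's exact value.
def onecycle_lr(step, total_steps, max_lr, start_div=10.0):
    base_lr = max_lr / start_div
    half = total_steps / 2
    if step <= half:
        frac = step / half               # ramp-up phase
        return base_lr + frac * (max_lr - base_lr)
    frac = (step - half) / half          # ramp-down phase
    return max_lr - frac * (max_lr - base_lr)
```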
from sklearn.metrics import fbeta_score
import numpy as np
import warnings
def f2(preds, targs, start=0.17, end=0.24, step=0.01):
    """Return the best samples-averaged F2 score over a sweep of thresholds."""
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")  # silence undefined-metric warnings
        return max([fbeta_score(targs, (preds > th), beta=2, average='samples')
                    for th in np.arange(start, end, step)])
y_pred = learner.model.predict_generator(val)
y_true = val.labels
f2(y_pred, y_true)
0.9249264279306654
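For reference, the samples-averaged F2 that `fbeta_score(..., average='samples')` computes at a single threshold can be written out by hand: per sample, F2 weights recall four times as heavily as precision, and the per-sample scores are averaged. A self-contained NumPy sketch (empty rows are handled with a small epsilon rather than sklearn's exact conventions):

```python
import numpy as np

# Hand-rolled, samples-averaged F-beta to illustrate what the sklearn-based
# f2() above computes at one threshold; shapes are (n_samples, n_labels).
def f2_samples(y_true, y_pred_bin, beta=2.0, eps=1e-12):
    y_true = np.asarray(y_true, dtype=float)
    y_pred_bin = np.asarray(y_pred_bin, dtype=float)
    tp = (y_true * y_pred_bin).sum(axis=1)           # true positives per sample
    precision = tp / (y_pred_bin.sum(axis=1) + eps)
    recall = tp / (y_true.sum(axis=1) + eps)
    # F-beta combines precision and recall; beta=2 favors recall 4:1.
    f = (1 + beta**2) * precision * recall / (beta**2 * precision + recall + eps)
    return float(f.mean())
```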