- 🤖 See full list of Machine Learning Experiments on GitHub
- ▶️ Interactive Demo: try this model and other machine learning experiments in action
In this experiment we will build a Multilayer Perceptron (MLP) model using TensorFlow to recognize handwritten digits.

A multilayer perceptron (MLP) is a class of feedforward artificial neural network. An MLP consists of at least three layers of nodes: an input layer, a hidden layer, and an output layer. Except for the input nodes, each node is a neuron that uses a nonlinear activation function. An MLP utilizes a supervised learning technique called backpropagation for training. Its multiple layers and nonlinear activations distinguish an MLP from a linear perceptron: it can distinguish data that is not linearly separable.
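To make the "feedforward with nonlinear activation" idea concrete, here is a minimal NumPy sketch of a single forward pass through an MLP. The weights are random and untrained; the layer sizes simply mirror the model we will build below.

import numpy as np

def relu(z):
    # Nonlinear activation: negative values become 0.
    return np.maximum(0, z)

def softmax(z):
    # Turns raw scores into probabilities that sum to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(42)
x = rng.random(784)                           # a flattened 28x28 "image"
W1 = rng.normal(scale=0.01, size=(128, 784))  # input -> hidden weights
W2 = rng.normal(scale=0.01, size=(10, 128))   # hidden -> output weights

hidden = relu(W1 @ x)          # hidden layer
output = softmax(W2 @ hidden)  # output layer: 10 "probabilities"
print(output.sum())            # 1.0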
# Select TensorFlow 2.x (this command is relevant for Colab only).
%tensorflow_version 2.x
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sn
import numpy as np
import pandas as pd
import math
import datetime
import platform
print('Python version:', platform.python_version())
print('Tensorflow version:', tf.__version__)
print('Keras version:', tf.keras.__version__)
We will use TensorBoard to debug the model later.
# Load the TensorBoard notebook extension.
# %reload_ext tensorboard
%load_ext tensorboard
# Clear any logs from previous runs.
!rm -rf ./.logs/
The training dataset consists of 60000 28x28px images of handwritten digits from 0 to 9. The test dataset consists of 10000 28x28px images.
mnist_dataset = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist_dataset.load_data()
print('x_train:', x_train.shape)
print('y_train:', y_train.shape)
print('x_test:', x_test.shape)
print('y_test:', y_test.shape)
Here is what each image in the dataset looks like. It is a 28x28 matrix of integers (from 0 to 255). Each integer represents the color of a pixel.
pd.DataFrame(x_train[0])
This matrix of numbers may be drawn as follows:
plt.imshow(x_train[0], cmap=plt.cm.binary)
plt.show()
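We may also confirm the raw pixel type and value range directly:

# Raw MNIST pixels are 8-bit unsigned integers in the [0, 255] range.
print('dtype:', x_train.dtype)
print('min:', x_train.min(), 'max:', x_train.max())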
Let's print some more training examples to get a feel for how the digits were written.
numbers_to_display = 25
num_cells = math.ceil(math.sqrt(numbers_to_display))
plt.figure(figsize=(10,10))
for i in range(numbers_to_display):
    plt.subplot(num_cells, num_cells, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(x_train[i], cmap=plt.cm.binary)
    plt.xlabel(y_train[i])
plt.show()
Here we're just scaling the pixel values from the [0, 255] range to the [0, 1] range.
x_train_normalized = x_train / 255
x_test_normalized = x_test / 255
with pd.option_context('display.float_format', '{:,.2f}'.format):
display(pd.DataFrame(x_train_normalized[0]))
Let's see how the digits look after normalization. We expect them to look similar to the originals.
plt.imshow(x_train_normalized[0], cmap=plt.cm.binary)
plt.show()
We will use a Sequential Keras model with 4 layers:

- a Flatten input layer that converts each 28x28 image into a vector of 784 pixels,
- a Dense hidden layer with 128 neurons and ReLU activation,
- a Dense hidden layer with 128 neurons and ReLU activation,
- a Dense output layer with 10 Softmax outputs. The output represents the network's guess: the 0-th output is the probability that the input digit is 0, the 1-st output is the probability that the input digit is 1, and so on.

In this example we will use the kernel_regularizer parameter of the layers to control overfitting of the model. Another common approach to fighting overfitting is to use dropout layers (e.g. tf.keras.layers.Dropout(0.2)); see the sketch below.
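For illustration, here is a sketch of how the same architecture might look with dropout layers instead of L2 regularization (an alternative we won't train in this notebook):

# A dropout-based alternative to the L2-regularized model below
# (shown for comparison only, we do not train this one).
dropout_model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),  # randomly drops 20% of activations
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax'),
])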
model = tf.keras.models.Sequential()
# Input layer: flatten each 28x28 image into a vector of 784 pixels.
model.add(tf.keras.layers.Flatten(input_shape=x_train_normalized.shape[1:]))
# First hidden layer.
model.add(tf.keras.layers.Dense(
units=128,
activation=tf.keras.activations.relu,
kernel_regularizer=tf.keras.regularizers.l2(0.002)
))
# Second hidden layer.
model.add(tf.keras.layers.Dense(
units=128,
activation=tf.keras.activations.relu,
kernel_regularizer=tf.keras.regularizers.l2(0.002)
))
# Output layer.
model.add(tf.keras.layers.Dense(
units=10,
activation=tf.keras.activations.softmax
))
Here is our model summary so far.
model.summary()
In order to plot the model, graphviz should be installed. On macOS it may be installed using brew: brew install graphviz.
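plot_model also relies on the pydot Python package; in a notebook it may be installed inline (assuming a pip-based environment):

# pydot is required by tf.keras.utils.plot_model in addition to the
# graphviz system package.
!pip install pydot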
tf.keras.utils.plot_model(
model,
show_shapes=True,
show_layer_names=True,
)
adam_optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(
optimizer=adam_optimizer,
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy']
)
log_dir=".logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
training_history = model.fit(
x_train_normalized,
y_train,
epochs=10,
validation_data=(x_test_normalized, y_test),
callbacks=[tensorboard_callback]
)
Let's see how the loss function changed during training. We expect it to get smaller and smaller with every epoch.
plt.xlabel('Epoch Number')
plt.ylabel('Loss')
plt.plot(training_history.history['loss'], label='training set')
plt.plot(training_history.history['val_loss'], label='test set')
plt.legend()
plt.xlabel('Epoch Number')
plt.ylabel('Accuracy')
plt.plot(training_history.history['accuracy'], label='training set')
plt.plot(training_history.history['val_accuracy'], label='test set')
plt.legend()
We need to compare the accuracy of our model on the training set and on the test set. We expect the model to perform similarly on both sets. If the performance on the test set is poor compared to the training set, it is an indicator that the model is overfitted and that we have a "high variance" issue.
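As a quick numeric check (a sketch based on the history object collected by model.fit() above), we may compare the final training and validation accuracies before running the full evaluation:

# A rough overfitting indicator: a large positive gap between training
# and validation accuracy suggests high variance.
final_train_accuracy = training_history.history['accuracy'][-1]
final_val_accuracy = training_history.history['val_accuracy'][-1]
print('Accuracy gap (train - validation): {:.4f}'.format(
    final_train_accuracy - final_val_accuracy
))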
train_loss, train_accuracy = model.evaluate(x_train_normalized, y_train, verbose=0)
print('Training loss: ', train_loss)
print('Training accuracy: ', train_accuracy)
validation_loss, validation_accuracy = model.evaluate(x_test_normalized, y_test, verbose=0)
print('Validation loss: ', validation_loss)
print('Validation accuracy: ', validation_accuracy)
We will save the entire model to an HDF5 file. The .h5 extension indicates that the model should be saved in Keras format as an HDF5 file. To use this model on the front-end we will convert it (later in this notebook) to a format that JavaScript understands (a tfjs_layers_model with .json and .bin files) using the tensorflowjs_converter, as specified in the main README.
model_name = 'digits_recognition_mlp.h5'
model.save(model_name, save_format='h5')
loaded_model = tf.keras.models.load_model(model_name)
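As a quick sanity check (a sketch), we may verify that the restored model produces the same predictions as the model we've just trained:

# The loaded model should behave identically to the in-memory one.
original_predictions = model.predict(x_test_normalized[:10])
restored_predictions = loaded_model.predict(x_test_normalized[:10])
print('Models match:', np.allclose(original_predictions, restored_predictions))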
To use the model that we've just trained for digits recognition, we need to call the predict() method.
predictions_one_hot = loaded_model.predict(x_test_normalized)
print('predictions_one_hot:', predictions_one_hot.shape)
Each prediction consists of 10 probabilities (one for each digit from 0 to 9). We need to pick the digit with the highest probability, since this is the digit our model is most confident about.
# Predictions in form of one-hot vectors (arrays of probabilities).
pd.DataFrame(predictions_one_hot)
# Let's extract the predictions with the highest probabilities to see which digits have actually been recognized.
predictions = np.argmax(predictions_one_hot, axis=1)
pd.DataFrame(predictions)
So our model is predicting that the first example from the test set is 7.
print(predictions[0])
Let's print the first image from the test set to see whether the model's prediction is correct.
plt.imshow(x_test_normalized[0], cmap=plt.cm.binary)
plt.show()
We see that our model made a correct prediction and successfully recognized the digit 7. Let's print some more test examples with their corresponding predictions to see how the model performs and where it makes mistakes.
numbers_to_display = 196
num_cells = math.ceil(math.sqrt(numbers_to_display))
plt.figure(figsize=(15, 15))
for plot_index in range(numbers_to_display):
    predicted_label = predictions[plot_index]
    plt.subplot(num_cells, num_cells, plot_index + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    color_map = 'Greens' if predicted_label == y_test[plot_index] else 'Reds'
    plt.imshow(x_test_normalized[plot_index], cmap=color_map)
    plt.xlabel(predicted_label)
plt.subplots_adjust(hspace=1, wspace=0.5)
plt.show()
The confusion matrix shows which numbers are recognized well by the model and which numbers it usually confuses. You may see that the model performs really well, but sometimes (28 times out of 10000) it confuses the number 5 with 3, or the number 2 with 3.
confusion_matrix = tf.math.confusion_matrix(y_test, predictions)
f, ax = plt.subplots(figsize=(9, 7))
sn.heatmap(
confusion_matrix,
annot=True,
linewidths=.5,
fmt="d",
square=True,
ax=ax
)
plt.show()
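To read the biggest confusion off the matrix programmatically (a minimal sketch; confusion_matrix is a TensorFlow tensor, so we convert it to NumPy first):

# Zero out the diagonal (correct predictions) and find the most
# frequent remaining (true label, predicted label) pair.
errors_only = confusion_matrix.numpy().copy()
np.fill_diagonal(errors_only, 0)
true_label, predicted_label = np.unravel_index(
    np.argmax(errors_only), errors_only.shape
)
print('Most frequent confusion: {} predicted as {} ({} times)'.format(
    true_label, predicted_label, errors_only[true_label, predicted_label]
))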
TensorBoard is a tool for providing the measurements and visualizations needed during the machine learning workflow. It enables tracking experiment metrics like loss and accuracy, visualizing the model graph, projecting embeddings to a lower dimensional space, and much more.
%tensorboard --logdir .logs/fit
To use this model on the web we need to convert it into a format that TensorFlow.js understands. To do so we may use the tfjs-converter as follows:
tensorflowjs_converter --input_format keras \
./experiments/digits_recognition_mlp/digits_recognition_mlp.h5 \
./demos/public/models/digits_recognition_mlp
You may find this experiment in the Demo app and play around with it right in your browser to see how the model performs in real life.