In the videos, you looked at how to improve Fashion MNIST using convolutions. For this exercise, see if you can improve MNIST to 99.8% accuracy or more using only a single convolutional layer and a single MaxPooling2D layer. You should stop training once the accuracy goes above this amount. It should happen in fewer than 20 epochs, so it's OK to hard-code the number of epochs for training, but your training must end once it hits the above metric. If it doesn't, then you'll need to redesign your layers.
I've started the code for you -- you need to finish it!
When 99.8% accuracy has been hit, you should print out the string "Reached 99.8% accuracy so cancelling training!"
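The starter code below does not yet implement this stop condition. One way to do it is with a Keras callback passed to model.fit via the callbacks argument; a minimal sketch is shown here (the class name StopAt998 and the metric-key handling are assumptions, not part of the starter code).
import tensorflow as tf

class StopAt998(tf.keras.callbacks.Callback):
    # Hypothetical helper: stop training once training accuracy reaches 99.8%.
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # TF 1.14 reports training accuracy under 'acc'; TF 2.x uses 'accuracy'.
        if logs.get('acc', logs.get('accuracy', 0)) >= 0.998:
            print("Reached 99.8% accuracy so cancelling training!")
            self.model.stop_training = True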
import tensorflow as tf
from os import getcwd
# DO NOT CHANGE THE LINE BELOW. If you are developing in a local
# environment, then grab mnist.npz from the Coursera Jupyter Notebook
# and place it inside a local folder and edit the path to that location
path = f"{getcwd()}/../tmp2/mnist.npz"
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
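ConfigProto and Session are TensorFlow 1.x APIs (the output below shows 1.14.0). On TensorFlow 2.x those two lines would fail; a rough local-setup equivalent of allow_growth, left commented out here as a sketch and an assumption rather than part of the graded notebook, is:
# TF 2.x only (sketch): enable GPU memory growth instead of ConfigProto/Session.
# for gpu in tf.config.experimental.list_physical_devices('GPU'):
#     tf.config.experimental.set_memory_growth(gpu, True)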
# GRADED FUNCTION: train_mnist_conv
def train_mnist_conv():
    # Please write your code only where you are indicated.
    # Please do not remove the model-fitting inline comments.
    # YOUR CODE STARTS HERE
    print(tf.__version__)
    # YOUR CODE ENDS HERE
    mnist = tf.keras.datasets.mnist
    (training_images, training_labels), (test_images, test_labels) = mnist.load_data(path=path)
    # YOUR CODE STARTS HERE
    # Add a channels dimension and normalize pixel values to [0, 1].
    training_images = training_images.reshape(60000, 28, 28, 1)
    training_images = training_images / 255.0
    test_images = test_images.reshape(10000, 28, 28, 1)
    test_images = test_images / 255.0
    # YOUR CODE ENDS HERE
    model = tf.keras.models.Sequential([
        # YOUR CODE STARTS HERE
        tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
        # YOUR CODE ENDS HERE
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    # model fitting
    history = model.fit(
        # YOUR CODE STARTS HERE
        # To end training once accuracy passes 99.8%, a callback such as the StopAt998
        # sketch above could be passed via callbacks=[StopAt998()]; the run recorded
        # below trained for the full 18 epochs without it.
        training_images, training_labels, epochs=18
        # YOUR CODE ENDS HERE
    )
    # model fitting
    return history.epoch, history.history['acc'][-1]
_, _ = train_mnist_conv()
1.14.0
Epoch 1/18
60000/60000 [==============================] - 14s 227us/sample - loss: 0.1244 - acc: 0.9620
Epoch 2/18
60000/60000 [==============================] - 13s 222us/sample - loss: 0.0412 - acc: 0.9872
Epoch 3/18
60000/60000 [==============================] - 13s 225us/sample - loss: 0.0281 - acc: 0.9914
Epoch 4/18
60000/60000 [==============================] - 13s 222us/sample - loss: 0.0202 - acc: 0.9934
Epoch 5/18
60000/60000 [==============================] - 14s 227us/sample - loss: 0.0148 - acc: 0.9952
Epoch 6/18
60000/60000 [==============================] - 13s 225us/sample - loss: 0.0125 - acc: 0.9959
Epoch 7/18
60000/60000 [==============================] - 14s 225us/sample - loss: 0.0091 - acc: 0.9969
Epoch 8/18
60000/60000 [==============================] - 14s 225us/sample - loss: 0.0077 - acc: 0.9974
Epoch 9/18
60000/60000 [==============================] - 14s 228us/sample - loss: 0.0079 - acc: 0.9974
Epoch 10/18
60000/60000 [==============================] - 14s 228us/sample - loss: 0.0065 - acc: 0.9980
Epoch 11/18
60000/60000 [==============================] - 14s 230us/sample - loss: 0.0046 - acc: 0.9985
Epoch 12/18
60000/60000 [==============================] - 13s 220us/sample - loss: 0.0054 - acc: 0.9982
Epoch 13/18
60000/60000 [==============================] - 14s 227us/sample - loss: 0.0032 - acc: 0.9990
Epoch 14/18
60000/60000 [==============================] - 13s 222us/sample - loss: 0.0046 - acc: 0.9985
Epoch 15/18
60000/60000 [==============================] - 13s 223us/sample - loss: 0.0042 - acc: 0.9986
Epoch 16/18
60000/60000 [==============================] - 14s 227us/sample - loss: 0.0047 - acc: 0.9986
Epoch 17/18
60000/60000 [==============================] - 14s 227us/sample - loss: 0.0032 - acc: 0.9991
Epoch 18/18
60000/60000 [==============================] - 13s 223us/sample - loss: 0.0033 - acc: 0.9990
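Note that the model above uses two Conv2D/MaxPooling2D pairs, whereas the exercise text asks for a single convolutional layer and a single MaxPooling2D. A sketch of an architecture matching that wording is shown below; whether it reaches 99.8% training accuracy within 20 epochs is not verified here and may need tuning (for example more filters or Dense units).
# Sketch only (assumption, not the graded solution above): one Conv2D + one MaxPooling2D.
single_conv_model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
single_conv_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
Since the test set is already reshaped and normalized inside train_mnist_conv, a model.evaluate(test_images, test_labels) call could also be added there to check generalization, although the grader only looks at the training accuracy.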
# Now click the 'Submit Assignment' button above.
# Once that is complete, please run the following two cells to save your work and close the notebook
%%javascript
// Save the notebook
IPython.notebook.save_checkpoint();
%%javascript
// Shut down the notebook kernel session, then close the browser window
IPython.notebook.session.delete();
window.onbeforeunload = null;
setTimeout(function() { window.close(); }, 1000);