Duke Community Standard: By typing your name below, you are certifying that you have adhered to the Duke Community Standard in completing this assignment.
Name: [YOUR NAME HERE]
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# Import data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Helper functions for creating weight variables
def weight_variable(shape):
    """weight_variable generates a weight variable of a given shape."""
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    """bias_variable generates a bias variable of a given shape."""
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
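As a side note, `tf.truncated_normal` differs from a plain normal draw: samples falling more than two standard deviations from the mean are redrawn, which keeps all initial weights small. A rough numpy sketch of that behavior (the rejection-resampling loop here is illustrative, not TensorFlow's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_normal(shape, stddev=0.1):
    # Draw normal samples, then redraw any that fall more than
    # 2 standard deviations from the mean (as tf.truncated_normal does).
    out = rng.normal(0.0, stddev, size=shape)
    mask = np.abs(out) > 2 * stddev
    while mask.any():
        out[mask] = rng.normal(0.0, stddev, size=int(mask.sum()))
        mask = np.abs(out) > 2 * stddev
    return out

w = truncated_normal((784, 500))
print(np.abs(w).max())  # never exceeds 2 * 0.1
```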
And here's the forward pass of the computation graph from the completed TensorFlow MLP assignment:
# Model Inputs
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
# Define the graph
# First fully connected layer
W_fc1 = weight_variable([784, 500])
b_fc1 = bias_variable([500])
# h_fc1 = tf.nn.sigmoid(tf.matmul(x, W_fc1) + b_fc1)
h_fc1 = tf.nn.relu(tf.matmul(x, W_fc1) + b_fc1)
# Second fully connected layer
W_fc2 = weight_variable([500, 10])
b_fc2 = bias_variable([10])
y_mlp = tf.matmul(h_fc1, W_fc2) + b_fc2
# Loss
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_mlp))
# Evaluation
correct_prediction = tf.equal(tf.argmax(y_mlp, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
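To make the loss and evaluation ops concrete, here is a small numpy sketch of what softmax cross-entropy and arg-max accuracy compute on a toy batch (the logits and labels below are made up purely for illustration):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy batch: 3 examples, 4 classes
logits = np.array([[2.0, 1.0, 0.1, 0.0],
                   [0.0, 3.0, 0.2, 0.1],
                   [1.0, 0.5, 2.5, 0.3]])
labels = np.eye(4)[[0, 1, 1]]  # one-hot; the last example is deliberately misclassified

# Cross entropy: mean over the batch of -log p(correct class)
cross_entropy = -np.mean(np.sum(labels * np.log(softmax(logits)), axis=1))

# Accuracy: fraction of examples where the arg-max prediction matches the label
accuracy = np.mean(logits.argmax(axis=1) == labels.argmax(axis=1))
print(accuracy)  # 2 of 3 correct
```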
Instead of being handed a single optimizer, though, let's try out a few. Below are optimizers implementing stochastic gradient descent (SGD), SGD with momentum, and adaptive moment estimation (ADAM). Try out different parameter settings (e.g., the learning rate) for each of them.
# Optimizers: Try out a few different parameters for SGD and SGD momentum
train_step_SGD = tf.train.GradientDescentOptimizer(learning_rate=**PICK_ONE**).minimize(cross_entropy)
train_step_momentum = tf.train.MomentumOptimizer(learning_rate=**PICK_ONE**, momentum=**mass_x_velocity**).minimize(cross_entropy)
train_step_ADAM = tf.train.AdamOptimizer().minimize(cross_entropy)
# Op for initializing all variables
initialize_all = tf.global_variables_initializer()
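If the momentum hyperparameter is unfamiliar: it scales a running "velocity" that accumulates past gradients, so updates build up speed along directions where gradients are consistent. A minimal numpy sketch of the two update rules on a toy 1-D quadratic (the learning rate and momentum values here are illustrative, not recommendations):

```python
import numpy as np

# 1-D quadratic f(w) = 0.5 * w**2, so grad f(w) = w; the minimum is at w = 0.
grad = lambda w: w

def sgd(w, lr=0.1, steps=50):
    # Plain SGD: step directly against the gradient.
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

def sgd_momentum(w, lr=0.1, momentum=0.9, steps=50):
    # Momentum: the velocity v accumulates a decaying sum of past gradients.
    v = 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(w)
        w = w + v
    return w

print(sgd(5.0), sgd_momentum(5.0))  # both approach the minimum at 0
```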
Because we'll be repeating training a few times, let's move our training regimen into a function. Note that we pass the optimization algorithm we're running as an argument. In addition to printing the validation accuracy during training and the final test accuracy, we'll also return the list of accuracies at each validation step and the list of training losses at each iteration.
def train_MLP(train_step_optimizer, iterations=4000):
    with tf.Session() as sess:
        # Initialize (or reset) all variables
        sess.run(initialize_all)

        # Initialize lists to track losses and validation accuracies
        valid_accs = []
        losses = []
        for i in range(iterations):
            # Validate every 250 iterations
            if i % 250 == 0:
                validation_accuracy = 0
                for v in range(10):
                    batch = mnist.validation.next_batch(50)
                    validation_accuracy += (1/10) * accuracy.eval(feed_dict={x: batch[0], y_: batch[1]})
                print('step %d, validation accuracy %g' % (i, validation_accuracy))
                valid_accs.append(validation_accuracy)

            # Train
            batch = mnist.train.next_batch(50)
            loss, _ = sess.run([cross_entropy, train_step_optimizer], feed_dict={x: batch[0], y_: batch[1]})
            losses.append(loss)

        print('test accuracy %g' % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
    return valid_accs, losses
Finally, let's train the MLP using all three optimizers and compare the results:
print("SGD:")
valid_accs_SGD, losses_SGD = train_MLP(train_step_SGD)
print("Momentum:")
valid_accs_momentum, losses_momentum = train_MLP(train_step_momentum)
print("ADAM:")
valid_accs_ADAM, losses_ADAM = train_MLP(train_step_ADAM)
Plotting the results:
fig, ax = plt.subplots(1, 2)
fig.tight_layout()
ax[0].plot(valid_accs_SGD)
ax[0].plot(valid_accs_momentum)
ax[0].plot(valid_accs_ADAM)
ax[0].set_ylabel('Validation Accuracy')
ax[0].legend(['SGD', 'Momentum', 'ADAM'], loc='lower right')
ax[1].plot(losses_SGD)
ax[1].plot(losses_momentum)
ax[1].plot(losses_ADAM)
ax[1].set_ylabel('Cross Entropy')
ax[1].legend(['SGD', 'Momentum', 'ADAM'], loc='upper right')
# ax[1].set_ylim([0,1.5]) # <- Use this to change y-axis limits
How do SGD, SGD with momentum, and ADAM compare in performance? In ease of tuning their parameters?
[Your answer here]
Adapt the MLP code above to train a CNN instead (hint: you can adapt the CNN code from 01D_MLP_CNN_Assignment_Solutions.ipynb, just as was done here for the MLP), and again compare the optimizers. The CNN's more complex parameter space means the differences between the optimizers should be much more significant.