Now that we have characterized multilayer perceptrons (MLPs) mathematically, let us try to implement one ourselves. To compare against our previous results achieved with softmax regression (Section 3.6), we will continue to work with the Fashion-MNIST image classification dataset (Section 3.5).
```python
import torch
from torch import nn
from d2l import torch as d2l

batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
```
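As a quick sanity check (our addition, not part of the original recipe), we can peek at a single minibatch to confirm the tensor shapes we will be working with:

```python
# Inspect one minibatch: images are (batch, channel, height, width),
# labels are a vector of integer class indices.
X, y = next(iter(train_iter))
print(X.shape, y.shape)  # torch.Size([256, 1, 28, 28]) torch.Size([256])
```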
Recall that Fashion-MNIST contains 10 classes, and that each image consists of a $28 \times 28 = 784$ grid of grayscale pixel values. Again, we will disregard the spatial structure among the pixels for now, so we can think of this as simply a classification dataset with 784 input features and 10 classes. To begin, we will implement an MLP with one hidden layer and 256 hidden units. Note that we can regard both of these quantities as hyperparameters. Typically, we choose layer widths in powers of 2, which tend to be computationally efficient because of how memory is allocated and addressed in hardware.
Again, we will represent our parameters with several tensors. Note that for every layer, we must keep track of one weight matrix and one bias vector. As always, we allocate memory for the gradients of the loss with respect to these parameters.
```python
num_inputs, num_outputs, num_hiddens = 784, 10, 256

W1 = nn.Parameter(
    torch.randn(num_inputs, num_hiddens, requires_grad=True) * 0.01)
b1 = nn.Parameter(torch.zeros(num_hiddens, requires_grad=True))
W2 = nn.Parameter(
    torch.randn(num_hiddens, num_outputs, requires_grad=True) * 0.01)
b2 = nn.Parameter(torch.zeros(num_outputs, requires_grad=True))

params = [W1, b1, W2, b2]
```
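Since shape mismatches are the most common bug in hand-rolled layers, a brief check of the parameter shapes (our addition) can save debugging time later:

```python
# Each layer contributes one weight matrix and one bias vector.
for param in params:
    print(param.shape)
# torch.Size([784, 256]), torch.Size([256]),
# torch.Size([256, 10]), torch.Size([10])
```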
To make sure we know how everything works, we will implement the ReLU activation ourselves using the maximum function rather than invoking the built-in `relu` function directly.
```python
def relu(X):
    a = torch.zeros_like(X)
    return torch.max(X, a)
```
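A one-line test (our addition) confirms the intended behavior: negative entries are clamped to zero while positive entries pass through unchanged.

```python
# Elementwise maximum with zero implements ReLU.
X = torch.tensor([[-2.0, 0.0, 3.0]])
print(relu(X))  # tensor([[0., 0., 3.]])
```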
Because we are disregarding spatial structure, we reshape each two-dimensional image into a flat vector of length `num_inputs`. Finally, we implement our model with just a few lines of code.
```python
def net(X):
    X = X.reshape((-1, num_inputs))
    H = relu(X @ W1 + b1)  # Here '@' stands for matrix multiplication
    return (H @ W2 + b2)
```
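Before training, it is worth verifying (our addition) that a fake minibatch flows through the network and produces one ten-dimensional vector of logits per image:

```python
# Two random "images" in, two rows of 10 class logits out.
X = torch.randn(2, 1, 28, 28)
print(net(X).shape)  # torch.Size([2, 10])
```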
To ensure numerical stability, and because we already implemented the softmax function from scratch (Section 3.6), we leverage the integrated function from high-level APIs for calculating the softmax and cross-entropy loss. Recall our earlier discussion of these intricacies in Section 3.7.2. We encourage the interested reader to examine the source code for the loss function to deepen their knowledge of implementation details.
```python
loss = nn.CrossEntropyLoss()
```
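Note that `nn.CrossEntropyLoss` consumes the raw logits together with integer class labels and applies the log-softmax internally for numerical stability. A tiny illustration (our addition, with arbitrary labels):

```python
# The loss takes unnormalized logits and integer labels,
# returning (by default) the mean loss over the minibatch.
logits = net(torch.randn(2, 1, 28, 28))
labels = torch.tensor([3, 7])
print(loss(logits, labels))  # a scalar tensor
```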
Fortunately, the training loop for MLPs is exactly the same as for softmax regression. Leveraging the `d2l` package again, we call the `train_ch3` function (see Section 3.6), setting the number of epochs to 10 and the learning rate to 0.1.
```python
num_epochs, lr = 10, 0.1
updater = torch.optim.SGD(params, lr=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, updater)
```
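In case the `d2l` helper is unavailable, the following sketch (our stand-in, not the `d2l` implementation) performs the same parameter updates, assuming `train_iter` yields `(image, label)` minibatches:

```python
# A minimal hand-rolled training loop, equivalent in spirit to train_ch3.
for epoch in range(num_epochs):
    for X, y in train_iter:
        l = loss(net(X), y)   # mean cross-entropy over the minibatch
        updater.zero_grad()   # clear gradients from the previous step
        l.backward()          # backpropagate through both layers
        updater.step()        # SGD parameter update
    print(f'epoch {epoch + 1} done')
```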
To evaluate the learned model, we apply it to some test data.
```python
d2l.predict_ch3(net, test_iter)
```
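Likewise, if the plotting helper is unavailable, test accuracy can be computed directly; the following is a minimal sketch (our addition):

```python
# Fraction of test images whose highest-scoring logit matches the label.
correct, total = 0, 0
with torch.no_grad():
    for X, y in test_iter:
        preds = net(X).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
print(f'test accuracy: {correct / total:.3f}')
```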
Change the value of the hyperparameter `num_hiddens` and see how this hyperparameter influences your results. Determine the best value of this hyperparameter, keeping all others constant.
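One possible scaffold for such a sweep (a sketch; the candidate widths are arbitrary choices of ours, and each iteration rebinds the global parameters that `net` reads at call time):

```python
# Re-initialize the parameters for each candidate width so runs are
# comparable; nn.Parameter sets requires_grad=True by default.
for num_hiddens in [64, 128, 256, 512]:
    W1 = nn.Parameter(torch.randn(num_inputs, num_hiddens) * 0.01)
    b1 = nn.Parameter(torch.zeros(num_hiddens))
    W2 = nn.Parameter(torch.randn(num_hiddens, num_outputs) * 0.01)
    b2 = nn.Parameter(torch.zeros(num_outputs))
    updater = torch.optim.SGD([W1, b1, W2, b2], lr=lr)
    d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, updater)
```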