# Exercise 5.2 - Solution

## Interpolation

In this task, we implement a simple neural network (NN) to learn a complicated function.

In [1]:
import numpy as np
from tensorflow import keras
import matplotlib.pyplot as plt

layers = keras.layers


### Generation of data

In [2]:
def some_complicated_function(x):
    return (
        (np.abs(x)) ** 0.5
        + 0.1 * x
        + 0.01 * x ** 2
        + 1
        - np.sin(x)
        + 0.5 * np.exp(x / 10.0)
    ) / (0.5 + np.abs(np.cos(x)))


Let's simulate the training data.

In [3]:
N_train = 10 ** 4  # number of training samples
# Note: "[:, np.newaxis]" reshapes array to (N,1) as required by our DNN (we input one feature per sample)
xtrain = np.random.uniform(-10, 10, N_train)[:, np.newaxis]
ytrain = some_complicated_function(xtrain) + np.random.randn(*xtrain.shape)  # train data includes some noise; the noise must have shape (N, 1) to match the targets

In [4]:
print("xtrain.shape", xtrain.shape)
print("ytrain.shape", ytrain.shape)

xtrain.shape (10000, 1)
ytrain.shape (10000, 1)
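The shape bookkeeping here matters: adding a `(N,)` noise array to a `(N, 1)` target silently broadcasts to `(N, N)` instead of raising an error. A minimal sketch of the pitfall (toy arrays for illustration):

```python
import numpy as np

a = np.zeros((4, 1))    # column vector, like our targets
b = np.random.randn(4)  # 1-D noise array

print((a + b).shape)                  # (4, 4) -- broadcasting blows the shape up
print((a + b[:, np.newaxis]).shape)   # (4, 1) -- shapes match, as intended
```

This is why the noise term must be generated with the same `(N, 1)` shape as `xtrain`.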


Simulate test data

In [5]:
N_test = 10000  # number of testing samples
xtest = np.linspace(-10, 10, N_test)
ytest = some_complicated_function(xtest)

In [6]:
print("xtest.shape", xtest.shape)
print("ytest.shape", ytest.shape)

xtest.shape (10000,)
ytest.shape (10000,)


### Define Model

In this case, we use a simple network with five dense layers and the ReLU activation function. We further add parameter norm penalties (L1 and L2) as a regularization strategy.

In [7]:
nb_nodes = 32
nb_layers = 4
activation = "relu"
reg_strategy = keras.regularizers.l1_l2(l1=0.01, l2=0.01)  # use L1 and L2 regularization

model = keras.models.Sequential(name="1Dfit")
model.add(layers.Dense(nb_nodes, activation=activation, kernel_regularizer=reg_strategy,
                       input_dim=xtrain.shape[1]))

for i in range(nb_layers - 1):
    model.add(layers.Dense(nb_nodes, activation=activation, kernel_regularizer=reg_strategy))

model.add(layers.Dense(1))  # linear output layer for the regression target

print(model.summary())

Model: "1Dfit"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense (Dense)                (None, 32)                64
_________________________________________________________________
dense_1 (Dense)              (None, 32)                1056
_________________________________________________________________
dense_2 (Dense)              (None, 32)                1056
_________________________________________________________________
dense_3 (Dense)              (None, 32)                1056
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 33
=================================================================
Total params: 3,265
Trainable params: 3,265
Non-trainable params: 0
_________________________________________________________________
None
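As a reminder of what `keras.regularizers.l1_l2` contributes to the objective: each kernel adds a penalty of l1 · Σ|w| + l2 · Σw² to the training loss. A minimal numpy sketch with a hypothetical 2×2 weight matrix:

```python
import numpy as np

def l1_l2_penalty(w, l1=0.01, l2=0.01):
    """Penalty that one weight matrix (kernel) adds to the training loss."""
    return l1 * np.sum(np.abs(w)) + l2 * np.sum(w ** 2)

w = np.array([[0.5, -1.0], [2.0, 0.0]])  # hypothetical kernel, for illustration only
print(l1_l2_penalty(w))  # 0.01 * 3.5 + 0.01 * 5.25 ≈ 0.0875
```

The L1 term pushes individual weights toward exactly zero (sparsity), while the L2 term shrinks all weights smoothly; using both combines the two effects.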


### Compile the model (set an objective and choose an optimizer)

In [8]:
adam = keras.optimizers.Adam(learning_rate=0.001)
model.compile(loss="mse", optimizer=adam)  # mean squared error: the standard objective for regression


### Train the model

In [9]:
epochs = 100
save_period = 20  # save the weights every `save_period` epochs
steps_per_epoch = int(np.ceil(N_train / 64))  # 157 batches per epoch at batch size 64

# Note: an integer `save_freq` is counted in batches, not epochs,
# so convert the epoch period into a number of batches.
chkpnt_saver = keras.callbacks.ModelCheckpoint(
    "weights-{epoch:02d}.hdf5",
    save_weights_only=True,
    save_freq=save_period * steps_per_epoch,
)

results = model.fit(
    xtrain,
    ytrain,
    batch_size=64,
    epochs=epochs,
    verbose=1,
    callbacks=[chkpnt_saver],
)

Epoch 1/100
157/157 [==============================] - 1s 5ms/step - loss: 15.2431
Epoch 2/100
157/157 [==============================] - 1s 5ms/step - loss: 6.8571
Epoch 3/100
157/157 [==============================] - 1s 5ms/step - loss: 5.8455
Epoch 4/100
157/157 [==============================] - 1s 5ms/step - loss: 5.3997
Epoch 5/100
157/157 [==============================] - 1s 5ms/step - loss: 5.0038
Epoch 6/100
157/157 [==============================] - 1s 5ms/step - loss: 4.6660
Epoch 7/100
157/157 [==============================] - 1s 5ms/step - loss: 4.5469
Epoch 8/100
157/157 [==============================] - 1s 4ms/step - loss: 4.3841
Epoch 9/100
157/157 [==============================] - 1s 4ms/step - loss: 4.2089
Epoch 10/100
157/157 [==============================] - 1s 4ms/step - loss: 4.0169
Epoch 11/100
157/157 [==============================] - 1s 4ms/step - loss: 3.9143
Epoch 12/100
157/157 [==============================] - 1s 4ms/step - loss: 3.7840
Epoch 13/100
157/157 [==============================] - 1s 4ms/step - loss: 3.8204
Epoch 14/100
157/157 [==============================] - 1s 4ms/step - loss: 3.6407
Epoch 15/100
157/157 [==============================] - 1s 4ms/step - loss: 3.5697
Epoch 16/100
157/157 [==============================] - 1s 4ms/step - loss: 3.5290
Epoch 17/100
157/157 [==============================] - 1s 4ms/step - loss: 3.5095
Epoch 18/100
157/157 [==============================] - 1s 4ms/step - loss: 3.4352
Epoch 19/100
157/157 [==============================] - 1s 4ms/step - loss: 3.3327
Epoch 20/100
157/157 [==============================] - 1s 4ms/step - loss: 3.3706
Epoch 21/100
157/157 [==============================] - 1s 4ms/step - loss: 3.3128
Epoch 22/100
157/157 [==============================] - 1s 4ms/step - loss: 3.3145
Epoch 23/100
157/157 [==============================] - 1s 4ms/step - loss: 3.2287
Epoch 24/100
157/157 [==============================] - 1s 4ms/step - loss: 3.1591
Epoch 25/100
157/157 [==============================] - 1s 4ms/step - loss: 3.2557
Epoch 26/100
157/157 [==============================] - 1s 4ms/step - loss: 3.2274
Epoch 27/100
157/157 [==============================] - 1s 4ms/step - loss: 3.1732
Epoch 28/100
157/157 [==============================] - 1s 4ms/step - loss: 3.1343
Epoch 29/100
157/157 [==============================] - 1s 4ms/step - loss: 3.1498
Epoch 30/100
157/157 [==============================] - 1s 4ms/step - loss: 3.1352
Epoch 31/100
157/157 [==============================] - 1s 4ms/step - loss: 3.0498
Epoch 32/100
157/157 [==============================] - 1s 4ms/step - loss: 3.0276
Epoch 33/100
157/157 [==============================] - 1s 4ms/step - loss: 3.0969
Epoch 34/100
157/157 [==============================] - 1s 4ms/step - loss: 3.1399
Epoch 35/100
157/157 [==============================] - 1s 4ms/step - loss: 3.0425
Epoch 36/100
157/157 [==============================] - 1s 5ms/step - loss: 3.1180
Epoch 37/100
157/157 [==============================] - 1s 5ms/step - loss: 3.1092
Epoch 38/100
157/157 [==============================] - 1s 4ms/step - loss: 3.0495
Epoch 39/100
157/157 [==============================] - 1s 4ms/step - loss: 2.9993
Epoch 40/100
157/157 [==============================] - 1s 4ms/step - loss: 2.9800
Epoch 41/100
157/157 [==============================] - 1s 4ms/step - loss: 3.0281
Epoch 42/100
157/157 [==============================] - 1s 4ms/step - loss: 2.9979
Epoch 43/100
157/157 [==============================] - 1s 4ms/step - loss: 2.9808
Epoch 44/100
157/157 [==============================] - 1s 4ms/step - loss: 3.0271
Epoch 45/100
157/157 [==============================] - 1s 4ms/step - loss: 2.9723
Epoch 46/100
157/157 [==============================] - 1s 5ms/step - loss: 2.9811
Epoch 47/100
157/157 [==============================] - 1s 4ms/step - loss: 2.9228
Epoch 48/100
157/157 [==============================] - 1s 4ms/step - loss: 2.9430
Epoch 49/100
157/157 [==============================] - 1s 5ms/step - loss: 2.9944
Epoch 50/100
157/157 [==============================] - 1s 4ms/step - loss: 2.8671
Epoch 51/100
157/157 [==============================] - 1s 4ms/step - loss: 2.8856
Epoch 52/100
157/157 [==============================] - 1s 4ms/step - loss: 2.9121
Epoch 53/100
157/157 [==============================] - 1s 4ms/step - loss: 2.9119
Epoch 54/100
157/157 [==============================] - 1s 4ms/step - loss: 2.8628
Epoch 55/100
157/157 [==============================] - 1s 4ms/step - loss: 2.8097
Epoch 56/100
157/157 [==============================] - 1s 4ms/step - loss: 2.9499
Epoch 57/100
157/157 [==============================] - 1s 4ms/step - loss: 2.8291
Epoch 58/100
157/157 [==============================] - 1s 4ms/step - loss: 2.8037
Epoch 59/100
157/157 [==============================] - 1s 4ms/step - loss: 2.8379
Epoch 60/100
157/157 [==============================] - 1s 4ms/step - loss: 2.8073
Epoch 61/100
157/157 [==============================] - 1s 4ms/step - loss: 2.8789
Epoch 62/100
157/157 [==============================] - 1s 4ms/step - loss: 2.7798
Epoch 63/100
157/157 [==============================] - 1s 4ms/step - loss: 2.7955
Epoch 64/100
157/157 [==============================] - 1s 4ms/step - loss: 2.6759
Epoch 65/100
157/157 [==============================] - 1s 4ms/step - loss: 2.7420
Epoch 66/100
157/157 [==============================] - 1s 4ms/step - loss: 2.6766
Epoch 67/100
157/157 [==============================] - 1s 4ms/step - loss: 2.6872
Epoch 68/100
157/157 [==============================] - 1s 4ms/step - loss: 2.6192
Epoch 69/100
157/157 [==============================] - 1s 4ms/step - loss: 2.4959
Epoch 70/100
157/157 [==============================] - 1s 4ms/step - loss: 2.4935
Epoch 71/100
157/157 [==============================] - 1s 4ms/step - loss: 2.4498
Epoch 72/100
157/157 [==============================] - 1s 4ms/step - loss: 2.3763
Epoch 73/100
157/157 [==============================] - 1s 4ms/step - loss: 2.3874
Epoch 74/100
157/157 [==============================] - 1s 4ms/step - loss: 2.3980
Epoch 75/100
157/157 [==============================] - 1s 4ms/step - loss: 2.3195
Epoch 76/100
157/157 [==============================] - 1s 4ms/step - loss: 2.2619
Epoch 77/100
157/157 [==============================] - 1s 4ms/step - loss: 2.2884
Epoch 78/100
157/157 [==============================] - 1s 4ms/step - loss: 2.2914
Epoch 79/100
157/157 [==============================] - 1s 4ms/step - loss: 2.2354
Epoch 80/100
157/157 [==============================] - 1s 4ms/step - loss: 2.2463
Epoch 81/100
157/157 [==============================] - 1s 4ms/step - loss: 2.1911
Epoch 82/100
157/157 [==============================] - 1s 4ms/step - loss: 2.1427
Epoch 83/100
157/157 [==============================] - 1s 4ms/step - loss: 2.1706
Epoch 84/100
157/157 [==============================] - 1s 4ms/step - loss: 2.1064
Epoch 85/100
157/157 [==============================] - 1s 4ms/step - loss: 2.0968
Epoch 86/100
157/157 [==============================] - 1s 4ms/step - loss: 2.1388
Epoch 87/100
157/157 [==============================] - 1s 4ms/step - loss: 2.1235
Epoch 88/100
157/157 [==============================] - 1s 4ms/step - loss: 2.1044
Epoch 89/100
157/157 [==============================] - 1s 4ms/step - loss: 2.1308
Epoch 90/100
157/157 [==============================] - 1s 4ms/step - loss: 2.0616
Epoch 91/100
157/157 [==============================] - 1s 4ms/step - loss: 2.0945
Epoch 92/100
157/157 [==============================] - 1s 4ms/step - loss: 2.0894
Epoch 93/100
157/157 [==============================] - 1s 4ms/step - loss: 2.1128
Epoch 94/100
157/157 [==============================] - 1s 3ms/step - loss: 2.0994
Epoch 95/100
157/157 [==============================] - 1s 3ms/step - loss: 2.0892
Epoch 96/100
157/157 [==============================] - 1s 3ms/step - loss: 2.0684
Epoch 97/100
157/157 [==============================] - 1s 3ms/step - loss: 2.0471
Epoch 98/100
157/157 [==============================] - 1s 3ms/step - loss: 2.0347
Epoch 99/100
157/157 [==============================] - 1s 4ms/step - loss: 2.0641
Epoch 100/100
157/157 [==============================] - 1s 4ms/step - loss: 2.0764


Compare the performance of the model at different stages of the training.

In [10]:
fig, (ax1, ax2) = plt.subplots(nrows=2, figsize=(12, 8))

ax1.plot(xtest, ytest, color="black", label="data")
saved_epochs = range(save_period, epochs + 1, save_period)

colors = [plt.cm.jet((i + 1) / float(len(saved_epochs) + 1)) for i in range(len(saved_epochs))]

for i, epoch in enumerate(saved_epochs):
    model.load_weights("weights-{:02d}.hdf5".format(epoch))  # restore the checkpoint saved after this epoch
    ypredict = model.predict(xtest[:, np.newaxis]).squeeze()
    ax1.plot(xtest, ypredict, color=colors[i], label=epoch)
    ax2.plot(epoch, results.history["loss"][epoch - 1], color=colors[i], marker="o")

ax1.set(xlabel="x", ylabel="some_complicated_function(x)", xlim=(-10, 13), title="")
ax1.grid(True)
ax1.legend(loc="upper right", title="Epochs")

ax2.plot(results.history["loss"], color="black")
ax2.set(xlabel="epoch", ylabel="loss")
ax2.grid(True)
ax2.semilogy()

plt.show()


As can be seen, the performance of the DNN improves with an increasing number of training epochs. The sharp linear segments in the prediction also nicely reveal the piecewise-linear nature of the ReLU activation function.
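This piecewise linearity can be checked numerically: a sum of ReLUs (a one-hidden-layer net with hand-picked, purely illustrative weights) has a second finite difference that vanishes everywhere except at the kinks.

```python
import numpy as np

# A tiny ReLU "network": y = sum_i a_i * relu(x - b_i) is piecewise linear,
# with kinks only at the b_i (weights chosen by hand for illustration).
relu = lambda z: np.maximum(z, 0.0)
x = np.linspace(-3, 3, 601)  # grid spacing 0.01
y = 1.0 * relu(x + 1.0) - 2.0 * relu(x) + 1.5 * relu(x - 1.0)

# The second finite difference is (numerically) zero except at the kinks:
d2 = np.diff(y, 2)
print(np.count_nonzero(np.abs(d2) > 1e-9))  # only a handful of nonzero entries, near x = -1, 0, 1
```

A deep ReLU network behaves the same way, just with many more linear pieces, which is exactly the pattern visible in the fitted curve above.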