# Exercise 4.2¶

## Linear regression¶

In this task we will design and train a linear model using Keras.

1. Complete the implementation of the LinearLayer.
2. Define a meaningful objective.
3. Implement gradient descent and train the linear model for 80 epochs.
In [ ]:
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt

layers = keras.layers


### Simulation of data¶

Let's first simulate some noisy data.

In [ ]:
np.random.seed(1904)
x = np.float32(np.linspace(-1, 1, 100)[:,np.newaxis])
y = np.float32(2 * x[:,0] + 0.3 * np.random.randn(100))
print("x.shape:", x.shape)
print("y.shape:", y.shape)

x.shape: (100, 1)
y.shape: (100,)
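
Because the data follow $y \approx 2x$ plus noise, the one-parameter least-squares fit has the closed form $w = \sum_i x_i y_i / \sum_i x_i^2$, so a well-trained weight should land near 2. As a quick sanity check (a NumPy sketch, not part of the exercise itself):

```python
import numpy as np

np.random.seed(1904)
x = np.float32(np.linspace(-1, 1, 100)[:, np.newaxis])
y = np.float32(2 * x[:, 0] + 0.3 * np.random.randn(100))

# Closed-form least-squares slope for the no-intercept model y = w * x
w_ls = np.sum(x[:, 0] * y) / np.sum(x[:, 0] ** 2)
print("least-squares w:", w_ls)
```

This value is a useful reference point to compare against the weight learned by gradient descent below.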


### Implement linear model¶

Now, we have to design a linear layer that maps from the input $x$ to the output $y$ using a single adaptive weight $w$:

$$y = w \cdot x$$

Complete the implementation of the LinearLayer by adding the linear transformation in the call function.

In [ ]:
class LinearLayer(layers.Layer):

    def __init__(self, units=1, input_dim=1):  # when initializing the layer, the weights have to be initialized
        super(LinearLayer, self).__init__()
        w_init = tf.random_normal_initializer()
        self.w = tf.Variable(initial_value=w_init(shape=(input_dim, units), dtype="float32"),
                             trainable=True)

    def call(self, inputs):  # when calling the layer, the linear transformation has to be performed
        return ...
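
As a hint for the shape arithmetic: the weight has shape `(input_dim, units)`, so the linear transformation in `call` is a matrix product mapping a `(batch, input_dim)` input to a `(batch, units)` output (in TensorFlow this would be `tf.matmul`). The same arithmetic in plain NumPy, as an illustration only:

```python
import numpy as np

batch, input_dim, units = 100, 1, 1
inputs = np.random.randn(batch, input_dim).astype("float32")
w = np.random.randn(input_dim, units).astype("float32")

# (batch, input_dim) @ (input_dim, units) -> (batch, units)
outputs = inputs @ w
print(outputs.shape)
```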


Build a model using the implemented layer.

In [ ]:
model = keras.models.Sequential()
model.add(LinearLayer(units=1, input_dim=1))

In [ ]:
model.build((None, 1))
model.summary()


### Performance before the training¶

Plot the data and the model before training.

In [ ]:
y_pred = model(x)

fig, ax = plt.subplots(1)
ax.plot(x, y, 'bo', label='data')
ax.plot(x, y_pred, 'r-', label='model')
ax.set(xlabel='$x$', ylabel='$y$')
ax.grid()
ax.legend(loc='lower right')
plt.tight_layout()


### Task 2: Define the objective function¶

Define a meaningful objective here (regression task).
Note that you can use tf.reduce_mean() to average your loss estimate over the full data set (100 points).

In [ ]:
def loss(y_true, y_pred):
    return ...
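
A standard objective for regression is the mean squared error, $\mathrm{MSE} = \frac{1}{N}\sum_i (y_i - \hat{y}_i)^2$; with `tf.reduce_mean` this would take the form `tf.reduce_mean(tf.square(y_true - y_pred))`. The same quantity in plain NumPy, as an illustration only:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])

mse = np.mean((y_true - y_pred) ** 2)  # average squared error over all points
print(mse)
```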


'Train' the linear model for 80 epochs (or iterations) with a meaningful learning rate by implementing gradient descent.
Hint: you can access the adaptive parameters via model.trainable_weights and perform the update $w \rightarrow w - z$ using w.assign_sub(z).

In [ ]:
epochs = 80  # number of epochs (the task asks for 80)
lr = ...  # learning rate

for epoch in range(epochs):

    # Record the forward pass so that gradients can be computed
    with tf.GradientTape() as tape:
        output = model(x, training=True)
        # Compute loss value
        loss_value = loss(tf.convert_to_tensor(y), output)

    # Gradients of the loss w.r.t. the adaptive parameters
    grads = tape.gradient(loss_value, model.trainable_weights)

    for weight, grad in zip(model.trainable_weights, grads):
        weight.assign_sub(...)

    print("Current loss at epoch %d: %.4f" % (epoch, float(loss_value)))
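
For intuition, the same loop can be written in plain NumPy: for the MSE objective, the gradient with respect to $w$ is $-\frac{2}{N}\sum_i x_i (y_i - w x_i)$, and each step applies $w \leftarrow w - \mathrm{lr} \cdot \nabla_w$. A self-contained sketch (the MSE loss and lr = 0.5 are assumptions, not prescribed by the exercise):

```python
import numpy as np

np.random.seed(1904)
x = np.linspace(-1, 1, 100)
y = 2 * x + 0.3 * np.random.randn(100)

w = 0.0   # initial weight
lr = 0.5  # assumed learning rate

for epoch in range(80):
    y_pred = w * x
    loss_value = np.mean((y - y_pred) ** 2)
    grad = -2.0 * np.mean(x * (y - y_pred))  # d(MSE)/dw
    w -= lr * grad                           # gradient-descent step

print("fitted w:", w)  # should end up close to the true slope 2
```

After 80 such steps the weight should be near the closed-form least-squares solution, which is the behaviour to expect from the Keras training loop above.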


### Performance of the fitted model¶

Plot the data and the model after training.

In [ ]:
fig, ax = plt.subplots(1)

y_pred = model(x)

ax.plot(x, y, 'bo', label='data')
ax.plot(x, y_pred, 'r-', label='model')
ax.set(xlabel='$x$', ylabel='$y$')
ax.grid()
ax.legend(loc='lower right')
plt.tight_layout()