In [2]:

```
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
layers = keras.layers
```

Let's first simulate some noisy data.

In [2]:

```
np.random.seed(1904)
x = np.float32(np.linspace(-1, 1, 100)[:,np.newaxis])
y = np.float32(2 * x[:,0] + 0.3 * np.random.randn(100))
print("x.shape:", x.shape)
print("y.shape:", y.shape)
```

Now we have to design a **linear layer** that maps from the input $x$ to the output $y$ using a single adaptive weight $w$.

Complete the implementation of the `LinearLayer` class by adding the linear transformation in its `call` method.

In [3]:

```
class LinearLayer(layers.Layer):

    def __init__(self, units=1, input_dim=1):
        # when initializing the layer, the weights have to be initialized
        super(LinearLayer, self).__init__()
        w_init = tf.random_normal_initializer()
        self.w = tf.Variable(
            initial_value=w_init(shape=(input_dim, units), dtype="float32"),
            trainable=True,
        )

    def call(self, inputs):
        # when calling the layer, the linear transformation is performed
        return tf.matmul(inputs, self.w)
```
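The transformation in `call` is just a matrix product. As a quick sanity check (a NumPy sketch, independent of TensorFlow), an input of shape `(100, 1)` multiplied by a weight of shape `(1, 1)` yields an output of shape `(100, 1)`:

```python
import numpy as np

# Shape check for the matmul inside call():
# (batch, input_dim) @ (input_dim, units) -> (batch, units)
x = np.linspace(-1, 1, 100, dtype=np.float32)[:, np.newaxis]  # (100, 1)
w = np.full((1, 1), 2.0, dtype=np.float32)                    # the single weight
out = x @ w
print(out.shape)  # (100, 1)
```

With `w = 2`, every output is simply twice the input, which is exactly the noiseless part of the simulated data above.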

Build a model using the implemented layer.

In [4]:

```
model = keras.models.Sequential()
model.add(LinearLayer(units=1, input_dim=1))
```

In [5]:

```
model.build((None, 1))
print(model.summary())
```

Plot the data and the model prediction before training.

In [6]:

```
y_pred = model(x)
fig, ax = plt.subplots(1)
ax.plot(x, y, 'bo', label='data')
ax.plot(x, y_pred, 'r-', label='model')
ax.set(xlabel='$x$', ylabel='$y$')
ax.grid()
ax.legend(loc='lower right')
plt.tight_layout()
```

Define a meaningful objective here (regression task). Note that you can use `tf.reduce_mean()` to average your loss estimate over the full data set (100 points).

In [7]:

```
def loss(y_true, y_pred):
    # mean squared error, averaged over the full data set
    return tf.reduce_mean((tf.squeeze(y_true) - tf.squeeze(y_pred)) ** 2)
```
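As a quick check of this definition, here is a NumPy mirror of the TensorFlow expression above: identical arrays give a loss of 0, and predictions off by a constant 1 give a loss of 1.

```python
import numpy as np

def mse(y_true, y_pred):
    # NumPy mirror of tf.reduce_mean((tf.squeeze(a) - tf.squeeze(b)) ** 2)
    return np.mean((np.squeeze(y_true) - np.squeeze(y_pred)) ** 2)

print(mse(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0])))  # 0.0
print(mse(np.zeros(4), np.ones(4)))                               # 1.0
```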

'Train' the linear model for 80 epochs (or iterations) with a meaningful learning rate and implement gradient descent.

Hint: you can access the adaptive parameters using `model.trainable_weights` and perform the update $w \rightarrow w - z$ using `w.assign_sub(z)`.

In [8]:

```
epochs = 80  # number of epochs
lr = 0.1     # learning rate

for epoch in range(epochs):
    with tf.GradientTape() as tape:
        output = model(x, training=True)
        # compute the loss value
        loss_value = loss(tf.convert_to_tensor(y), output)
    # gradient descent step: w -> w - lr * dL/dw
    grads = tape.gradient(loss_value, model.trainable_weights)
    for weight, grad in zip(model.trainable_weights, grads):
        weight.assign_sub(lr * grad)
    print("Current loss at epoch %d: %.4f" % (epoch, float(loss_value)))
```
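To see that this loop actually converges, the same gradient descent can be sketched in plain NumPy and compared against the closed-form least-squares slope $w^* = \sum_i x_i y_i / \sum_i x_i^2$. This is a sanity-check sketch on regenerated data with the same seed, not part of the exercise:

```python
import numpy as np

# regenerate the noisy data with the same seed as above
np.random.seed(1904)
x = np.linspace(-1, 1, 100)
y = 2 * x + 0.3 * np.random.randn(100)

w = 0.0   # start from zero instead of a random init
lr = 0.1
for _ in range(80):
    # gradient of mean((w*x - y)**2) with respect to w
    grad = 2 * np.mean(x * (w * x - y))
    w -= lr * grad

# closed-form least-squares solution for a single weight
w_closed = np.sum(x * y) / np.sum(x * x)
print(w, w_closed)
```

After 80 steps the iterate is very close to the closed-form slope, which in turn lies near the true value of 2 used to simulate the data.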

Plot the data and the model prediction after training.

In [9]:

```
fig, ax = plt.subplots(1)
y_pred = model(x)
ax.plot(x, y, 'bo', label='data')
ax.plot(x, y_pred, 'r-', label='model')
ax.set(xlabel='$x$', ylabel='$y$')
ax.grid()
ax.legend(loc='lower right')
plt.tight_layout()
```