In Challenge 4, we built a simple neural network using Keras. In this challenge, we will manually experiment with hyperparameters.
This notebook contains the same workflow. Section 2 shows the simple three-layer neural network from Challenge 4. Section 3 is for you to modify that same network by adding layers and neurons and changing the number of epochs, then running the plotting cell to see the results.
NOTE: Be sure to re-run the beginning cells before working on Section 3.
The advertising dataset captures the sales revenue generated with respect to advertising costs across several platforms: digital, TV, radio, and newspaper.
# Import the necessary libraries
# For data loading, exploratory data analysis, and graphing
import pandas as pd               # Pandas for data processing
import numpy as np                # NumPy for mathematical functions
import matplotlib.pyplot as plt   # Matplotlib for visualization tasks
import seaborn as sns             # Seaborn, a visualization library built on Matplotlib
%matplotlib inline
import sklearn                                         # scikit-learn for ML tasks
from sklearn.model_selection import train_test_split   # Split the dataset
from sklearn.metrics import mean_squared_error         # Calculate mean squared error
# Build the Network
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Next, you read the dataset into a Pandas dataframe.
url = 'https://github.com/LinkedInLearning/artificial-intelligence-foundations-neural-networks-4381282/blob/main/Advertising_2023.csv?raw=true'
advertising_df= pd.read_csv(url,index_col=0)
# Pandas info() function is used to get a concise summary of the dataframe.
advertising_df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1199 entries, 1 to 1197
Data columns (total 5 columns):
 #   Column     Non-Null Count  Dtype
---  ------     --------------  -----
 0   digital    1199 non-null   float64
 1   TV         1199 non-null   float64
 2   radio      1199 non-null   float64
 3   newspaper  1199 non-null   float64
 4   sales      1199 non-null   float64
dtypes: float64(5)
memory usage: 56.2 KB
### Get summary statistics of the data
advertising_df.describe()
|       | digital | TV | radio | newspaper | sales |
|---|---|---|---|---|---|
| count | 1199.000000 | 1199.00000 | 1199.000000 | 1199.000000 | 1199.000000 |
| mean | 135.472394 | 146.61985 | 23.240617 | 30.529942 | 14.005505 |
| std | 135.730821 | 85.61047 | 14.820827 | 21.712507 | 5.202804 |
| min | 0.300000 | 0.70000 | 0.000000 | 0.300000 | 1.600000 |
| 25% | 24.250000 | 73.40000 | 9.950000 | 12.800000 | 10.300000 |
| 50% | 64.650000 | 149.70000 | 22.500000 | 25.600000 | 12.900000 |
| 75% | 256.950000 | 218.50000 | 36.500000 | 45.100000 | 17.400000 |
| max | 444.600000 | 296.40000 | 49.600000 | 114.000000 | 27.000000 |
#shape of dataframe - 1199 rows, five columns
advertising_df.shape
(1199, 5)
Let's check for any null values.
# The isnull() method is used to check and manage NULL values in a data frame.
advertising_df.isnull().sum()
digital      0
TV           0
radio        0
newspaper    0
sales        0
dtype: int64
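The output confirms there are no missing values, so no cleanup is needed. For reference, here is a minimal sketch of how missing values could be handled if they did appear (clean_df is an assumed name and is not used elsewhere in this notebook):
# Hypothetical cleanup - only needed if isnull().sum() had reported missing values
# Option 1: drop any row that contains a missing value
clean_df = advertising_df.dropna()
# Option 2: fill missing numeric values with each column's median
clean_df = advertising_df.fillna(advertising_df.median(numeric_only=True))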
Let's create some simple plots to check out the data!
## Plot the heatmap so that the values are shown.
plt.figure(figsize=(10,5))
sns.heatmap(advertising_df.corr(),annot=True,vmin=0,vmax=1,cmap='ocean')
#create a correlation matrix
corr = advertising_df.corr()
plt.figure(figsize=(10, 5))
sns.heatmap(corr[(corr >= 0.5) | (corr <= -0.7)],
            cmap='viridis', vmax=1.0, vmin=-1.0, linewidths=0.1,
            annot=True, annot_kws={"size": 8}, square=True)
plt.tight_layout()
plt.show()
advertising_df.corr()
|           | digital | TV | radio | newspaper | sales |
|---|---|---|---|---|---|
| digital | 1.000000 | 0.474256 | 0.041316 | 0.048023 | 0.380101 |
| TV | 0.474256 | 1.000000 | 0.055697 | 0.055579 | 0.781824 |
| radio | 0.041316 | 0.055697 | 1.000000 | 0.353096 | 0.576528 |
| newspaper | 0.048023 | 0.055579 | 0.353096 | 1.000000 | 0.227039 |
| sales | 0.380101 | 0.781824 | 0.576528 | 0.227039 | 1.000000 |
### Visualize Correlation
# Generate a mask for the upper triangle
mask = np.zeros_like(advertising_df.corr(), dtype=bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(advertising_df.corr(), mask=mask, cmap=cmap, vmax=.9, square=True, linewidths=.5, ax=ax)
Since Sales is our target variable, we should identify which variable correlates the most with Sales.
As we can see, TV has the highest correlation with Sales. Let's visualize the relationship of variables using scatterplots.
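To confirm this numerically, here is a quick sketch (using only the libraries already imported) that ranks each feature's correlation with sales:
# Rank each feature's correlation with the sales target, highest first
corr_with_sales = advertising_df.corr()['sales'].drop('sales').sort_values(ascending=False)
print(corr_with_sales)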
Rather than plot them separately, an efficient way to view the linear relationships between the variables is to use a for loop that plots all of the features against sales at once.
Note that there are no clear linear relationships among the predictors themselves.
At this point, we expect the variable TV to give the best prediction of sales because of its high correlation with, and roughly linear relationship to, the target.
'''=== Show the linear relationship between each feature and sales. This illustrates how scattered
the points are and which features have more impact on predicting sales. ==='''
# Visualize each feature against sales
from scipy import stats
# Create the figure
plt.figure(figsize=(18, 18))
for i, col in enumerate(advertising_df.columns[0:4]):  # iterate over all feature columns, excluding sales (the last one)
    plt.subplot(5, 3, i+1)                             # three plots per row
    x = advertising_df[col]                            # x-axis
    y = advertising_df['sales']                        # y-axis
    plt.plot(x, y, 'o')
    # Create regression line
    plt.plot(np.unique(x), np.poly1d(np.polyfit(x, y, 1))(np.unique(x)), color='red')
    plt.xlabel(col)      # x-label
    plt.ylabel('sales')  # y-label
Concluding results after observing the graphs:
- The relationship between TV and sales is strong and increases in a linear fashion.
- The relationship between radio and sales is less strong.
- The relationship between newspaper and sales is weak.
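As an optional alternative to the loop above (a sketch, not part of the original workflow), seaborn's pairplot produces a similar at-a-glance view with fitted regression lines:
# Pairwise scatterplots of each feature against sales, with regression lines
sns.pairplot(advertising_df, x_vars=['digital', 'TV', 'radio', 'newspaper'],
             y_vars='sales', kind='reg', height=4)
plt.show()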
Regression is a supervised machine learning process. It is similar to classification, but rather than predicting a label, you try to predict a continuous value. Linear regression defines the relationship between a target variable (y) and a set of predictive features (X). Simply stated, if you need to predict a number, use regression.
Let's now begin to train the regression model. You will first need to split the data into an X array that contains the features to train on, and a y array with the target variable, in this case the sales column.
Next, let's define the features and the label. Briefly, features are inputs and the label is the output; this applies to both classification and regression problems.
X = advertising_df[['digital', 'TV', 'radio', 'newspaper']]
y = advertising_df['sales']
'''=== Normalize the features. Since the features have different ranges, it is best practice to
normalize/standardize them before using them in the model. ==='''
# Feature normalization
normalized_feature = keras.utils.normalize(X.values)
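Note that keras.utils.normalize rescales each sample (row) to unit norm. If you prefer per-feature standardization instead, a common alternative (shown here only as an optional sketch) is scikit-learn's StandardScaler:
from sklearn.preprocessing import StandardScaler
# Standardize each feature to zero mean and unit variance
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X.values)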
Now let's split the data into a training set and a test set. Note: best practice is to split into three sets - training, validation, and test.
By default, train_test_split uses a 75/25 split; here we pass test_size=0.4, which gives a 60/40 split.
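As a sketch of the three-way split mentioned above (the 60/20/20 proportions and the variable names are assumptions; it reuses train_test_split, imported earlier):
# First hold out 20% as a test set, then split the remainder 75/25 into training and validation sets
X_tmp, X_test3, y_tmp, y_test3 = train_test_split(X, y, test_size=0.2, random_state=101)
X_train3, X_val3, y_train3, y_val3 = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=101)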
# Import train_test_split function from sklearn.model_selection
from sklearn.model_selection import train_test_split
# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=101)
print(X_train.shape,X_test.shape, y_train.shape, y_test.shape )
(719, 4) (480, 4) (719,) (480,)
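Before building the neural network, it can be useful to fit a plain linear regression as a baseline to compare against (a sketch; LinearRegression is an extra import not used elsewhere in this notebook):
from sklearn.linear_model import LinearRegression
# Ordinary least squares baseline on the same train/test split
lin_reg = LinearRegression()
lin_reg.fit(X_train, y_train)
baseline_mse = mean_squared_error(y_test, lin_reg.predict(X_test))
print(f'Linear regression baseline MSE: {baseline_mse:.2f}')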
## Build Model (Building a three layer network - with one hidden layer)
model = Sequential()
model.add(Dense(4, input_dim=4, activation='relu'))  # input_dim=4 matches the four features; later layers infer their input size
model.add(Dense(3,activation='relu'))
model.add(Dense(1))
# Compile Model
model.compile(optimizer='adam', loss='mse',metrics=['mse'])
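Optionally, you can inspect the architecture and parameter counts before training:
# Print layer output shapes and trainable parameter counts
model.summary()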
# Fit the Model
history = model.fit(X_train, y_train, validation_data = (X_test, y_test),
epochs = 32)
Epoch 1/32
23/23 [==============================] - 2s 14ms/step - loss: 6650.7524 - mse: 6650.7524 - val_loss: 5988.0024 - val_mse: 5988.0024
Epoch 2/32
23/23 [==============================] - 0s 3ms/step - loss: 5201.2021 - mse: 5201.2021 - val_loss: 4691.7046 - val_mse: 4691.7046
...
Epoch 31/32
23/23 [==============================] - 0s 2ms/step - loss: 60.0667 - mse: 60.0667 - val_loss: 34.5339 - val_mse: 34.5339
Epoch 32/32
23/23 [==============================] - 0s 2ms/step - loss: 57.6828 - mse: 57.6828 - val_loss: 32.6905 - val_mse: 32.6905
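After training, here is a quick sketch of how the model could be evaluated on the test set and used for prediction (the variable names are assumptions):
# Evaluate mean squared error on the held-out test data
test_loss, test_mse = model.evaluate(X_test, y_test, verbose=0)
print(f'Test MSE: {test_mse:.2f}')
# Compare the first few predictions with the actual sales values
y_pred = model.predict(X_test[:5], verbose=0)
print(np.column_stack([y_pred.ravel(), y_test[:5].to_numpy()]))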
You can add more 'flavor' to the graph by making it bigger and adding labels and names, as shown below.
## Plot a graph of model loss  # show the model loss on training and validation data
plt.figure(figsize=(15,8))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss (MSE) on Training and Validation Data')
plt.ylabel('Loss - Mean Squared Error')
plt.xlabel('Epoch')
plt.legend(['Train Loss', 'Val Loss'], loc='upper right')
plt.show()
Play with:
- the number of layers
- the number of neurons in each layer
- the number of epochs
NOTE: After each change, re-run the cell that plots the graph and review the loss curves.
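To make these experiments less repetitive, one option is a small helper that builds, trains, and plots one model per configuration (a sketch; run_experiment and its parameters are assumed names, and it reuses X_train, X_test, y_train, and y_test from above):
def run_experiment(hidden_layers=(4, 3), epochs=32, batch_size=32):
    """Build, train, and plot one network for a given hyperparameter setting."""
    model = Sequential()
    model.add(Dense(hidden_layers[0], input_dim=4, activation='relu'))
    for units in hidden_layers[1:]:
        model.add(Dense(units, activation='relu'))
    model.add(Dense(1))
    model.compile(optimizer='adam', loss='mse', metrics=['mse'])
    history = model.fit(X_train, y_train, validation_data=(X_test, y_test),
                        epochs=epochs, batch_size=batch_size, verbose=0)

    plt.figure(figsize=(15, 8))
    plt.plot(history.history['loss'], label='Train Loss')
    plt.plot(history.history['val_loss'], label='Val Loss')
    plt.title(f'Model Loss (MSE): layers={hidden_layers}, epochs={epochs}')
    plt.ylabel('Loss - Mean Squared Error')
    plt.xlabel('Epoch')
    plt.legend(loc='upper right')
    plt.show()
    return history

# Example: a wider, deeper network trained for more epochs
run_experiment(hidden_layers=(8, 8, 4), epochs=100)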
## Build Model
model = Sequential()
model.add(Dense(4,input_dim=4, activation='relu'))
model.add(Dense(4,activation='relu'))
model.add(Dense(3,activation='relu'))
model.add(Dense(1))
# Compile Model
model.compile(optimizer='adam', loss='mse',metrics=['mse'])
# Fit the Model
history = model.fit(X_train, y_train, validation_data = (X_test, y_test),
epochs = 100)
Epoch 1/100
23/23 [==============================] - 1s 7ms/step - loss: 222.9089 - mse: 222.9089 - val_loss: 232.9150 - val_mse: 232.9150
Epoch 2/100
23/23 [==============================] - 0s 2ms/step - loss: 217.7270 - mse: 217.7270 - val_loss: 230.9861 - val_mse: 230.9861
...
Epoch 99/100
23/23 [==============================] - 0s 3ms/step - loss: 162.0644 - mse: 162.0644 - val_loss: 173.5100 - val_mse: 173.5100
Epoch 100/100
23/23 [==============================] - 0s 4ms/step - loss: 161.5586 - mse: 161.5586 - val_loss: 172.9902 - val_mse: 172.9902
plt.figure(figsize=(15,8))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss (MSE) on Training and Validation Data')
plt.ylabel('Loss - Mean Squared Error')
plt.xlabel('Epoch')
plt.legend(['Train Loss', 'Val Loss'], loc='upper right')
plt.show()
## Build Model (Building a four layer network - with two hidden layers)
model = Sequential()
model.add(Dense(4,input_dim=4, activation='relu'))
model.add(Dense(4, activation='relu'))  # hidden layers after the first infer their input size from the previous layer
model.add(Dense(4,activation='relu'))
model.add(Dense(1))
# Compile Model
opt = keras.optimizers.Adam(learning_rate=.001)
model.compile(optimizer=opt, loss='mse', metrics=['mse'])
# Fit the Model
history = model.fit(X_train, y_train, validation_data = (X_test, y_test),
epochs = 32, batch_size=32, verbose=0)
# Train the model, iterating over the data in batches of 32 samples
# epochs     - number of passes over the full training set (32 here)
# batch_size - number of samples the model sees before each weight update
plt.figure(figsize=(15,8))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss (MSE) on Training and Validation Data')
plt.ylabel('Loss - Mean Squared Error')
plt.xlabel('Epoch')
plt.legend(['Train Loss', 'Val Loss'], loc='upper right')
plt.show()
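If you experiment with large epoch counts, an optional refinement (a sketch, not part of the original notebook) is Keras's EarlyStopping callback, which stops training once validation loss stops improving:
from tensorflow.keras.callbacks import EarlyStopping
# Stop training once val_loss has not improved for 10 consecutive epochs, keeping the best weights
early_stop = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
history = model.fit(X_train, y_train, validation_data=(X_test, y_test),
                    epochs=200, batch_size=32, verbose=0, callbacks=[early_stop])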