This notebook is based on the Deep Learning A-Z™: Hands-On Artificial Neural Networks course on Udemy. See the course.
#### NOTEBOOK DESCRIPTION
from datetime import datetime

NOTEBOOK_TITLE = 'taruma_udemy_autoencoders'
NOTEBOOK_VERSION = '1.0.0'
NOTEBOOK_DATE = 1  # Set to 1 to append a timestamp to the project name

NOTEBOOK_NAME = "{}_{}".format(
    NOTEBOOK_TITLE,
    NOTEBOOK_VERSION.replace('.', '_')
)
PROJECT_NAME = "{}_{}{}".format(
    NOTEBOOK_TITLE,
    NOTEBOOK_VERSION.replace('.', '_'),
    "_" + datetime.utcnow().strftime("%Y%m%d_%H%M") if NOTEBOOK_DATE else ""
)
print(f"Notebook name: {NOTEBOOK_NAME}")
print(f"Project name: {PROJECT_NAME}")
Notebook name: taruma_udemy_autoencoders_1_0_0
Project name: taruma_udemy_autoencoders_1_0_0_20190801_0925
#### System Version
import sys, torch
print("versi python: {}".format(sys.version))
print("versi pytorch: {}".format(torch.__version__))
python version: 3.6.8 (default, Jan 14 2019, 11:02:34) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]]
pytorch version: 1.1.0
#### Load Notebook Extensions
%load_ext google.colab.data_table
#### Download dataset
# ref: https://grouplens.org/datasets/movielens/
!wget -O autoencoders.zip "https://sds-platform-private.s3-us-east-2.amazonaws.com/uploads/P16-AutoEncoders.zip"
!unzip autoencoders.zip
--2019-08-01 09:25:40-- https://sds-platform-private.s3-us-east-2.amazonaws.com/uploads/P16-AutoEncoders.zip Resolving sds-platform-private.s3-us-east-2.amazonaws.com (sds-platform-private.s3-us-east-2.amazonaws.com)... 52.219.80.168 Connecting to sds-platform-private.s3-us-east-2.amazonaws.com (sds-platform-private.s3-us-east-2.amazonaws.com)|52.219.80.168|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 17069342 (16M) [application/zip] Saving to: ‘autoencoders.zip’ autoencoders.zip 100%[===================>] 16.28M 34.2MB/s in 0.5s 2019-08-01 09:25:40 (34.2 MB/s) - ‘autoencoders.zip’ saved [17069342/17069342] Archive: autoencoders.zip creating: AutoEncoders/ inflating: AutoEncoders/ae.py creating: __MACOSX/ creating: __MACOSX/AutoEncoders/ inflating: __MACOSX/AutoEncoders/._ae.py inflating: AutoEncoders/ml-100k.zip inflating: AutoEncoders/ml-1m.zip
# The archive contains further .zip files, so these must be extracted as well.
# ref: https://askubuntu.com/q/399951
# ref: https://unix.stackexchange.com/q/12902
!find AutoEncoders -type f -name '*.zip' -exec unzip -d AutoEncoders {} \;
Archive: AutoEncoders/ml-100k.zip creating: AutoEncoders/ml-100k/ inflating: AutoEncoders/ml-100k/allbut.pl creating: AutoEncoders/__MACOSX/ creating: AutoEncoders/__MACOSX/ml-100k/ inflating: AutoEncoders/__MACOSX/ml-100k/._allbut.pl inflating: AutoEncoders/ml-100k/mku.sh inflating: AutoEncoders/__MACOSX/ml-100k/._mku.sh inflating: AutoEncoders/ml-100k/README inflating: AutoEncoders/__MACOSX/ml-100k/._README inflating: AutoEncoders/ml-100k/u.data inflating: AutoEncoders/__MACOSX/ml-100k/._u.data inflating: AutoEncoders/ml-100k/u.genre inflating: AutoEncoders/__MACOSX/ml-100k/._u.genre inflating: AutoEncoders/ml-100k/u.info inflating: AutoEncoders/__MACOSX/ml-100k/._u.info inflating: AutoEncoders/ml-100k/u.item inflating: AutoEncoders/__MACOSX/ml-100k/._u.item inflating: AutoEncoders/ml-100k/u.occupation inflating: AutoEncoders/__MACOSX/ml-100k/._u.occupation inflating: AutoEncoders/ml-100k/u.user inflating: AutoEncoders/__MACOSX/ml-100k/._u.user inflating: AutoEncoders/ml-100k/u1.base inflating: AutoEncoders/__MACOSX/ml-100k/._u1.base inflating: AutoEncoders/ml-100k/u1.test inflating: AutoEncoders/__MACOSX/ml-100k/._u1.test inflating: AutoEncoders/ml-100k/u2.base inflating: AutoEncoders/__MACOSX/ml-100k/._u2.base inflating: AutoEncoders/ml-100k/u2.test inflating: AutoEncoders/__MACOSX/ml-100k/._u2.test inflating: AutoEncoders/ml-100k/u3.base inflating: AutoEncoders/__MACOSX/ml-100k/._u3.base inflating: AutoEncoders/ml-100k/u3.test inflating: AutoEncoders/__MACOSX/ml-100k/._u3.test inflating: AutoEncoders/ml-100k/u4.base inflating: AutoEncoders/__MACOSX/ml-100k/._u4.base inflating: AutoEncoders/ml-100k/u4.test inflating: AutoEncoders/__MACOSX/ml-100k/._u4.test inflating: AutoEncoders/ml-100k/u5.base inflating: AutoEncoders/__MACOSX/ml-100k/._u5.base inflating: AutoEncoders/ml-100k/u5.test inflating: AutoEncoders/__MACOSX/ml-100k/._u5.test inflating: AutoEncoders/ml-100k/ua.base inflating: AutoEncoders/__MACOSX/ml-100k/._ua.base inflating: AutoEncoders/ml-100k/ua.test inflating: AutoEncoders/__MACOSX/ml-100k/._ua.test inflating: AutoEncoders/ml-100k/ub.base inflating: AutoEncoders/__MACOSX/ml-100k/._ub.base inflating: AutoEncoders/ml-100k/ub.test inflating: AutoEncoders/__MACOSX/ml-100k/._ub.test inflating: AutoEncoders/__MACOSX/._ml-100k Archive: AutoEncoders/ml-1m.zip creating: AutoEncoders/ml-1m/ inflating: AutoEncoders/ml-1m/.DS_Store creating: AutoEncoders/__MACOSX/ml-1m/ inflating: AutoEncoders/__MACOSX/ml-1m/._.DS_Store inflating: AutoEncoders/ml-1m/.Rhistory inflating: AutoEncoders/ml-1m/movies.dat inflating: AutoEncoders/__MACOSX/ml-1m/._movies.dat inflating: AutoEncoders/ml-1m/ratings.dat inflating: AutoEncoders/__MACOSX/ml-1m/._ratings.dat inflating: AutoEncoders/ml-1m/README inflating: AutoEncoders/__MACOSX/ml-1m/._README inflating: AutoEncoders/ml-1m/test_set.csv inflating: AutoEncoders/__MACOSX/ml-1m/._test_set.csv inflating: AutoEncoders/ml-1m/training_set.csv inflating: AutoEncoders/__MACOSX/ml-1m/._training_set.csv inflating: AutoEncoders/ml-1m/users.dat inflating: AutoEncoders/__MACOSX/ml-1m/._users.dat inflating: AutoEncoders/__MACOSX/._ml-1m
#### Set the dataset path
DATASET_DIRECTORY = 'AutoEncoders/'

def showdata(dataframe):
    print('Dataframe Size: {}'.format(dataframe.shape))
    return dataframe
# Importing the libraries
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
import torch.utils.data
from torch.autograd import Variable  # legacy wrapper, a no-op since PyTorch 0.4; kept to match the course code
# movies.dat columns: MovieID::Title::Genres
movies = pd.read_csv(DATASET_DIRECTORY + 'ml-1m/movies.dat', sep='::', header=None, engine='python', encoding='latin-1')
showdata(movies).head(10)
Dataframe Size: (3883, 3)
|   | 0 | 1 | 2 |
|---|---|---|---|
| 0 | 1 | Toy Story (1995) | Animation\|Children's\|Comedy |
| 1 | 2 | Jumanji (1995) | Adventure\|Children's\|Fantasy |
| 2 | 3 | Grumpier Old Men (1995) | Comedy\|Romance |
| 3 | 4 | Waiting to Exhale (1995) | Comedy\|Drama |
| 4 | 5 | Father of the Bride Part II (1995) | Comedy |
| 5 | 6 | Heat (1995) | Action\|Crime\|Thriller |
| 6 | 7 | Sabrina (1995) | Comedy\|Romance |
| 7 | 8 | Tom and Huck (1995) | Adventure\|Children's |
| 8 | 9 | Sudden Death (1995) | Action |
| 9 | 10 | GoldenEye (1995) | Action\|Adventure\|Thriller |
# users.dat columns: UserID::Gender::Age::Occupation::Zip-code
users = pd.read_csv(DATASET_DIRECTORY + 'ml-1m/users.dat', sep='::', header=None, engine='python', encoding='latin-1')
showdata(users).head(10)
Dataframe Size: (6040, 5)
|   | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| 0 | 1 | F | 1 | 10 | 48067 |
| 1 | 2 | M | 56 | 16 | 70072 |
| 2 | 3 | M | 25 | 15 | 55117 |
| 3 | 4 | M | 45 | 7 | 02460 |
| 4 | 5 | M | 25 | 20 | 55455 |
| 5 | 6 | F | 50 | 9 | 55117 |
| 6 | 7 | M | 35 | 1 | 06810 |
| 7 | 8 | M | 25 | 12 | 11413 |
| 8 | 9 | M | 25 | 17 | 61614 |
| 9 | 10 | F | 35 | 1 | 95370 |
# ratings.dat columns: UserID::MovieID::Rating::Timestamp
ratings = pd.read_csv(DATASET_DIRECTORY + 'ml-1m/ratings.dat', sep='::', header=None, engine='python', encoding='latin-1')
showdata(ratings).head(10)
Dataframe Size: (1000209, 4)
|   | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| 0 | 1 | 1193 | 5 | 978300760 |
| 1 | 1 | 661 | 3 | 978302109 |
| 2 | 1 | 914 | 3 | 978301968 |
| 3 | 1 | 3408 | 4 | 978300275 |
| 4 | 1 | 2355 | 5 | 978824291 |
| 5 | 1 | 1197 | 3 | 978302268 |
| 6 | 1 | 1287 | 5 | 978302039 |
| 7 | 1 | 2804 | 5 | 978300719 |
| 8 | 1 | 594 | 4 | 978302268 |
| 9 | 1 | 919 | 4 | 978301368 |
# Preparing the training set and the test set
# u1.base and u1.test are tab-separated with no header row, so header=None is
# needed; otherwise pandas silently consumes the first rating as a header.
training_set = pd.read_csv(DATASET_DIRECTORY + 'ml-100k/u1.base', delimiter='\t', header=None)
training_set = np.array(training_set, dtype='int')
test_set = pd.read_csv(DATASET_DIRECTORY + 'ml-100k/u1.test', delimiter='\t', header=None)
test_set = np.array(test_set, dtype='int')
# Getting the number of users and movies
nb_users = int(max(max(training_set[:, 0]), max(test_set[:, 0])))
nb_movies = int(max(max(training_set[:, 1]), max(test_set[:, 1])))
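A quick check of the resulting dimensions; for the MovieLens 100k data these come out to 943 users and 1682 movies:
print(nb_users, nb_movies)  # 943 1682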
# Converting the data into an array with users in lines and movies in columns
def convert(data):
    new_data = []
    for id_users in range(1, nb_users + 1):
        id_movies = data[:, 1][data[:, 0] == id_users]   # movies rated by this user
        id_ratings = data[:, 2][data[:, 0] == id_users]  # the corresponding ratings
        ratings = np.zeros(nb_movies)                    # 0 marks "not rated"
        ratings[id_movies - 1] = id_ratings              # movie ids are 1-based
        new_data.append(list(ratings))
    return new_data
training_set = convert(training_set)
test_set = convert(test_set)
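To make the conversion concrete, here is a minimal sketch of the same logic on a made-up three-user, four-movie dataset (the `toy` array and sizes below are illustrative only, not part of the course data):
import numpy as np

# Hypothetical ratings: columns are (user id, movie id, rating, timestamp)
toy = np.array([
    [1, 1, 5, 0],
    [1, 3, 3, 0],
    [2, 2, 4, 0],
    [3, 4, 1, 0],
])

n_users, n_movies = 3, 4
matrix = np.zeros((n_users, n_movies))
for u in range(1, n_users + 1):
    movie_ids = toy[:, 1][toy[:, 0] == u]
    rating_vals = toy[:, 2][toy[:, 0] == u]
    matrix[u - 1, movie_ids - 1] = rating_vals

print(matrix)
# [[5. 0. 3. 0.]
#  [0. 4. 0. 0.]
#  [0. 0. 0. 1.]]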
# Converting the data into Torch tensors
training_set = torch.FloatTensor(training_set)
test_set = torch.FloatTensor(test_set)
training_set
tensor([[0., 3., 4., ..., 0., 0., 0.], [4., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [5., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 5., 0., ..., 0., 0., 0.]])
# Creating the architecture of the Neural Network
class SAE(nn.Module):
    def __init__(self):
        super(SAE, self).__init__()
        # Encoder: nb_movies -> 20 -> 10; decoder: 10 -> 20 -> nb_movies
        self.fc1 = nn.Linear(nb_movies, 20)
        self.fc2 = nn.Linear(20, 10)
        self.fc3 = nn.Linear(10, 20)
        self.fc4 = nn.Linear(20, nb_movies)
        self.activation = nn.Sigmoid()

    def forward(self, x):
        x = self.activation(self.fc1(x))
        x = self.activation(self.fc2(x))
        x = self.activation(self.fc3(x))
        x = self.fc4(x)  # no activation on the output layer
        return x
sae = SAE()
criterion = nn.MSELoss()
optimizer = optim.RMSprop(sae.parameters(), lr = 0.01, weight_decay = 0.5)
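Before training, a quick shape check confirms the autoencoder maps a 1 × nb_movies rating vector through the 20-10-20 bottleneck and back; a minimal sketch (the all-zero dummy user below is arbitrary):
with torch.no_grad():
    dummy = torch.zeros(1, nb_movies)  # one user, no ratings
    print(sae(dummy).shape)            # torch.Size([1, nb_movies])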
# Training the SAE
nb_epoch = 200
for epoch in range(1, nb_epoch + 1):
    train_loss = 0
    s = 0.
    for id_user in range(nb_users):
        input = Variable(training_set[id_user]).unsqueeze(0)  # batch of one user
        target = input.clone()
        if torch.sum(target.data > 0) > 0:  # skip users with no ratings
            output = sae(input)
            target.requires_grad = False
            output[target == 0] = 0  # exclude unrated movies from the loss
            loss = criterion(output, target)
            # Rescale the MSE so it averages over rated movies only
            mean_corrector = nb_movies / float(torch.sum(target.data > 0) + 1e-10)
            optimizer.zero_grad()  # reset accumulated gradients before backprop
            loss.backward()
            train_loss += np.sqrt(loss.item() * mean_corrector)
            s += 1.
            optimizer.step()
    print('epoch: ' + str(epoch) + ' loss: ' + str(train_loss / s))
epoch: 1 loss: 1.7663983791313438 epoch: 2 loss: 1.0965944818481448 epoch: 3 loss: 1.0533398732955221 epoch: 4 loss: 1.0383018413922185 epoch: 5 loss: 1.0308177439541621 epoch: 6 loss: 1.026551124053685 epoch: 7 loss: 1.023840092408676 epoch: 8 loss: 1.021978586980373 epoch: 9 loss: 1.0206570638587025 epoch: 10 loss: 1.0196462708959995 epoch: 11 loss: 1.0187753163243505 epoch: 12 loss: 1.018512555740381 epoch: 13 loss: 1.0178744683018195 epoch: 14 loss: 1.0174755647701952 epoch: 15 loss: 1.0170719470478082 epoch: 16 loss: 1.017201642832892 epoch: 17 loss: 1.0163239136444078 epoch: 18 loss: 1.0165747767066637 epoch: 19 loss: 1.0162508415906395 epoch: 20 loss: 1.0162299744574526 epoch: 21 loss: 1.0160825599663328 epoch: 22 loss: 1.0159708620648906 epoch: 23 loss: 1.0159037432204494 epoch: 24 loss: 1.0156694047908619 epoch: 25 loss: 1.0156815102111703 epoch: 26 loss: 1.0154590358153581 epoch: 27 loss: 1.0152956203593735 epoch: 28 loss: 1.0151429122142581 epoch: 29 loss: 1.0127277229574954 epoch: 30 loss: 1.0115507879790988 epoch: 31 loss: 1.0106808694785414 epoch: 32 loss: 1.0074244496142102 epoch: 33 loss: 1.0073100915343118 epoch: 34 loss: 1.0034969234369306 epoch: 35 loss: 1.0027353737074234 epoch: 36 loss: 1.0000683778711716 epoch: 37 loss: 0.9968187110598279 epoch: 38 loss: 0.9945375976402397 epoch: 39 loss: 0.9952177935337382 epoch: 40 loss: 0.9938334742471779 epoch: 41 loss: 0.9934695043949954 epoch: 42 loss: 0.9902121855511794 epoch: 43 loss: 0.9901160391783914 epoch: 44 loss: 0.9857301381332167 epoch: 45 loss: 0.9848217773360862 epoch: 46 loss: 0.9801835996478252 epoch: 47 loss: 0.9810873597000531 epoch: 48 loss: 0.978300727353134 epoch: 49 loss: 0.9768159755686795 epoch: 50 loss: 0.970972205043055 epoch: 51 loss: 0.9714721652842023 epoch: 52 loss: 0.968500137167768 epoch: 53 loss: 0.9677024816685345 epoch: 54 loss: 0.9659461926308117 epoch: 55 loss: 0.9674038597441262 epoch: 56 loss: 0.9652042557789273 epoch: 57 loss: 0.9635202505788273 epoch: 58 loss: 0.9650874836412309 epoch: 59 loss: 0.9642095855871714 epoch: 60 loss: 0.9586750134842592 epoch: 61 loss: 0.9572684056349163 epoch: 62 loss: 0.9564866799474354 epoch: 63 loss: 0.9524743478337185 epoch: 64 loss: 0.9502278884724376 epoch: 65 loss: 0.9533428352764142 epoch: 66 loss: 0.9520933496393511 epoch: 67 loss: 0.9546508691490383 epoch: 68 loss: 0.9489561905583827 epoch: 69 loss: 0.9490490017216804 epoch: 70 loss: 0.9483167270874054 epoch: 71 loss: 0.948329255203358 epoch: 72 loss: 0.9450881600029056 epoch: 73 loss: 0.9463115597986019 epoch: 74 loss: 0.9437816299409459 epoch: 75 loss: 0.9455461502145251 epoch: 76 loss: 0.9420526631180003 epoch: 77 loss: 0.9435457856469216 epoch: 78 loss: 0.9411563134969737 epoch: 79 loss: 0.9436575836579513 epoch: 80 loss: 0.9422297843906718 epoch: 81 loss: 0.9410528463853715 epoch: 82 loss: 0.9402148460233527 epoch: 83 loss: 0.9409234754132823 epoch: 84 loss: 0.9405657855477602 epoch: 85 loss: 0.9382027201893749 epoch: 86 loss: 0.9393233675827815 epoch: 87 loss: 0.9374333910506758 epoch: 88 loss: 0.9366116336780694 epoch: 89 loss: 0.9377259823272002 epoch: 90 loss: 0.9365444235602165 epoch: 91 loss: 0.9380175938760765 epoch: 92 loss: 0.9364794219167737 epoch: 93 loss: 0.9368766124940768 epoch: 94 loss: 0.9348002232788932 epoch: 95 loss: 0.9353004705734516 epoch: 96 loss: 0.9343677843163494 epoch: 97 loss: 0.9353256751794342 epoch: 98 loss: 0.933877368043547 epoch: 99 loss: 0.9342818034628956 epoch: 100 loss: 0.9333942400397647 epoch: 101 loss: 0.9341794560759067 epoch: 102 loss: 0.932444274542758 
epoch: 103 loss: 0.9329446660349489 epoch: 104 loss: 0.9331678830270377 epoch: 105 loss: 0.9331724844463245 epoch: 106 loss: 0.9331020305951515 epoch: 107 loss: 0.9356272341681415 epoch: 108 loss: 0.9333336215395651 epoch: 109 loss: 0.9327508003016757 epoch: 110 loss: 0.9308627731347268 epoch: 111 loss: 0.9319176007690649 epoch: 112 loss: 0.9306397121343122 epoch: 113 loss: 0.9305777403332568 epoch: 114 loss: 0.9302414124205797 epoch: 115 loss: 0.9305424765978645 epoch: 116 loss: 0.9294236245683961 epoch: 117 loss: 0.9295683690937063 epoch: 118 loss: 0.9290601632685692 epoch: 119 loss: 0.9298997313915192 epoch: 120 loss: 0.9287010974464924 epoch: 121 loss: 0.9288074722866032 epoch: 122 loss: 0.9279760744321034 epoch: 123 loss: 0.9279426068053931 epoch: 124 loss: 0.9275374298911129 epoch: 125 loss: 0.9279328461908956 epoch: 126 loss: 0.9277038322243288 epoch: 127 loss: 0.9280261047596016 epoch: 128 loss: 0.9266577717902903 epoch: 129 loss: 0.9274436983768939 epoch: 130 loss: 0.9262172192927275 epoch: 131 loss: 0.9268704635553348 epoch: 132 loss: 0.9264313648325654 epoch: 133 loss: 0.9270331564311223 epoch: 134 loss: 0.9259879544058086 epoch: 135 loss: 0.9265063473172516 epoch: 136 loss: 0.9252285856398398 epoch: 137 loss: 0.9257206007928372 epoch: 138 loss: 0.9245857017528629 epoch: 139 loss: 0.9249536996678024 epoch: 140 loss: 0.9239828664132971 epoch: 141 loss: 0.9250168599949399 epoch: 142 loss: 0.9239714020219754 epoch: 143 loss: 0.9248878068576096 epoch: 144 loss: 0.9231863363249722 epoch: 145 loss: 0.9244485999674413 epoch: 146 loss: 0.9231108985583485 epoch: 147 loss: 0.9241529591466949 epoch: 148 loss: 0.9228550944294732 epoch: 149 loss: 0.9237827557157635 epoch: 150 loss: 0.922260170746647 epoch: 151 loss: 0.9231400282022982 epoch: 152 loss: 0.9221839934603951 epoch: 153 loss: 0.9227788564070573 epoch: 154 loss: 0.9213350301333955 epoch: 155 loss: 0.922453842482827 epoch: 156 loss: 0.9210483122507049 epoch: 157 loss: 0.9219510963958538 epoch: 158 loss: 0.9204969614260258 epoch: 159 loss: 0.9205394209501664 epoch: 160 loss: 0.9200661759022467 epoch: 161 loss: 0.9207735137229326 epoch: 162 loss: 0.9196641402017643 epoch: 163 loss: 0.9204513049820104 epoch: 164 loss: 0.9193051927516236 epoch: 165 loss: 0.9210140873158912 epoch: 166 loss: 0.9193127515207875 epoch: 167 loss: 0.9200597882686071 epoch: 168 loss: 0.9185944485414366 epoch: 169 loss: 0.9201572432142742 epoch: 170 loss: 0.9183169550351225 epoch: 171 loss: 0.9193881788559667 epoch: 172 loss: 0.9180057668314479 epoch: 173 loss: 0.9191220927901347 epoch: 174 loss: 0.9177848844173945 epoch: 175 loss: 0.9190516442024842 epoch: 176 loss: 0.9181445924423348 epoch: 177 loss: 0.919047934578481 epoch: 178 loss: 0.9175119757656524 epoch: 179 loss: 0.9186781150882567 epoch: 180 loss: 0.9175681590539049 epoch: 181 loss: 0.9183763375326187 epoch: 182 loss: 0.9169434621528899 epoch: 183 loss: 0.9177548550969366 epoch: 184 loss: 0.9170545570415128 epoch: 185 loss: 0.9179762411576573 epoch: 186 loss: 0.9166707151557505 epoch: 187 loss: 0.9174266883043443 epoch: 188 loss: 0.9162146914993445 epoch: 189 loss: 0.917265776286358 epoch: 190 loss: 0.9159440051014004 epoch: 191 loss: 0.9167926651895048 epoch: 192 loss: 0.9157365677088328 epoch: 193 loss: 0.9169038115550036 epoch: 194 loss: 0.9156644022282158 epoch: 195 loss: 0.916360655268448 epoch: 196 loss: 0.9149874787609436 epoch: 197 loss: 0.9160702331415719 epoch: 198 loss: 0.9148375459877753 epoch: 199 loss: 0.915890166240895 epoch: 200 loss: 0.9151742022378695
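A note on `mean_corrector`: `criterion` averages the squared error over all `nb_movies` outputs, but unrated movies were zeroed out and contribute nothing, so multiplying by `nb_movies / (number of rated movies)` restores the mean over rated movies only; the square root then yields an RMSE-style score on the 1-5 rating scale (a loss near 0.92 means predictions are off by less than one star on average). A minimal sketch of the arithmetic with made-up numbers:
import numpy as np

nb_movies_demo = 4
sq_errors = np.array([1.0, 0.0, 0.25, 0.0])  # zeros where the movie is unrated
n_rated = 2

mse_all = sq_errors.mean()                      # averaged over all 4 movies
mse_rated = mse_all * nb_movies_demo / n_rated  # averaged over the 2 rated movies
print(np.sqrt(mse_rated))                       # RMSE over rated movies, about 0.79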
# Testing the SAE
test_loss = 0
s = 0.
for id_user in range(nb_users):
    input = Variable(training_set[id_user]).unsqueeze(0)  # predict from the training ratings
    target = Variable(test_set[id_user]).unsqueeze(0)     # evaluate against the test ratings
    if torch.sum(target.data > 0) > 0:
        output = sae(input)
        target.requires_grad = False
        output[target == 0] = 0
        loss = criterion(output, target)
        mean_corrector = nb_movies / float(torch.sum(target.data > 0) + 1e-10)
        test_loss += np.sqrt(loss.item() * mean_corrector)
        s += 1.
print('test loss: ' + str(test_loss / s))
test loss: 0.9503542203018388
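With the model trained and tested, the same forward pass can rank unseen movies for a single user; a minimal sketch (user index 0 and the top-10 cut-off are arbitrary choices, not part of the course code):
with torch.no_grad():
    user_vector = training_set[0].unsqueeze(0)  # user 1's known ratings
    predicted = sae(user_vector).squeeze(0)

predicted[training_set[0] > 0] = float('-inf')  # rank only movies not yet rated
_, top_idx = torch.topk(predicted, 10)
print(top_idx + 1)  # convert back to 1-based MovieLens movie ids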