!nvidia-smi
Mon Jun 10 08:21:40 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |
| N/A   73C    P8    34W / 149W |      0MiB / 11441MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
from google.colab import files
uploaded = files.upload()
Saving sparse_eq.png to sparse_eq.png
uploaded = files.upload()
Saving Figure 15-5. A single graph to train a stacked autoencoder.jpg to Figure 15-5. A single graph to train a stacked autoencoder.jpg
Saving Figure 15-8. Unsupervised pretraining using autoencoders.jpg to Figure 15-8. Unsupervised pretraining using autoencoders.jpg
Saving Figure 15-9. Denoising autoencoders, with Gaussian noise (left) or dropout (right).jpg to Figure 15-9. Denoising autoencoders, with Gaussian noise (left) or dropout (right).jpg
Saving Figure 15-10. Sparsity loss.png to Figure 15-10. Sparsity loss.png
Saving Figure 15-12. Images of handwritten digits generated by the variational autoencoder.png to Figure 15-12. Images of handwritten digits generated by the variational autoencoder.png
Saving Figure15-3.png to Figure15-3.png
Saving Figure15-4.png to Figure15-4.png
Saving Figure15-5.png to Figure15-5.png
Saving Figure15-8.png to Figure15-8.png
Saving Figure15-9.png to Figure15-9.png
Saving gan.PNG to gan.PNG
Saving gan_eq.PNG to gan_eq.PNG
Saving generated_digits_plot.png to generated_digits_plot.png
Saving information_eq.png to information_eq.png
Saving kl.png to kl.png
Saving kl_1.png to kl_1.png
uploaded = files.upload()
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-3-ed2fd71b4a2f> in <module>()
----> 1 uploaded = files.upload()

NameError: name 'files' is not defined
!ls
15-11.png  '15-11 Variational autoencoder (left), and an instance going through it (right).png'
ae_1.png  ae_2.png  checkpoint  cross_kl.png  cross.png  distribution.PNG
entropy_eq.png  entropy_ex.png  extracted_features_plot.png
'Figure 15-1. The chess memory experiment (left) and a simple autoencoder (right).jpg'
'Figure 15-2. PCA performed by an undercomplete linear autoencoder.jpg'
Figure15-3.png  'Figure 15-3. Stacked autoencoder.jpg'
Figure15-4.png  'Figure 15-4. Training one autoencoder at a time.jpg'
Figure15-5.png  'Figure 15-5. A single graph to train a stacked autoencoder.jpg'
Figure15-8.png  'Figure 15-8. Unsupervised pretraining using autoencoders.jpg'
Figure15-9.png  'Figure 15-9. Denoising autoencoders, with Gaussian noise (left) or dropout (right).jpg'
'Figure 15-10. Sparsity loss.png'
'Figure 15-12. Images of handwritten digits generated by the variational autoencoder.png'
gan.PNG  gan_eq.PNG  gan_generated_image_epoch_1.png  gan_generated_image_epoch_20.png
gan_generated_image_epoch_40.png  gan_generated_image_epoch_60.png
gan_generated_image_epoch_80.png  gan_generated_image_epoch_100.png
generated_digits_plot.png  information_eq.png  kl_1.png  kl.png
'linear_autoencoder_pca_plot (1).png'  linear_autoencoder_pca_plot.png
my_model_sparse.ckpt.data-00000-of-00001  my_model_sparse.ckpt.index  my_model_sparse.ckpt.meta
my_model_variational.ckpt.data-00000-of-00001  my_model_variational.ckpt.index  my_model_variational.ckpt.meta
'police (1).PNG'  police.PNG  'reconstruction_plot (1).png'  reconstruction_plot.png
'relation (1).png'  relation.png  sample_data
'SparseAE_Structure (1).png'  SparseAE_Structure.png
'sparseCoding (1).png'  sparseCoding.png  'sparseCoding_eq (1).png'  sparseCoding_eq.png
'sparse_eq (1).png'  sparse_eq.png  'sparse_obj (1).png'  sparse_obj.png
'sparsity_loss_plot (1).png'  sparsity_loss_plot.png  'spCoding_eq (1).png'  spCoding_eq.png
'stackedConvAE (1).png'  stackedConvAE.png  'vae_eq (1).PNG'  vae_eq.PNG
uploaded = files.upload()
Saving gan.PNG to gan (1).PNG
Saving gan_eq.PNG to gan_eq (1).PNG
Saving police.PNG to police (2).PNG
def plot_image(image, shape=[28, 28]):
    plt.imshow(image.reshape(shape), cmap="Greys", interpolation="nearest")
    plt.axis("off")

def plot_multiple_images(images, n_rows, n_cols, pad=2):
    images = images - images.min()  # shift the minimum to 0 so the padding shows up white
    w, h = images.shape[1:]
    image = np.zeros(((w+pad)*n_rows+pad, (h+pad)*n_cols+pad))
    for y in range(n_rows):
        for x in range(n_cols):
            image[(y*(h+pad)+pad):(y*(h+pad)+pad+h), (x*(w+pad)+pad):(x*(w+pad)+pad+w)] = images[y*n_cols+x]
    plt.imshow(image, cmap="Greys", interpolation="nearest")
    plt.axis("off")
# Support Python 2 and Python 3
from __future__ import division, print_function, unicode_literals

# Common imports
import numpy as np
import os
import sys
import tensorflow as tf

# Seed the PRNGs to make the output reproducible
def reset_graph(seed=42):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)
# Matplotlib setup
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12

# Korean font for plot labels
plt.rcParams['font.family'] = 'NanumBarunGothic'
plt.rcParams['axes.unicode_minus'] = False
# Folder where the figures will be saved
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "autoencoders"

def save_fig(fig_id, tight_layout=True):
    path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format='png', dpi=300)
Chapter 15 – Autoencoders
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0
X_test = X_test.astype(np.float32).reshape(-1, 28*28) / 255.0
y_train = y_train.astype(np.int32)
y_test = y_test.astype(np.int32)
X_valid, X_train = X_train[:5000], X_train[5000:]
y_valid, y_train = y_train[:5000], y_train[5000:]
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz 11493376/11490434 [==============================] - 0s 0us/step
def shuffle_batch(X, y, batch_size):
    rnd_idx = np.random.permutation(len(X))
    n_batches = len(X) // batch_size
    for batch_idx in np.array_split(rnd_idx, n_batches):
        X_batch, y_batch = X[batch_idx], y[batch_idx]
        yield X_batch, y_batch
def show_reconstructed_digits(X, outputs, model_path = None, n_test_digits = 2):
    with tf.Session() as sess:
        if model_path:
            saver.restore(sess, model_path)
        # X_test = mnist.test.images[:n_test_digits]
        outputs_val = outputs.eval(feed_dict={X: X_test[:n_test_digits]})

    fig = plt.figure(figsize=(8, 3 * n_test_digits))
    for digit_index in range(n_test_digits):
        plt.subplot(n_test_digits, 2, digit_index * 2 + 1)
        plot_image(X_test[digit_index])
        plt.subplot(n_test_digits, 2, digit_index * 2 + 2)
        plot_image(outputs_val[digit_index])
from IPython.display import Image
Source: Machine Learning (Il-Seok Oh, Hanbit Academy)

Sparse coding represents a signal (an image) $x$ as a linear combination of the basis vectors that make up a dictionary $D$.
Image('sparseCoding_eq.png')
Image('sparseCoding.png')
How sparse coding differs from other transform techniques
Image('spCoding_eq.png')
Image('ae_1.png')
Image('ae_2.png')
Image('SparseAE_Structure.png')
An autoencoder should find the regular structure or hidden patterns in the data so that it can represent the data efficiently.
Among the various constraints that push a model toward extracting good features, one approach is the sparse autoencoder.
Image('sparse_eq.png')
=> This forces the network to represent each input as a combination of a small number of active neurons, so each neuron in the coding layer ends up representing a useful feature.

For example, if a neuron's mean activation is 0.3 and the target sparsity is 0.1, the neuron must be regularized to activate less. A simple approach is to add the squared error $(0.3 - 0.1)^2$ to the cost function. In practice, a better approach is to use the Kullback–Leibler divergence, which has much stronger gradients than the mean squared error.

Given two discrete probability distributions $P$ and $Q$, the divergence $D_{\mathrm{KL}}(P\|Q)$ between them can be computed with Equation 15-1 below.

Equation 15-1: Kullback–Leibler divergence

$ D_{\mathrm{KL}}(P\|Q) = \sum\limits_{i} P(i) \log \dfrac{P(i)}{Q(i)} $

Equation 15-2: KL divergence between the target sparsity $p$ and the actual sparsity $q$

$ D_{\mathrm{KL}}(p\|q) = p \, \log \dfrac{p}{q} + (1-p) \log \dfrac{1-p}{1-q} $
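Plugging the earlier example (mean activation $q = 0.3$, target $p = 0.1$) into Equation 15-2 shows how much harder the KL penalty bites than the squared error:

$ D_{\mathrm{KL}}(0.1\|0.3) = 0.1 \log \dfrac{0.1}{0.3} + 0.9 \log \dfrac{0.9}{0.7} \approx 0.116 $

compared with a squared error of only $(0.1 - 0.3)^2 = 0.04$.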
Image('sparse_obj.png')
There are several ways to implement the constraint that the representation has only a few nonzero elements.

In information theory, the information content of a message is measured by its probability.

Information content (self-information)

For a random variable $x$:
Image('information_eq.png')
Image('entropy_eq.png')
Image('entropy_ex.png')
=> A die is more disordered and has greater uncertainty than yut sticks => its entropy is higher.

The entropy above measures the disorder of a single probability distribution.

Measuring the entropy across two probability distributions is called the cross entropy; the two distributions must be defined over the same random variable.

Cross entropy between two probability distributions P and Q:
Image('cross.png')
Image('cross_kl.png')
Image('kl.png')
Image('relation.png')
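As a quick numeric sanity check of the relation these images illustrate (cross entropy = entropy + KL divergence), here is a small sketch with two made-up distributions P and Q over the same random variable:

# verify H(P, Q) = H(P) + D_KL(P || Q) for two toy distributions
import numpy as np
P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.2, 0.5, 0.3])
H_P = -np.sum(P * np.log(P))       # entropy of P
H_PQ = -np.sum(P * np.log(Q))      # cross entropy between P and Q
D_KL = np.sum(P * np.log(P / Q))   # KL divergence of P from Q
print(H_PQ, H_P + D_KL)            # the two values match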
from matplotlib import font_manager, rc
#font_name = font_manager.FontProperties(fname="c:/Windows/Fonts/malgun.ttf").get_name()
#rc('font', family=font_name)
p = 0.1
q = np.linspace(0.001, 0.999, 500)  # 500 evenly spaced values between 0.001 and 0.999
kl_div = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
mse = (p - q)**2
plt.plot([p, p], [0, 0.3], "k:")  # vertical dotted line at the target sparsity p
plt.text(0.05, 0.32, "Objective", fontsize=14)
plt.plot(q, kl_div, "b-", label="KL-divergence")  # KL penalty as a function of the actual sparsity q
plt.plot(q, mse, "r--", label="MSE")  # MSE penalty for comparison
plt.legend(loc="upper left")
plt.xlabel("real sparsity")
plt.ylabel("Cost", rotation=0)
plt.axis([0, 1, 0, 0.95])  # x range [0, 1], y range [0, 0.95]
#save_fig("sparsity_loss_plot")
[0, 1, 0, 0.95]
/usr/local/lib/python3.6/dist-packages/matplotlib/font_manager.py:1241: UserWarning: findfont: Font family ['NanumBarunGothic'] not found. Falling back to DejaVu Sans.
  (prop.get_family(), self.defaultFamily[fontext]))
reset_graph()

n_inputs = 28 * 28
n_hidden1 = 1000  # sparse coding units
n_outputs = n_inputs

def kl_divergence(p, q):
    # Kullback–Leibler divergence
    return p * tf.log(p / q) + (1 - p) * tf.log((1 - p) / (1 - q))

learning_rate = 0.01
sparsity_target = 0.1
sparsity_weight = 0.2

X = tf.placeholder(tf.float32, shape=[None, n_inputs])
hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.sigmoid)
outputs = tf.layers.dense(hidden1, n_outputs)

hidden1_mean = tf.reduce_mean(hidden1, axis=0)  # batch mean activation of each coding unit
sparsity_loss = tf.reduce_sum(kl_divergence(sparsity_target, hidden1_mean))
reconstruction_loss = tf.reduce_mean(tf.square(outputs - X))  # MSE
loss = reconstruction_loss + sparsity_weight * sparsity_loss

optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
WARNING:tensorflow:From <ipython-input-34-4d447f0d11fd>:11: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
init = tf.global_variables_initializer()
saver = tf.train.Saver()
import time

n_epochs = 100
batch_size = 1000

start = time.time()
with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        n_batches = len(X_train) // batch_size
        for iteration in range(n_batches):
            print("\r{}%".format(100 * iteration // n_batches), end="")
            sys.stdout.flush()
            # note: calling next() on a fresh generator draws a single random
            # batch (and re-permutes the whole training set) on every iteration
            X_batch, y_batch = next(shuffle_batch(X_train, y_train, batch_size))
            sess.run(training_op, feed_dict={X: X_batch})
        reconstruction_loss_val, sparsity_loss_val, loss_val = sess.run(
            [reconstruction_loss, sparsity_loss, loss], feed_dict={X: X_batch})
        print("\r{}".format(epoch), "Train MSE:", reconstruction_loss_val,
              "\tSparsity loss:", sparsity_loss_val, "\tTotal loss:", loss_val)
    saver.save(sess, "./my_model_sparse.ckpt")
print("time :", time.time() - start)
0 Train MSE: 0.13764465 	Sparsity loss: 0.90867686 	Total loss: 0.31938004
1 Train MSE: 0.059780527 	Sparsity loss: 0.02385321 	Total loss: 0.06455117
2 Train MSE: 0.05259748 	Sparsity loss: 0.02887623 	Total loss: 0.05837273
...
98 Train MSE: 0.02087487 	Sparsity loss: 0.11514209 	Total loss: 0.043903288
99 Train MSE: 0.014411595 	Sparsity loss: 0.28764212 	Total loss: 0.07194002
time : 93.06000447273254
show_reconstructed_digits(X, outputs, "./my_model_sparse.ckpt")
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from ./my_model_sparse.ckpt
=================================================
Since the coding layer must output values between 0 and 1, we use the sigmoid activation function:

hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.sigmoid)

One trick that can speed up training is to use a reconstruction loss with larger gradients than the MSE; the cross entropy is usually a good choice. To use it, normalize the inputs to the range 0–1, apply the logistic (sigmoid) activation function in the output layer, and then apply the cross entropy:

logits = tf.layers.dense(hidden1, n_outputs)
outputs = tf.nn.sigmoid(logits)  # logits: the raw result of the forward pass, before the sigmoid is applied
xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=X, logits=logits)
reconstruction_loss = tf.reduce_mean(xentropy)  # computes the mean of the elements across dimensions of the tensor
=================================================================================================
reset_graph()

n_inputs = 28 * 28
n_hidden1 = 1000  # sparse coding units
n_outputs = n_inputs

def kl_divergence(p, q):
    # Kullback–Leibler divergence
    return p * tf.log(p / q) + (1 - p) * tf.log((1 - p) / (1 - q))

learning_rate = 0.01
sparsity_target = 0.1
sparsity_weight = 0.2

X = tf.placeholder(tf.float32, shape=[None, n_inputs])
hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.sigmoid)
logits = tf.layers.dense(hidden1, n_outputs)
outputs = tf.nn.sigmoid(logits)
# outputs = tf.layers.dense(hidden1, n_outputs)

hidden1_mean = tf.reduce_mean(hidden1, axis=0)  # batch mean activation
sparsity_loss = tf.reduce_sum(kl_divergence(sparsity_target, hidden1_mean))
xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=X, logits=logits)
reconstruction_loss = tf.reduce_sum(xentropy)
#reconstruction_loss = tf.reduce_mean(tf.square(outputs - X)) # MSE
loss = reconstruction_loss + sparsity_weight * sparsity_loss

optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)

init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 100
batch_size = 1000

start = time.time()
with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        n_batches = len(X_train) // batch_size
        for iteration in range(n_batches):
            print("\r{}%".format(100 * iteration // n_batches), end="")
            sys.stdout.flush()
            X_batch, y_batch = next(shuffle_batch(X_train, y_train, batch_size))
            sess.run(training_op, feed_dict={X: X_batch})
        reconstruction_loss_val, sparsity_loss_val, loss_val = sess.run(
            [reconstruction_loss, sparsity_loss, loss], feed_dict={X: X_batch})
        # note: the value printed as "Train MSE" below is now the summed
        # sigmoid cross entropy, not the mean squared error
        print("\r{}".format(epoch), "Train MSE:", reconstruction_loss_val,
              "\tSparsity loss:", sparsity_loss_val, "\tTotal loss:", loss_val)
    saver.save(sess, "./my_model_sparse.ckpt")
print("time :", time.time() - start)
0 Train MSE: 135619.55 	Sparsity loss: 291.66095 	Total loss: 135677.88
1 Train MSE: 96593.695 	Sparsity loss: 287.55627 	Total loss: 96651.2
2 Train MSE: 76200.64 	Sparsity loss: 297.12433 	Total loss: 76260.06
...
98 Train MSE: 48687.387 	Sparsity loss: 176.1325 	Total loss: 48722.613
99 Train MSE: 49001.59 	Sparsity loss: 176.56206 	Total loss: 49036.902
time : 98.44356489181519
Image('vae_concept.PNG')
A person recognizes cat photos as cats even though every cat looks different.

=> We reach the conclusion "cat" from a cat's abstract features (fur color, eye shape, number of teeth, and so on). => A variational autoencoder likewise produces such abstract features, the latent variable $z$, and uses them to reconstruct the input $x$.

Properties

=> These two properties make it resemble an RBM, but it is easier to train and its sampling process is much faster (with an RBM you must wait for the network to settle into "thermal equilibrium" before you can draw a new sample).

During training, the cost function gradually pushes the codings toward a roughly (hyper)spherical region of coding space (the latent-variable space) that looks like a cloud of Gaussian samples. As a result, new samples become very easy to generate once training is done.

Cost function

Since the latent variable $z$ can take innumerably many values, we use variational inference, a method that approximates an intractable probability distribution with a tractable one.
Image('15-11.png')
Image('vae_eq.PNG')
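For reference, the VAE objective in the equation image above corresponds to the evidence lower bound (ELBO):

$ \log p(x) \geq \mathbb{E}_{q(z|x)}\left[\log p(x|z)\right] - D_{\mathrm{KL}}\left(q(z|x)\|p(z)\right) $

Maximizing the right-hand side is equivalent to minimizing the reconstruction loss plus the latent (KL) loss used in the code below.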
reset_graph()

from functools import partial

n_inputs = 28 * 28
n_hidden1 = 500
n_hidden2 = 500
n_hidden3 = 20  # coding units
n_hidden4 = n_hidden2
n_hidden5 = n_hidden1
n_outputs = n_inputs
learning_rate = 0.001

initializer = tf.variance_scaling_initializer()

my_dense_layer = partial(
    tf.layers.dense,
    activation=tf.nn.elu,
    kernel_initializer=initializer)

X = tf.placeholder(tf.float32, [None, n_inputs])
hidden1 = my_dense_layer(X, n_hidden1)
hidden2 = my_dense_layer(hidden1, n_hidden2)
hidden3_mean = my_dense_layer(hidden2, n_hidden3, activation=None)
hidden3_sigma = my_dense_layer(hidden2, n_hidden3, activation=None)
noise = tf.random_normal(tf.shape(hidden3_sigma), dtype=tf.float32)
hidden3 = hidden3_mean + hidden3_sigma * noise  # the reparameterization trick
hidden4 = my_dense_layer(hidden3, n_hidden4)
hidden5 = my_dense_layer(hidden4, n_hidden5)
logits = my_dense_layer(hidden5, n_outputs, activation=None)
outputs = tf.sigmoid(logits)

xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=X, logits=logits)
reconstruction_loss = tf.reduce_sum(xentropy)
For the latent_loss:

A commonly used variant is to train the encoder to output $\gamma = \log\left(\sigma^2\right)$ rather than $\sigma$. Then $\sigma$ can easily be recovered as $ \sigma = \exp\left(\dfrac{\gamma}{2}\right) $.

eps = 1e-10  # safety term to avoid log(0), which would return NaN
latent_loss = 0.5 * tf.reduce_sum(
    tf.square(hidden3_sigma) + tf.square(hidden3_mean)
    - 1 - tf.log(eps + tf.square(hidden3_sigma)))
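A minimal sketch of the $\gamma$ variant described above (the names hidden3_gamma, hidden3_alt, and latent_loss_alt are hypothetical and not part of this notebook's graph):

# log-variance parameterization: the encoder outputs gamma = log(sigma^2)
# instead of sigma; no eps term is needed because tf.exp never returns 0
hidden3_gamma = my_dense_layer(hidden2, n_hidden3, activation=None)
hidden3_alt = hidden3_mean + tf.exp(hidden3_gamma / 2) * noise
latent_loss_alt = 0.5 * tf.reduce_sum(
    tf.exp(hidden3_gamma) + tf.square(hidden3_mean) - 1 - hidden3_gamma)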
loss = reconstruction_loss + latent_loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 50
batch_size = 150
with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        n_batches = len(X_train) // batch_size
        for iteration in range(n_batches):
            print("\r{}%".format(100 * iteration // n_batches), end="")
            sys.stdout.flush()
            X_batch, y_batch = next(shuffle_batch(X_train, y_train, batch_size))
            sess.run(training_op, feed_dict={X: X_batch})
        loss_val, reconstruction_loss_val, latent_loss_val = sess.run(
            [loss, reconstruction_loss, latent_loss], feed_dict={X: X_batch})
        print("\r{}".format(epoch), "Train total loss:", loss_val,
              "\tReconstruction loss:", reconstruction_loss_val, "\tLatent loss:", latent_loss_val)
    saver.save(sess, "./my_model_variational.ckpt")
0 Train total loss: 29572.203 	Reconstruction loss: 23151.008 	Latent loss: 6421.1963
1 Train total loss: 28620.953 	Reconstruction loss: 21450.049 	Latent loss: 7170.904
2 Train total loss: 27015.365 	Reconstruction loss: 22027.074 	Latent loss: 4988.2905
...
48 Train total loss: 21312.895 	Reconstruction loss: 17666.86 	Latent loss: 3646.0352
49 Train total loss: 21271.795 	Reconstruction loss: 17413.19 	Latent loss: 3858.6047
Image('rbm.png')
Image('bm.png')
Train the variational autoencoder, then sample random codings from a Gaussian distribution and decode them into images.
import numpy as np

n_digits = 60
n_epochs = 50
batch_size = 150

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        n_batches = len(X_train) // batch_size
        for iteration in range(n_batches):
            print("\r{}%".format(100 * iteration // n_batches), end="")
            sys.stdout.flush()
            X_batch, y_batch = next(shuffle_batch(X_train, y_train, batch_size))
            sess.run(training_op, feed_dict={X: X_batch})
        loss_val, reconstruction_loss_val, latent_loss_val = sess.run(
            [loss, reconstruction_loss, latent_loss], feed_dict={X: X_batch})  # not shown
        print("\r{}".format(epoch), "Train total loss:", loss_val,
              "\tReconstruction loss:", reconstruction_loss_val, "\tLatent loss:", latent_loss_val)  # not shown
    saver.save(sess, "./my_model_variational.ckpt")

    # sample random codings from a standard Gaussian and decode them
    codings_rnd = np.random.normal(size=[n_digits, n_hidden3])
    outputs_val = outputs.eval(feed_dict={hidden3: codings_rnd})
0 Train total loss: 32289.992 	Reconstruction loss: 25590.861 	Latent loss: 6699.13
1 Train total loss: 28199.309 	Reconstruction loss: 22141.533 	Latent loss: 6057.7764
2 Train total loss: 27895.44 	Reconstruction loss: 21641.785 	Latent loss: 6253.6543
...
48 Train total loss: 25199.232 	Reconstruction loss: 20346.18 	Latent loss: 4853.0527
49 Train total loss: 23106.63 	Reconstruction loss: 19004.512 	Latent loss: 4102.1196
plt.figure(figsize=(8,50))
for iteration in range(n_digits):
    plt.subplot(n_digits, 10, iteration + 1)
    plot_image(outputs_val[iteration])
[Pros and cons of VAEs]

Pros

Cons
======================================================================
Contractive autoencoder (CAE)

During training, the autoencoder is constrained so that the derivatives of the codings with respect to the inputs are small. In other words, two similar inputs must map to similar codings.

What the denoising autoencoder (DAE) and the CAE have in common is that the output of the hidden layer, i.e. the extracted feature vector, should stay as close to constant as possible even when the input changes slightly. They differ in how they achieve this: a DAE deliberately adds noise and then learns to recover the original noise-free pattern, whereas a CAE makes the derivative of the encoder function small.

When two feature vectors that are close in input space are mapped into feature space by the encoder, they end up even closer together. This can be seen as a contraction of the space, which is where the word "contractive" comes from.
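As a rough sketch of that penalty (not part of this notebook): for a single sigmoid hidden layer $h = \sigma(XW + b)$, the squared Frobenius norm of the encoder's Jacobian has a closed form, $\sum_j h_j^2(1-h_j)^2 \|W_{:,j}\|^2$, so a contractive term could be added to the reconstruction loss like this (the tensor name "dense/kernel:0" and the weight 0.1 are assumptions):

# contractive penalty for a one-layer sigmoid encoder h = sigmoid(X W + b):
# dh_j/dx_i = h_j (1 - h_j) W_ij, so ||J||_F^2 has a closed form
W = tf.get_default_graph().get_tensor_by_name("dense/kernel:0")  # assumed layer name
h = hidden1
contractive_penalty = tf.reduce_sum(
    tf.square(h * (1 - h)) * tf.reduce_sum(tf.square(W), axis=0))
loss_cae = reconstruction_loss + 0.1 * contractive_penalty  # 0.1: assumed weight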
Stacked convolutional autoencoders learn to extract visual features by reconstructing images processed through convolutional layers.

Image('stackedConvAE.png')

A generalized denoising autoencoder with the added ability to generate data (using a Markov chain).

Winner-take-all (WTA) autoencoders: during training, after computing the activations of every neuron in the coding layer, only the top k% of activations for each neuron over the training batch are kept and the rest are set to 0. This naturally leads to sparse codings. A similar WTA approach can be used to produce sparse convolutional autoencoders.
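A minimal sketch of that k-winners step (the value of k and all variable names here are hypothetical; for each coding unit, only its k largest activations within the batch survive):

# winner-take-all sparsity: per coding unit (column of hidden1), keep only
# the k largest activations in the batch and zero out the rest
k = 25                                            # assumed number of winners per unit
acts_t = tf.transpose(hidden1)                    # [n_units, batch]
top_k_vals, _ = tf.nn.top_k(acts_t, k=k)          # k largest activations per unit
thresholds = top_k_vals[:, -1:]                   # k-th largest value per unit
mask = tf.cast(acts_t >= thresholds, tf.float32)  # 1 for winners, 0 otherwise
hidden1_wta = tf.transpose(acts_t * mask)         # back to [batch, n_units]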
Generative Adversarial Network (GAN)

A GAN consists of two networks: a generator that produces fake data, and a discriminator that is trained to distinguish that fake data from real data. The generator learns to fool the discriminator, and the discriminator learns not to be fooled by the generator. This competition pushes the model toward very realistic fake data and stable codings. Adversarial training is a powerful idea that has attracted a great deal of attention; Yann LeCun even called it "the coolest idea" he had seen so far.
Image('police.PNG')
Ian Goodfellow compared a GAN to a game between the police and a counterfeiter.

All data handled by a GAN is a random variable with a probability distribution, so we need the notion of a probability distribution. In a quadratic equation, for instance, the unknown X is called a variable, and solving the equation pins X down to a specific number. A random variable, by contrast, yields a different value every time it is measured. But because it generates numbers that follow a specific probability distribution, knowing the probability distribution of a random variable amounts to understanding everything about that random variable, i.e. about the data.

For example, once you know the probability distribution you can immediately read off the data's expected value and variance, and thus analyze its statistical properties directly; and if you randomly generate data that follows the given distribution, that data will resemble the original data from which the distribution was derived. In other words, if an unsupervised learning algorithm such as a GAN can model the probability distribution of a dataset, it can generate infinitely many new samples that share exactly the same distribution as the original data.
Image('distribution.PNG')
In graphs (a)–(d) above, the distribution produced by the GAN becomes nearly identical to the distribution of the original data as training progresses. At that point the discriminator D (the blue dotted line) outputs a probability of 0.5 no matter what it classifies, so classification is no longer informative. Just as calling heads "real" and tails "fake" on a coin toss gives you a 0.5 chance of being right, the probability of correctly telling whether GAN-generated data is real or fake drops to 0.5 and the discriminator becomes meaningless. In other words, the generator G has reached the point where it can produce data almost indistinguishable from the real data.
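This 0.5 value also falls out of the math: for a fixed generator, the optimal discriminator is

$ D^*(x) = \dfrac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)} $

so once the generator's distribution $p_g$ matches $p_{\text{data}}$, $D^*(x) = 1/2$ everywhere.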
Image('gan.PNG')
The discriminator is trained first, then the generator, and the two steps alternate back and forth.

Through this process the discriminator learns to classify real data as real and fake data as fake. After training the discriminator, the generator must be trained in the direction that fools the trained discriminator: fake data from the generator is fed into the discriminator, and the generator is trained until it produces data realistic enough for the discriminator to classify it as real.

Image('gan_eq.PNG')

Minimax problem: in V(D, G), the discriminator D is trained to maximize V(D, G) while the generator G is trained to minimize it.

The discriminator D must tell real data from fake as well as possible; the generator G must fool D as well as possible (i.e., from the discriminator's point of view, look as real as possible).

=> Training D to maximize V(D, G) is the process of teaching the discriminator to classify real data as real and fake data as fake.
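For reference, the value function in the equation image above is presumably the standard GAN objective:

$ \min\limits_G \max\limits_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right] $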
import os
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
from keras.layers import Input
from keras.models import Model, Sequential
from keras.layers.core import Dense, Dropout
from keras.layers.advanced_activations import LeakyReLU
from keras.datasets import fashion_mnist
from keras.optimizers import Adam
from keras import initializers
Using TensorFlow backend.
# set a seed so that the experiment is reproducible
np.random.seed(10)

# dimension of our random noise vector
random_dim = 100

def load_mnist_data():
    # load the data
    (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
    # normalize the data to the range -1 ~ 1
    x_train = (x_train.astype(np.float32) - 127.5)/127.5
    # reshape x_train from (60000, 28, 28) to (60000, 784),
    # so each row has 784 columns
    x_train = x_train.reshape(60000, 784)
    return (x_train, y_train, x_test, y_test)
# use the Adam optimizer
def get_optimizer():
    return Adam(lr=0.0002, beta_1=0.5)

# build the generator
def get_generator(optimizer):
    generator = Sequential()
    generator.add(Dense(256, input_dim=random_dim, kernel_initializer=initializers.RandomNormal(stddev=0.02)))
    generator.add(LeakyReLU(0.2))
    generator.add(Dense(512))
    generator.add(LeakyReLU(0.2))
    generator.add(Dense(1024))
    generator.add(LeakyReLU(0.2))
    generator.add(Dense(784, activation='tanh'))
    generator.compile(loss='binary_crossentropy', optimizer=optimizer)
    return generator
# build the discriminator
def get_discriminator(optimizer):
    discriminator = Sequential()
    discriminator.add(Dense(1024, input_dim=784, kernel_initializer=initializers.RandomNormal(stddev=0.02)))
    discriminator.add(LeakyReLU(0.2))
    discriminator.add(Dropout(0.3))
    discriminator.add(Dense(512))
    discriminator.add(LeakyReLU(0.2))
    discriminator.add(Dropout(0.3))
    discriminator.add(Dense(256))
    discriminator.add(LeakyReLU(0.2))
    discriminator.add(Dropout(0.3))
    discriminator.add(Dense(1, activation='sigmoid'))
    discriminator.compile(loss='binary_crossentropy', optimizer=optimizer)
    return discriminator
def get_gan_network(discriminator, random_dim, generator, optimizer):
    # freeze the discriminator: inside the combined model, only the
    # generator's weights should be updated
    discriminator.trainable = False
    # the GAN input (noise) is the 100-dimensional vector defined above
    gan_input = Input(shape=(random_dim,))
    # the generator's output is an image
    x = generator(gan_input)
    # the discriminator's output is the probability that the image is real
    gan_output = discriminator(x)
    gan = Model(inputs=gan_input, outputs=gan_output)
    gan.compile(loss='binary_crossentropy', optimizer=optimizer)
    return gan
# display a grid of generated Fashion-MNIST images
def plot_generated_images(epoch, generator, examples=100, dim=(10, 10), figsize=(10, 10)):
    noise = np.random.normal(0, 1, size=[examples, random_dim])
    generated_images = generator.predict(noise)
    generated_images = generated_images.reshape(examples, 28, 28)
    print("epoch: ", epoch)
    plt.figure(figsize=figsize)
    for i in range(generated_images.shape[0]):
        plt.subplot(dim[0], dim[1], i+1)
        plt.imshow(generated_images[i], interpolation='nearest', cmap='gray_r')
        plt.axis('off')
    plt.tight_layout()
    plt.savefig('gan_generated_image_epoch_%d.png' % epoch)
def train(epochs=1, batch_size=128):
    # fetch the train and test data
    x_train, y_train, x_test, y_test = load_mnist_data()
    # split the training data into batches of size batch_size
    batch_count = x_train.shape[0] // batch_size

    # build the GAN network
    adam = get_optimizer()
    generator = get_generator(adam)
    discriminator = get_discriminator(adam)
    gan = get_gan_network(discriminator, random_dim, generator, adam)

    for e in range(1, epochs+1):
        print('-'*15, 'Epoch %d' % e, '-'*15)
        for _ in tqdm(range(batch_count)):
            # get random noise and real images to use as input
            noise = np.random.normal(0, 1, size=[batch_size, random_dim])
            image_batch = x_train[np.random.randint(0, x_train.shape[0], size=batch_size)]

            # generate fake images
            generated_images = generator.predict(noise)
            X = np.concatenate([image_batch, generated_images])

            # labels: 1 for the real images, 0 for the generated ones
            y_dis = np.zeros(2*batch_size)
            y_dis[:batch_size] = 1

            # train the discriminator
            discriminator.trainable = True
            discriminator.train_on_batch(X, y_dis)

            # train the generator (the discriminator is frozen inside the combined model)
            noise = np.random.normal(0, 1, size=[batch_size, random_dim])
            y_gen = np.ones(batch_size)
            discriminator.trainable = False
            gan.train_on_batch(noise, y_gen)

        if e == 1 or e % 20 == 0:
            plot_generated_images(e, generator)

train(100, 128)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
--------------- Epoch 1 ---------------
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
100%|██████████| 468/468 [00:08<00:00, 56.09it/s]
epoch: 1
--------------- Epoch 2 ---------------
100%|██████████| 468/468 [00:05<00:00, 79.70it/s]
...
--------------- Epoch 20 ---------------
100%|██████████| 468/468 [00:05<00:00, 81.48it/s]
epoch: 20
...
--------------- Epoch 40 ---------------
100%|██████████| 468/468 [00:05<00:00, 80.79it/s]
epoch: 40
...
--------------- Epoch 60 ---------------
100%|██████████| 468/468 [00:05<00:00, 81.50it/s]
epoch: 60
...
--------------- Epoch 80 ---------------
100%|██████████| 468/468 [00:05<00:00, 80.28it/s]
epoch: 80
...
--------------- Epoch 85 ---------------
100%|██████████| 468/468 [00:05<00:00, 81.39it/s]
--------------- Epoch 86 ---------------
100%|██████████| 468/468 [00:05<00:00, 81.06it/s] 2%|▏ | 9/468 [00:00<00:05, 85.26it/s]
--------------- Epoch 87 ---------------
100%|██████████| 468/468 [00:05<00:00, 81.89it/s] 2%|▏ | 8/468 [00:00<00:06, 75.06it/s]
--------------- Epoch 88 ---------------
100%|██████████| 468/468 [00:05<00:00, 81.73it/s] 2%|▏ | 9/468 [00:00<00:05, 83.11it/s]
--------------- Epoch 89 ---------------
100%|██████████| 468/468 [00:05<00:00, 81.51it/s] 2%|▏ | 9/468 [00:00<00:05, 81.20it/s]
--------------- Epoch 90 ---------------
100%|██████████| 468/468 [00:05<00:00, 81.49it/s] 2%|▏ | 9/468 [00:00<00:05, 82.26it/s]
--------------- Epoch 91 ---------------
100%|██████████| 468/468 [00:05<00:00, 81.60it/s] 2%|▏ | 9/468 [00:00<00:05, 83.86it/s]
--------------- Epoch 92 ---------------
100%|██████████| 468/468 [00:05<00:00, 81.75it/s] 2%|▏ | 9/468 [00:00<00:05, 81.88it/s]
--------------- Epoch 93 ---------------
100%|██████████| 468/468 [00:05<00:00, 80.98it/s] 2%|▏ | 9/468 [00:00<00:05, 82.66it/s]
--------------- Epoch 94 ---------------
100%|██████████| 468/468 [00:05<00:00, 80.78it/s] 2%|▏ | 8/468 [00:00<00:05, 79.25it/s]
--------------- Epoch 95 ---------------
100%|██████████| 468/468 [00:05<00:00, 81.22it/s] 2%|▏ | 9/468 [00:00<00:05, 81.02it/s]
--------------- Epoch 96 ---------------
100%|██████████| 468/468 [00:05<00:00, 81.25it/s] 2%|▏ | 9/468 [00:00<00:05, 82.68it/s]
--------------- Epoch 97 ---------------
100%|██████████| 468/468 [00:05<00:00, 81.83it/s] 2%|▏ | 9/468 [00:00<00:05, 80.67it/s]
--------------- Epoch 98 ---------------
100%|██████████| 468/468 [00:05<00:00, 80.32it/s] 2%|▏ | 8/468 [00:00<00:06, 76.50it/s]
--------------- Epoch 99 ---------------
100%|██████████| 468/468 [00:05<00:00, 80.62it/s] 2%|▏ | 9/468 [00:00<00:05, 82.22it/s]
--------------- Epoch 100 ---------------
100%|██████████| 468/468 [00:05<00:00, 81.33it/s]
epoch: 100
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
X_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0
X_test = X_test.astype(np.float32).reshape(-1, 28*28) / 255.0
y_train = y_train.astype(np.int32)
y_test = y_test.astype(np.int32)
X_valid, X_train = X_train[:5000], X_train[5000:]
y_valid, y_train = y_train[:5000], y_train[5000:]
reset_graph()
from functools import partial
n_inputs = 28 * 28
n_hidden1 = 500
n_hidden2 = 500
n_hidden3 = 20  # coding units
n_hidden4 = n_hidden2
n_hidden5 = n_hidden1
n_outputs = n_inputs
learning_rate = 0.001
initializer = tf.variance_scaling_initializer()
my_dense_layer = partial(
    tf.layers.dense,
    activation=tf.nn.elu,
    kernel_initializer=initializer)
X = tf.placeholder(tf.float32, [None, n_inputs])
hidden1 = my_dense_layer(X, n_hidden1)
hidden2 = my_dense_layer(hidden1, n_hidden2)
hidden3_mean = my_dense_layer(hidden2, n_hidden3, activation=None)
hidden3_sigma = my_dense_layer(hidden2, n_hidden3, activation=None)
noise = tf.random_normal(tf.shape(hidden3_sigma), dtype=tf.float32)
hidden3 = hidden3_mean + hidden3_sigma * noise
hidden4 = my_dense_layer(hidden3, n_hidden4)
hidden5 = my_dense_layer(hidden4, n_hidden5)
logits = my_dense_layer(hidden5, n_outputs, activation=None)
outputs = tf.sigmoid(logits)
xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=X, logits=logits)
reconstruction_loss = tf.reduce_sum(xentropy)
eps = 1e-10  # safety term to avoid log(0), which would produce NaNs
latent_loss = 0.5 * tf.reduce_sum(
    tf.square(hidden3_sigma) + tf.square(hidden3_mean)
    - 1 - tf.log(eps + tf.square(hidden3_sigma)))
loss = reconstruction_loss + latent_loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
import numpy as np
n_digits = 60
n_epochs = 50
batch_size = 150
import sys

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        n_batches = len(X_train) // batch_size
        for iteration in range(n_batches):
            print("\r{}%".format(100 * iteration // n_batches), end="")
            sys.stdout.flush()
            X_batch, y_batch = next(shuffle_batch(X_train, y_train, batch_size))
            sess.run(training_op, feed_dict={X: X_batch})
        loss_val, reconstruction_loss_val, latent_loss_val = sess.run(
            [loss, reconstruction_loss, latent_loss], feed_dict={X: X_batch})  # not shown
        print("\r{}".format(epoch), "Train total loss:", loss_val,
              "\tReconstruction loss:", reconstruction_loss_val,
              "\tLatent loss:", latent_loss_val)  # not shown
        saver.save(sess, "./my_model_variational.ckpt")
    codings_rnd = np.random.normal(size=[n_digits, n_hidden3])
    outputs_val = outputs.eval(feed_dict={hidden3: codings_rnd})
0 Train total loss: 55285.066   Reconstruction loss: 47245.22   Latent loss: 8039.849
[... per-epoch losses for epochs 1-48 omitted ...]
49 Train total loss: 39201.594  Reconstruction loss: 36837.402  Latent loss: 2364.1926
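For reference, the latent loss computed in the graph above is the closed-form KL divergence between the learned Gaussian codings $\mathcal{N}(\mu, \sigma^2)$ and the standard normal prior, with the small $\epsilon$ inside the log for numerical stability:

$$\text{latent loss} = \frac{1}{2} \sum_{i} \left( \sigma_i^2 + \mu_i^2 - 1 - \log\left(\epsilon + \sigma_i^2\right) \right)$$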
plt.figure(figsize=(8, 50))
for iteration in range(n_digits):
    plt.subplot(n_digits, 10, iteration + 1)
    plot_image(outputs_val[iteration])
The main tasks autoencoders are used for are:
• Feature extraction
• Unsupervised pretraining
• Dimensionality reduction
• Generative models
• Anomaly detection
If you have plenty of unlabeled training data but only a few thousand labeled instances and you want to train a classifier, first train a deep autoencoder on the full dataset (labeled + unlabeled), then reuse its lower half (i.e., the layers up to and including the coding layer) and train the classifier using the labeled data. Since you have so little labeled data, it is usually best to freeze the reused layers while training the classifier, as in the sketch below.
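A minimal sketch of that reuse-and-freeze step, in the TF1 style used throughout this notebook; `pretrained_codings` stands in for the autoencoder's coding-layer tensor and `y` for an int32 labels placeholder, both illustrative assumptions rather than cells from the original run:

# Freeze the reused (pretrained) layers by stopping gradients through them,
# then train only the small classifier head on the labeled data.
frozen = tf.stop_gradient(pretrained_codings)
clf_hidden = tf.layers.dense(frozen, 100, activation=tf.nn.elu)
clf_logits = tf.layers.dense(clf_hidden, 10)  # 10 classes, no activation on logits
clf_xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=clf_logits)
clf_loss = tf.reduce_mean(clf_xentropy)
clf_train_op = tf.train.AdamOptimizer(0.001).minimize(clf_loss)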
The fact that an autoencoder perfectly reconstructs its inputs does not necessarily mean it is a good autoencoder; it may simply be an overcomplete autoencoder that learned to copy its inputs through the coding layer to the outputs. In fact, even with a single neuron in the coding layer, a very deep autoencoder could learn to map each training instance to a different coding (e.g., the first instance to 0.001, the second to 0.002, the third to 0.003, and so on) and memorize the exact training instance to reconstruct for each coding. It would then reconstruct the inputs perfectly without learning any genuinely useful pattern in the data. In practice such a mapping rarely happens, but it shows that perfect reconstruction does not guarantee the autoencoder learned anything useful. That said, if reconstruction is very bad, the autoencoder is almost certainly a poor one. One way to measure an autoencoder's performance is to compute the reconstruction loss (e.g., the MSE, the mean square of the outputs minus the inputs; see the one-line check below). Again, a high reconstruction loss is a good sign that the autoencoder is bad, but a low reconstruction loss does not guarantee it is good. You should evaluate the autoencoder according to how it will be used; for example, if you use it for unsupervised pretraining of a classifier, you must also evaluate the classifier's performance.
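That MSE check is a one-liner in the TF1 style of this notebook, assuming `outputs` and `X` are the autoencoder's output and input tensors and `sess` holds the trained weights:

# Mean squared reconstruction error: a high value signals a bad autoencoder,
# but a low value alone does not prove the codings are useful.
recon_mse = tf.reduce_mean(tf.square(outputs - X))
mse_val = sess.run(recon_mse, feed_dict={X: X_valid})
print("reconstruction MSE:", mse_val)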
An undercomplete autoencoder is one whose coding layer is smaller than the input and output layers; if the coding layer is larger, it is an overcomplete autoencoder. A severely undercomplete autoencoder is likely to fail to reconstruct its inputs, while the main risk with an overcomplete autoencoder is that it may just copy the inputs to the outputs without learning any useful features.
To tie the weights of an encoder layer to the corresponding decoder layer, simply use the transpose of the encoder weights as the decoder weights. This halves the number of parameters in the model, often speeding up convergence with less training data, and it also reduces the risk of overfitting the training set. A small sketch is shown after this paragraph.
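A minimal sketch of weight tying in the same TF1 style as the rest of this notebook; the shapes and names (n_inputs, n_hidden1, initializer, X) mirror variables used in cells above but the snippet is illustrative, not a cell from the original run:

# The decoder reuses the transpose of the encoder weight matrix, so only the
# encoder weights (plus both bias vectors) are trainable parameters.
weights1 = tf.Variable(initializer([n_inputs, n_hidden1]), name="weights1")
weights2 = tf.transpose(weights1, name="weights2")  # tied weights
biases1 = tf.Variable(tf.zeros(n_hidden1), name="biases1")
biases2 = tf.Variable(tf.zeros(n_inputs), name="biases2")
tied_hidden = tf.nn.elu(tf.matmul(X, weights1) + biases1)
tied_outputs = tf.matmul(tied_hidden, weights2) + biases2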
A common technique for visualizing the features learned by the lower layers of a stacked autoencoder is to reshape each neuron's weight vector to the size of an input image and plot it (e.g., for MNIST, reshape a weight vector of shape [784] to [28, 28]). One way to visualize the features learned by the upper layers is to plot the training instances that most activate each neuron.
A generative model is a model capable of randomly generating outputs that resemble the training instances. For example, a generative model trained successfully on the MNIST dataset can generate realistic images of random digits. The output distribution is typically similar to the training data; for example, since MNIST contains many images of each digit, such a generative model would output roughly the same number of images of each digit. Some generative models can be parametrized to generate only certain kinds of outputs. Examples of generative autoencoders include the variational autoencoder and the restricted Boltzmann machine (RBM).
• Use MNIST, or if you want more of a challenge, a larger image dataset such as CIFAR10. If you pick CIFAR10, you will need to write code that loads batches of images for training; if you want to skip that part, you can use the tools in TensorFlow's model repository.
• Split the dataset into a training set and a test set. Train a deep denoising autoencoder on the full training set.
%autosave 60
import tensorflow as tf
import numpy as np
from tensorflow.contrib.layers import batch_norm, dropout
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
# seed the pseudo-random number generators for reproducible output
def reset_graph(seed=42):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)
Autosaving every 60 seconds
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
import math
from sklearn.base import BaseEstimator, ClassifierMixin
import tensorflow as tf
# He normal initialization
def he_normal_initialisation(n_inputs, n_outputs):
    stddev = np.sqrt(2 / (n_inputs + n_outputs))
    # A truncated normal distribution bounds the weights, which speeds up training.
    return tf.truncated_normal((n_inputs, n_outputs), stddev=stddev)

# He uniform initialization
def he_uniform_initialisation(n_inputs, n_outputs):
    r = np.sqrt(6 / (n_inputs + n_outputs))
    return tf.random_uniform((n_inputs, n_outputs), -r, r)
# Create a function that returns the next batch
def create_next_batch_fn(data, sequence_lengths, targets, batch_size):
    assert len(data) == len(sequence_lengths) and len(data) == len(targets)
    current_batch = 0
    def next_batch():
        nonlocal current_batch
        i = current_batch
        current_batch = (current_batch + batch_size) % len(data)
        return data[i:i+batch_size], sequence_lengths[i:i+batch_size], targets[i:i+batch_size]
    return next_batch
import math
# input spatial size
input_spatial_size = 28
input_channels = 1
# batch size
batch_size = 150
learning_rate = 0.01
# input size = 784
n_input_neurons = input_spatial_size ** 2
n_hidden_neurons_layer1 = 120
n_hidden_neurons_layer2 = 75
n_hidden_neurons_layer3 = n_hidden_neurons_layer1
n_output_neurons = n_input_neurons
# regularization strength
l2_reg = 0.0001
X = tf.placeholder(tf.float32, shape=(None, n_input_neurons), name="input")
noisy_X = X + tf.random_normal(tf.shape(X), mean=0.1, stddev=0.1)
he_init = tf.contrib.layers.variance_scaling_initializer()
l2_regularizer = tf.contrib.layers.l2_regularizer(l2_reg)
from functools import partial
my_dense_layer = partial(tf.layers.dense,
                         activation=tf.nn.elu,
                         kernel_initializer=he_init)
hidden1 = my_dense_layer(noisy_X, n_hidden_neurons_layer1)
hidden2 = my_dense_layer(hidden1, n_hidden_neurons_layer2)
hidden3 = my_dense_layer(hidden2, n_hidden_neurons_layer3)
# The input values naturally lie between 0 and 1, so a sigmoid activation is a good fit for the outputs.
outputs = my_dense_layer(hidden3, n_output_neurons, activation=tf.nn.sigmoid)
with tf.name_scope("loss"):
reconstruction_loss = tf.reduce_mean(tf.square(outputs - X), name="reconstruction_loss")
loss = reconstruction_loss# + regularisation_loss
with tf.name_scope("training"):
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
interim_checkpoint_path = "./checkpoints/mnist_autoencoder_model.ckpt"
early_stopping_checkpoint_path = "./checkpoints/mnist_autoencoder_model_early_stopping.ckpt"
from datetime import datetime
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
log_dir = "{}/run-{}/".format(root_logdir, now)
loss_summary = tf.summary.scalar('loss', loss)
summary_op = tf.summary.merge([loss_summary])
file_writer = tf.summary.FileWriter(log_dir, tf.get_default_graph())
epochs = 100
n_batches = int(np.ceil(len(mnist.train.images) / batch_size))
early_stopping_check_frequency = n_batches // 4
early_stopping_check_limit = n_batches * 3
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
session = sess
sess.run(init)
best_loss = 1000000000.0
best_loss_step = 0
for epoch in range(epochs):
    print("epoch", epoch)
    for batch_index in range(n_batches):
        step = epoch * n_batches + batch_index
        # TODO: replace this with code that gets a batch from X and y.
        X_batch, y_batch = mnist.train.next_batch(batch_size)
        if batch_index % 10 == 0:
            summary_str = summary_op.eval(session=sess, feed_dict={X: X_batch})
            file_writer.add_summary(summary_str, step)
        t = sess.run(training_op, feed_dict={X: X_batch})
        if batch_index % 10 == 0:
            l = sess.run(loss, feed_dict={X: X_batch})
            print("loss:", l)
        # early stopping check
        if batch_index % early_stopping_check_frequency == 0:
            l = sess.run(loss, feed_dict={X: X_batch})
            if l < best_loss:
                saver.save(sess, early_stopping_checkpoint_path)
                best_loss = l
                best_loss_step = step
            elif step >= (best_loss_step + early_stopping_check_limit):
                print("Stopping early during epoch", epoch, "with best loss:", best_loss)
                break
    else:
        continue
    break
save_path = saver.save(sess, interim_checkpoint_path)
saver.restore(sess, early_stopping_checkpoint_path)
save_path = saver.save(sess, "./checkpoints/mnist_autoencoder_model_final.ckpt")
epoch 0 loss: 0.37833923 loss: 0.07920331 ... loss: 0.015206556
[per-batch losses for epochs 1-12 omitted; the loss falls steadily to ~0.008]
epoch 13 ... Stopping early during epoch 13 with best loss: 0.00794858
INFO:tensorflow:Restoring parameters from ./checkpoints/mnist_autoencoder_model_early_stopping.ckpt
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
# Display an image
def show_image(image):
    plt.imshow(image, cmap=matplotlib.cm.binary, interpolation="nearest")
    plt.axis("off")
    plt.show()
visualisation_batch = mnist.train.images[:3]
o = sess.run([outputs], feed_dict={X: visualisation_batch})
o = np.array(o).reshape((-1, n_input_neurons))
image_shape = (input_spatial_size, input_spatial_size)
for input_data, output_data in zip(visualisation_batch, o):
    input_image = input_data.reshape(image_shape)
    show_image(input_image)
    output_image = output_data.reshape(image_shape)
    show_image(output_image)
• Check that the images are reconstructed well, and visualize the low-level features.
import os
def get_layer_weights(layer):
    return tf.get_default_graph().get_tensor_by_name(os.path.split(layer.name)[0] + '/kernel:0')
with sess.as_default():
    # Go through each low-level neuron and turn its weights into an image.
    hidden1_weights = get_layer_weights(hidden1)
    for neuron_weights in np.transpose(hidden1_weights.eval()):
        show_image(neuron_weights.reshape(image_shape))
Interestingly, some neurons appear to hold essentially random weights, while others look as though they are detecting a specific shape or a regular digit pattern, such as the digit 3.
• Build a deep neural network for classification that reuses the lower layers of this autoencoder. Train it using only 10% of the training set. Can you get it to perform as well as the same classifier trained on the full training set?
clf_n_hidden_neurons_layer3 = 200
y = tf.placeholder(tf.int32, shape=(None), name="labels")
he_init = tf.contrib.layers.variance_scaling_initializer()
l2_regularizer = tf.contrib.layers.l2_regularizer(l2_reg)
from functools import partial
my_dense_layer = partial(tf.layers.dense,
activation=tf.nn.elu,
kernel_initializer=he_init)
clf_hidden3 = my_dense_layer(hidden2, clf_n_hidden_neurons_layer3)
n_classes = 10  # MNIST digit classes; logits must not go through an activation
logits = my_dense_layer(clf_hidden3, n_classes, activation=None)
with tf.name_scope("loss"):
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    clf_loss = tf.reduce_mean(cross_entropy, name="loss")
with tf.name_scope("training"):
    clf_optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    clf_training_op = clf_optimizer.minimize(clf_loss)
with tf.name_scope("eval"):
    k = 1
    correctness = tf.nn.in_top_k(logits, y, k)
    accuracy = tf.reduce_mean(tf.cast(correctness, tf.float32)) * 100.0
clf_init = tf.global_variables_initializer()
from datetime import datetime
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
log_dir = "{}/run-{}/".format(root_logdir, now)
epochs = 6
n_batches = int(np.ceil(len(mnist.train.images) / batch_size))
# early stopping check frequency
early_stopping_check_frequency = n_batches // 4
# early stopping check limit
early_stopping_check_limit = n_batches * 3
saver = tf.train.Saver()
early_stopping_checkpoint_path = "./checkpoints/mnist_model_early_stopping.ckpt"
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
sess.run(clf_init)
def create_next_batch_fn(images, labels, batch_size):
    assert len(images) == len(labels)
    current_batch = 0
    def next_batch():
        nonlocal current_batch
        i = current_batch
        current_batch = (current_batch + batch_size) % len(images)
        return images[i:i+batch_size], labels[i:i+batch_size]
    return next_batch
training_size = int(len(mnist.train.images) / 10)
print("training dataset size", training_size)
create_next_batch = create_next_batch_fn(
    mnist.train.images[:training_size],
    mnist.train.labels[:training_size],
    batch_size)
best_validation_acc = 0.0
best_validation_step = 0
for epoch in range(epochs):
    print("epoch", epoch)
    for batch_index in range(n_batches):
        step = epoch * n_batches + batch_index
        X_batch, y_batch = create_next_batch()
        t, l, ta = sess.run([clf_training_op, clf_loss, accuracy], feed_dict={X: X_batch, y: y_batch})
        if batch_index % 10 == 0:
            print("loss:", l, "training accuracy:", ta)
        # early stopping check
        if batch_index % early_stopping_check_frequency == 0:
            validation_acc = sess.run(accuracy, feed_dict={X: mnist.validation.images, y: mnist.validation.labels})
            print("validation accuracy", validation_acc)
            if validation_acc > best_validation_acc:
                saver.save(sess, early_stopping_checkpoint_path)
                best_validation_acc = validation_acc
                best_validation_step = step
            elif step >= (best_validation_step + early_stopping_check_limit):
                print("Stopping early during epoch", epoch)
                break
    else:
        continue
    break
saver.restore(sess, early_stopping_checkpoint_path)
test_acc = sess.run(accuracy, feed_dict={X: mnist.test.images, y: mnist.test.labels})
print(">>>>>>>>>> test dataset accuracy:", test_acc)
training dataset size 5500
epoch 0 loss: 7.1888614 training accuracy: 0.0 validation accuracy 6.46 ...
[per-batch losses, training accuracies, and validation accuracies for epochs 0-5 omitted; validation accuracy peaks at 94.74]
INFO:tensorflow:Restoring parameters from ./checkpoints/mnist_model_early_stopping.ckpt
>>>>>>>>>> test dataset accuracy: 94.26
The classifier that reused the autoencoder's lower layers and was trained on just 10% of the training data reached 94.26% test accuracy, 2.21 points below the same model trained from scratch on the full dataset (96.47% test accuracy). That is not bad at all, considering its training set was only 10% the size of the comparison model's.
• Build a stacked autoencoder with two hidden layers below the coding layer and train it on the image dataset you used in the previous exercise. The coding layer must contain 30 neurons and use the logistic activation function so that it outputs values between 0 and 1. After training, to produce the hash of an image, run the image through the autoencoder and round each value of the coding layer's output to the nearest integer (0 or 1).
• One neat trick proposed by Salakhutdinov and Hinton is to add Gaussian noise (with zero mean) to the inputs of the coding layer, during training only. To preserve a high signal-to-noise ratio, the autoencoder will learn to feed large values into the coding layer (to drown out the noise). As a result, the coding layer's logistic function will tend to saturate at 0 or 1, so rounding the codings to 0 or 1 won't distort them much, which makes the hashes more reliable. A minimal sketch of the hashing step follows this list.
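A minimal sketch of the hashing step itself, assuming `codings` is the sigmoid coding layer built in the cells below and `sess` is a session holding the trained weights:

# Run images through the encoder and round each coding value to 0 or 1,
# producing a 30-bit semantic hash per image.
codings_val = sess.run(codings, feed_dict={X: mnist.test.images[:5]})
hashes = np.round(codings_val).astype(np.int32)  # shape (5, 30), entries 0/1
print(hashes)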
import math
input_spatial_size = 28
input_channels = 1
batch_size = 80
learning_rate = 0.005
n_inputs = 28 * 28
n_hidden1 = 100
n_hidden2 = 70
n_hidden3 = 30  # coding units
n_hidden4 = n_hidden2
n_hidden5 = n_hidden1
n_outputs = n_inputs
l2_reg = 0.0001
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="input")
noisy_X = X + tf.random_normal(tf.shape(X), mean=0.1, stddev=0.1)
he_init = tf.contrib.layers.variance_scaling_initializer()
l2_regularizer = tf.contrib.layers.l2_regularizer(l2_reg)
from functools import partial
my_dense_layer = partial(tf.layers.dense,
                         activation=tf.nn.elu,
                         kernel_initializer=he_init)
hidden1 = my_dense_layer(noisy_X, n_hidden1)
hidden2 = my_dense_layer(hidden1, n_hidden2)
# Add noise to the coding layer's input so the model learns to feed it large
# values that "drown out" the noise.
coding_input = hidden2 + tf.random_normal(tf.shape(hidden2), mean=0.0, stddev=0.3)
codings = my_dense_layer(coding_input, n_hidden3, activation=tf.nn.sigmoid, kernel_initializer=tf.contrib.layers.xavier_initializer())
hidden4 = my_dense_layer(codings, n_hidden4)
hidden5 = my_dense_layer(hidden4, n_hidden5)
# The input values naturally lie between 0 and 1, so a sigmoid activation is a good fit for the outputs.
outputs = my_dense_layer(hidden5, n_outputs, activation=tf.nn.sigmoid, kernel_initializer=None)
with tf.name_scope("loss"):
reconstruction_loss = tf.reduce_mean(tf.square(outputs - X), name="reconstruction_loss")
loss = reconstruction_loss# + regularisation_loss
with tf.name_scope("training"):
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
interim_checkpoint_path = "./checkpoints/mnist_semantic_hashing_model.ckpt"
early_stopping_checkpoint_path = "./checkpoints/mnist_semantic_hashing_model_early_stopping.ckpt"
from datetime import datetime
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
log_dir = "{}/run-{}/".format(root_logdir, now)
loss_summary = tf.summary.scalar('loss', loss)
summary_op = tf.summary.merge([loss_summary])
file_writer = tf.summary.FileWriter(log_dir, tf.get_default_graph())
epochs = 100
n_batches = int(np.ceil(len(mnist.train.images) / batch_size))
# early stopping check frequency
early_stopping_check_frequency = n_batches // 4
# early stopping check limit
early_stopping_check_limit = n_batches * 3
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
session = sess
sess.run(init)
best_loss = 1000000000.0
best_loss_step = 0
for epoch in range(epochs):
    print("epoch", epoch)
    for batch_index in range(n_batches):
        step = epoch * n_batches + batch_index
        # TODO: replace this with code that gets a batch from X and y.
        X_batch, y_batch = mnist.train.next_batch(batch_size)
        if batch_index % 10 == 0:
            summary_str = summary_op.eval(session=sess, feed_dict={X: X_batch})
            file_writer.add_summary(summary_str, step)
        t = sess.run([training_op], feed_dict={X: X_batch})
        l = sess.run(loss, feed_dict={X: X_batch})
        if batch_index % 10 == 0:
            print("loss:", l)
        # early stopping check
        if batch_index % early_stopping_check_frequency == 0:
            if l < best_loss:
                saver.save(sess, early_stopping_checkpoint_path)
                best_loss = l
                best_loss_step = step
            elif step >= (best_loss_step + early_stopping_check_limit):
                print("Stopping early during epoch", epoch, "with best loss:", best_loss)
                break
    else:
        continue
    break
save_path = saver.save(sess, interim_checkpoint_path)
saver.restore(sess, early_stopping_checkpoint_path)
print("total training loss:", sess.run(loss, feed_dict={X: mnist.train.images}))
save_path = saver.save(sess, "./checkpoints/mnist_semantic_hashing_model_final.ckpt")
epoch 0 loss: 0.20398761 loss: 0.0708838 ... loss: 0.024386862
[per-batch losses for epochs 1-13 omitted; the loss falls steadily to ~0.010]
epoch 14 loss: 0.009865679 ... [remaining output truncated]
0.010002652 loss: 0.010419194 loss: 0.010328224 loss: 0.009781118 loss: 0.009956813 loss: 0.009865069 loss: 0.009574652 loss: 0.010332919 loss: 0.010138412 loss: 0.010826332 loss: 0.010031293 loss: 0.010287953 loss: 0.009140285 loss: 0.009521558 loss: 0.00918143 loss: 0.009018264 loss: 0.011257072 loss: 0.010645506 loss: 0.009413964 loss: 0.009680024 loss: 0.0102651 loss: 0.010298104 loss: 0.010558372 loss: 0.010410026 loss: 0.009844176 loss: 0.010147072 loss: 0.010781193 loss: 0.010045637 loss: 0.0108656 loss: 0.010072212 loss: 0.010765687 loss: 0.008447681 loss: 0.010443452 loss: 0.0098031955 loss: 0.010125086 loss: 0.01025436 loss: 0.010417419 loss: 0.011304691 loss: 0.009983812 loss: 0.009900513 loss: 0.009554831 loss: 0.010090484 loss: 0.010334088 loss: 0.010637082 loss: 0.010195751 loss: 0.009662527 loss: 0.009442285 loss: 0.009995036 loss: 0.010866946 loss: 0.010198947 loss: 0.0106290635 epoch 15 loss: 0.010152561 loss: 0.010157128 loss: 0.0115354 loss: 0.010556909 loss: 0.010461502 loss: 0.009990545 loss: 0.009566664 loss: 0.009848112 loss: 0.009805015 loss: 0.009139631 loss: 0.009768955 loss: 0.010775402 loss: 0.010464738 loss: 0.009251035 loss: 0.01017208 loss: 0.009420542 loss: 0.009820479 loss: 0.00981044 loss: 0.00937628 loss: 0.010072878 loss: 0.009152371 loss: 0.009909756 loss: 0.00984354 loss: 0.010673326 loss: 0.00990551 loss: 0.00993782 loss: 0.009897539 loss: 0.010547227 loss: 0.009570951 loss: 0.009979097 loss: 0.009531025 loss: 0.009849482 loss: 0.010415752 loss: 0.010479251 loss: 0.010575111 loss: 0.009915465 loss: 0.0102574015 loss: 0.009849289 loss: 0.008724481 loss: 0.00883091 loss: 0.009979644 loss: 0.009425349 loss: 0.010495141 loss: 0.010774763 loss: 0.010972577 loss: 0.009655561 loss: 0.009832535 loss: 0.009570184 loss: 0.01005698 loss: 0.009616186 loss: 0.011111624 loss: 0.011138457 loss: 0.009167759 loss: 0.009222815 loss: 0.009298692 loss: 0.009708643 loss: 0.009621044 loss: 0.009751302 loss: 0.010615755 loss: 0.009929426 loss: 0.008806011 loss: 0.00877188 loss: 0.009863395 loss: 0.010544793 loss: 0.009818079 loss: 0.011506056 loss: 0.01023103 loss: 0.010242042 loss: 0.009989407 epoch 16 loss: 0.010647762 loss: 0.009201192 loss: 0.009316859 loss: 0.009399303 loss: 0.0102157025 loss: 0.011151691 loss: 0.01008243 loss: 0.009923316 loss: 0.010656066 loss: 0.009865905 loss: 0.0103597045 loss: 0.009648504 loss: 0.009303178 loss: 0.010412516 loss: 0.010375655 loss: 0.010152042 loss: 0.009770993 loss: 0.010907874 loss: 0.009561488 loss: 0.009019187 loss: 0.009455226 loss: 0.0098455595 loss: 0.01158014 loss: 0.0100719035 loss: 0.009544587 loss: 0.010595742 loss: 0.010017289 loss: 0.009502221 loss: 0.009218959 loss: 0.010542845 loss: 0.008848218 loss: 0.010070333 loss: 0.009380975 loss: 0.009907382 loss: 0.010391366 loss: 0.0096727805 loss: 0.010427266 loss: 0.0100415135 loss: 0.010247157 loss: 0.009948467 loss: 0.009428262 loss: 0.010056345 loss: 0.010452663 loss: 0.010939043 loss: 0.00924669 loss: 0.010026745 loss: 0.010062096 loss: 0.01109509 loss: 0.009372253 loss: 0.009224272 loss: 0.010883604 loss: 0.011136336 loss: 0.009782977 loss: 0.008917739 loss: 0.010209996 loss: 0.009804467 loss: 0.009253861 loss: 0.01003161 loss: 0.009420148 loss: 0.00949273 loss: 0.010561677 loss: 0.009369085 loss: 0.009129116 loss: 0.010678703 loss: 0.009674228 loss: 0.008805875 loss: 0.009755804 loss: 0.008920079 loss: 0.008727222 epoch 17 loss: 0.008749018 loss: 0.0102071995 loss: 0.009895873 loss: 0.010481615 loss: 0.009503026 loss: 0.009042395 loss: 0.008807212 loss: 0.009959428 
loss: 0.009359758 loss: 0.0099416245 loss: 0.010034719 loss: 0.009540549 loss: 0.011269552 loss: 0.009381974 loss: 0.010243221 loss: 0.009062939 loss: 0.009842326 loss: 0.009297387 Stopping early during epoch 17 with best loss: 0.008390769 INFO:tensorflow:Restoring parameters from ./checkpoints/mnist_semantic_hashing_model_early_stopping.ckpt total training loss: 0.010291892
Adding noise to the inputs of the coding layer increases the model's loss, but it splits the coding values much more cleanly into 0s and 1s than before, which makes the hashes more useful.
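For reference, here is a minimal sketch of how such noise injection can be wired in TensorFlow 1.x; the layer sizes and variable names below are assumptions for illustration, not the notebook's exact graph:
import tensorflow as tf
n_inputs = 28 * 28   # MNIST
n_hidden = 100       # assumed size of the layer feeding the codings
n_codings = 30       # 30-bit hashes, matching the outputs below
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
training = tf.placeholder_with_default(False, shape=(), name="training")
hidden = tf.layers.dense(X, n_hidden, activation=tf.nn.relu)
# Gaussian noise is injected only during training; to overcome it, the
# network saturates the sigmoid, pushing the coding values toward 0 or 1
noise = tf.cond(training,
                lambda: tf.random_normal(tf.shape(hidden), stddev=4.0),
                lambda: tf.zeros_like(hidden))
codings = tf.layers.dense(hidden + noise, n_codings, activation=tf.nn.sigmoid)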
# Function to plot reconstructed digits
def show_reconstructed_digits(X, outputs, model_path=None, n_test_digits=2):
    with tf.Session() as sess:
        if model_path:
            saver.restore(sess, model_path)
        outputs_val = outputs.eval(feed_dict={X: X_test[:n_test_digits]})
    fig = plt.figure(figsize=(8, 3 * n_test_digits))
    for digit_index in range(n_test_digits):
        plt.subplot(n_test_digits, 2, digit_index * 2 + 1)
        plot_image(X_test[digit_index])
        plt.subplot(n_test_digits, 2, digit_index * 2 + 2)
        plot_image(outputs_val[digit_index])
# Function to convert a coding to a bit hash (round each value to 0 or 1)
def coding_to_bit_hash(coding):
    return np.array(list(map(lambda a: int(round(a)), coding)))
# Compute the codings of the first five training images and print their hashes
# (assumes `sess` is an open Session with the trained model restored)
images = mnist.train.images[:5]
codings_output = sess.run(codings, feed_dict={X: images})
for i in range(len(images)):
    show_image(images[i].reshape(image_shape))
    coding = codings_output[i]
    print(coding_to_bit_hash(coding))
[1 1 0 1 0 1 1 0 0 1 1 0 1 1 1 0 1 1 1 1 0 0 1 1 1 0 0 0 0 0]
[1 0 0 0 0 1 1 0 0 0 0 0 1 1 1 1 1 0 1 0 0 1 1 1 1 1 0 1 1 0]
[0 1 0 0 0 0 1 0 0 1 0 1 1 0 0 1 0 0 1 0 1 0 1 0 1 1 0 0 0 1]
[0 0 1 1 1 0 1 0 0 0 0 1 1 0 0 1 0 0 1 0 0 0 1 0 1 1 0 1 1 0]
[1 1 0 1 0 0 1 0 1 1 1 0 1 0 0 0 1 1 1 0 1 1 1 0 0 1 0 0 0 0]
# Function to convert a bit hash to an integer
def bit_hash_to_int(bit_hash):
    bit_str = "".join(map(lambda a: str(a), bit_hash))
    return int(bit_str, 2)
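For example, the bit hash [1, 1, 0, 1] reads as the binary number 1101 (a quick sanity check, not part of the original notebook):
print(bit_hash_to_int(np.array([1, 1, 0, 1])))  # prints 13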
codings_output = sess.run(codings, feed_dict={X: mnist.train.images})
# Function to build a dictionary mapping each hash (as an integer) to the
# labels of the images that produced it
def create_matching_dictionary(codings, labels):
    coding_int_to_labels = {}
    for i in range(len(codings)):
        label = labels[i]
        # was codings_output[i], which shadowed the `codings` argument
        coding_int = bit_hash_to_int(coding_to_bit_hash(codings[i]))
        if coding_int not in coding_int_to_labels:
            coding_int_to_labels[coding_int] = []
        coding_int_to_labels[coding_int].append(label)
    return coding_int_to_labels
coding_int_to_labels = create_matching_dictionary(codings_output, mnist.train.labels)
• Compute the hash of every image, and look at the images that share the same hash. Since MNIST and CIFAR10 are labeled, one objective way to measure the performance of an autoencoder for semantic hashing is to check whether images with the same hash belong to the same class. One way to do this is to compute the mean Gini purity (introduced in Chapter 6) of the sets of images with identical (or very similar) hashes; a sketch of this computation is given after the statistics below.
for hash_int, labels in coding_int_to_labels.items():
    if len(labels) > 1:
        print(labels)
[1, 1, 1, 1, 1] [6, 6] [1, 1] [8, 8] [1, 1, 1] [1, 1, 1, 1, 1, 1, 1, 1, 8, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] [1, 1, 1, 1, 1] [7, 7, 7, 7, 7] [3, 3] [1, 1, 1, 1] [9, 7, 7] [1, 1] [1, 1, 1, 1, 1] [9, 9] [6, 6] ...
[output truncated: thousands of hash buckets follow; the vast majority contain a single digit class, with only occasional mixed buckets such as [9, 7, 7] or [3, 8, 5]]
# Hashes shared by multiple images
non_unique_hashes = len(mnist.train.images) - len(coding_int_to_labels)
print("Number of images that share a hash:", non_unique_hashes)
print("Percentage of images that share a hash:", non_unique_hashes / len(mnist.train.images) * 100.0)
# Invalid hashes (buckets that mix more than one class)
n_invalid_hash_sets = 0
for hash_int, labels in coding_int_to_labels.items():
    if len(set(labels)) > 1:
        n_invalid_hash_sets += 1
print("Number of invalid hash sets:", n_invalid_hash_sets)
print("Percentage of invalid hash sets:", n_invalid_hash_sets / non_unique_hashes * 100.0)
Number of images that share a hash: 9990
Percentage of images that share a hash: 18.163636363636364
Number of invalid hash sets: 141
Percentage of invalid hash sets: 1.4114114114114114
We could change the code so that more images share a hash, but since the percentage of invalid hash buckets is as low as 1.4%, this is good enough.
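To follow up on the Gini purity idea from the exercise, here is a minimal sketch that computes the mean Gini purity over the hash buckets containing more than one image, reusing the coding_int_to_labels dictionary built above (gini_purity is a hypothetical helper, not part of the original notebook):
from collections import Counter
def gini_purity(labels):
    # Sum of squared class proportions, i.e. 1 minus the Gini impurity
    # from Chapter 6; 1.0 means the bucket contains a single class
    counts = np.array(list(Counter(labels).values()), dtype=np.float64)
    proportions = counts / counts.sum()
    return np.sum(proportions ** 2)
shared_buckets = [labels for labels in coding_int_to_labels.values() if len(labels) > 1]
print("Mean Gini purity:", np.mean([gini_purity(labels) for labels in shared_buckets]))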
• Another approach you can try with a labeled dataset is to train a convolutional neural network for classification (see Chapter 13), then use the layer just below the output layer to produce the hashes. See the 2015 paper by Jinma Guo and Jianmin Li. Check whether this approach performs better.
Instead of reconstructing the input, Guo and Li proposed a CNN-based hashing method: binarize the activations of a fully connected layer at a threshold of 0 and take the binary result as the hash code. This method achieved the best performance on CIFAR-10 and was competitive with the state of the art on MNIST.
They argue that jointly adapting the hashing function and the feature extraction is important, and report that learning to hash from labeled data with a CNN alone gave the best performance on CIFAR-10 (an improvement of 8% to 16%). They also show that the hashing learned by the CNN is better than KSH [24].
For their proposed method, called CNNBH, they split 5,000 samples from each dataset into 5 folds for cross-validation to find the best model, then trained it on all 5,000 samples. In short, using only 5,000 training samples, they proposed a new way to obtain a binary hash code for a given image by binarizing the output of a fully connected layer (they initially chose the first fully connected layer), and obtained the best results on CIFAR-10.
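As a rough illustration (not the paper's actual code), the core of this binarization step is tiny: each hash bit is simply whether the corresponding unit's activation is positive.
# Minimal sketch of the CNNBH binarization; `fc_activations` is a
# hypothetical [batch_size, n_units] array holding the output of a fully
# connected layer of a trained classification CNN
import numpy as np
def cnnbh_hash(fc_activations):
    # threshold the activations at 0 and use the binary pattern as the hash code
    return (fc_activations > 0).astype(np.int32)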