This notebook contains the example code from Deep Learning with Python, Second Edition (케라스 창시자에게 배우는 딥러닝 2판).
Instantiating a small convnet
from tensorflow import keras
from tensorflow.keras import layers
inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(filters=32, kernel_size=3, activation="relu")(inputs)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.Flatten()(x)
outputs = layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
Displaying the model's summary() output
model.summary()
Model: "model"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 input_1 (InputLayer)            [(None, 28, 28, 1)]       0
 conv2d (Conv2D)                 (None, 26, 26, 32)        320
 max_pooling2d (MaxPooling2D)    (None, 13, 13, 32)        0
 conv2d_1 (Conv2D)               (None, 11, 11, 64)        18496
 max_pooling2d_1 (MaxPooling2D)  (None, 5, 5, 64)          0
 conv2d_2 (Conv2D)               (None, 3, 3, 128)         73856
 flatten (Flatten)               (None, 1152)              0
 dense (Dense)                   (None, 10)                11530
=================================================================
Total params: 104202 (407.04 KB)
Trainable params: 104202 (407.04 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
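As a quick sanity check of the parameter counts above: a Conv2D layer has kernel_height x kernel_width x input_channels x filters weights plus one bias per filter, and a Dense layer has inputs x units weights plus one bias per unit. A minimal sketch reproducing the figures in the summary:
# Sanity check of the parameter counts reported by model.summary().
print(3 * 3 * 1 * 32 + 32)      # conv2d: 320
print(3 * 3 * 32 * 64 + 64)     # conv2d_1: 18496
print(3 * 3 * 64 * 128 + 128)   # conv2d_2: 73856
# Dense params = inputs * units + units; Flatten yields 3 * 3 * 128 = 1152 inputs
print(1152 * 10 + 10)           # dense: 11530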
Training the convnet on MNIST images
from tensorflow.keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype("float32") / 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype("float32") / 255
model.compile(optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["accuracy"])
model.fit(train_images, train_labels, epochs=5, batch_size=64)
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 1s 0us/step
Epoch 1/5
938/938 [==============================] - 19s 6ms/step - loss: 0.1624 - accuracy: 0.9495
Epoch 2/5
938/938 [==============================] - 5s 5ms/step - loss: 0.0451 - accuracy: 0.9857
Epoch 3/5
938/938 [==============================] - 4s 4ms/step - loss: 0.0300 - accuracy: 0.9902
Epoch 4/5
938/938 [==============================] - 4s 4ms/step - loss: 0.0227 - accuracy: 0.9930
Epoch 5/5
938/938 [==============================] - 4s 4ms/step - loss: 0.0179 - accuracy: 0.9944
<keras.src.callbacks.History at 0x7a8c1c191270>
Evaluating the convnet
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f"Test accuracy: {test_acc:.3f}")
313/313 [==============================] - 1s 4ms/step - loss: 0.0249 - accuracy: 0.9914
Test accuracy: 0.991
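As a small usage example (not part of the original notebook), the trained model can classify a single test image with predict; the index of the largest softmax probability is the predicted digit:
import numpy as np
# Predict the class of the first test image; the input batch has shape (1, 28, 28, 1).
probabilities = model.predict(test_images[:1])
print(probabilities.shape)           # (1, 10): one softmax distribution over the 10 digits
print(np.argmax(probabilities[0]))   # predicted digit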
An incorrectly structured convnet missing its max-pooling layers
inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(filters=32, kernel_size=3, activation="relu")(inputs)
x = layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x)
x = layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.Flatten()(x)
outputs = layers.Dense(10, activation="softmax")(x)
model_no_max_pool = keras.Model(inputs=inputs, outputs=outputs)
model_no_max_pool.summary()
Model: "model_1"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 input_2 (InputLayer)            [(None, 28, 28, 1)]       0
 conv2d_3 (Conv2D)               (None, 26, 26, 32)        320
 conv2d_4 (Conv2D)               (None, 24, 24, 64)        18496
 conv2d_5 (Conv2D)               (None, 22, 22, 128)       73856
 flatten_1 (Flatten)             (None, 61952)             0
 dense_1 (Dense)                 (None, 10)                619530
=================================================================
Total params: 712202 (2.72 MB)
Trainable params: 712202 (2.72 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
To download the dogs-vs-cats dataset from Kaggle, you need to sign up for Kaggle and then use an API key that you have created. If you have trouble with the download, you can download the dataset directly from Google Drive with the following command.
import gdown
gdown.download(id='18uC7WTuEXKJDDxbj-Jq6EjzpFrgE7IAd', output='dogs-vs-cats.zip')
Downloading...
From: https://drive.google.com/uc?id=18uC7WTuEXKJDDxbj-Jq6EjzpFrgE7IAd
To: /content/dogs-vs-cats.zip
100%|██████████| 852M/852M [00:16<00:00, 52.8MB/s]
'dogs-vs-cats.zip'
If you have stored your Kaggle key in Colab's Secrets tab, you can load it in the notebook as shown below. This assumes the key saved in Secrets is named 'kaggle'.
from google.colab import userdata
key = userdata.get('kaggle')
with open('kaggle.json', 'w') as f:
    f.write(f'{{"username":"haesunpark","key":"{key}"}}')
Alternatively, you can upload a 'kaggle.json' file stored on your local computer directly to Colab.
# Upload your kaggle.json file.
from google.colab import files
files.upload()
Once the 'kaggle.json' file has been created, move it to the appropriate location, change its permissions, and then download the data from Kaggle.
!mkdir ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
!kaggle competitions download -c dogs-vs-cats
!unzip -qq dogs-vs-cats.zip
!unzip -qq train.zip
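If the shell is not available, the two archives can also be extracted with Python's standard zipfile module (a sketch equivalent to the !unzip calls above):
import zipfile
# Equivalent to `!unzip -qq dogs-vs-cats.zip` followed by `!unzip -qq train.zip`.
with zipfile.ZipFile("dogs-vs-cats.zip") as f:
    f.extractall()
with zipfile.ZipFile("train.zip") as f:
    f.extractall()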
Copying images to training, validation, and test directories
import os, shutil, pathlib
original_dir = pathlib.Path("train")
new_base_dir = pathlib.Path("cats_vs_dogs_small")
def make_subset(subset_name, start_index, end_index):
    for category in ("cat", "dog"):
        dir = new_base_dir / subset_name / category
        os.makedirs(dir)
        fnames = [f"{category}.{i}.jpg" for i in range(start_index, end_index)]
        for fname in fnames:
            shutil.copyfile(src=original_dir / fname,
                            dst=dir / fname)
make_subset("train", start_index=0, end_index=1000)
make_subset("validation", start_index=1000, end_index=1500)
make_subset("test", start_index=1500, end_index=2500)
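An optional sanity check that the copies worked: each split should contain 2,000 / 1,000 / 2,000 images in total (cats plus dogs). A small sketch, assuming the directories created above:
# Count the copied files in each split.
for subset in ("train", "validation", "test"):
    n = sum(len(list((new_base_dir / subset / category).glob("*.jpg")))
            for category in ("cat", "dog"))
    print(subset, n)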
Instantiating a small convnet for dogs vs. cats classification
from tensorflow import keras
from tensorflow.keras import layers
inputs = keras.Input(shape=(180, 180, 3))
x = layers.Rescaling(1./255)(inputs)
x = layers.Conv2D(filters=32, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.Flatten()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.summary()
Model: "model_2"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 input_3 (InputLayer)            [(None, 180, 180, 3)]     0
 rescaling (Rescaling)           (None, 180, 180, 3)       0
 conv2d_6 (Conv2D)               (None, 178, 178, 32)      896
 max_pooling2d_2 (MaxPooling2D)  (None, 89, 89, 32)        0
 conv2d_7 (Conv2D)               (None, 87, 87, 64)        18496
 max_pooling2d_3 (MaxPooling2D)  (None, 43, 43, 64)        0
 conv2d_8 (Conv2D)               (None, 41, 41, 128)       73856
 max_pooling2d_4 (MaxPooling2D)  (None, 20, 20, 128)       0
 conv2d_9 (Conv2D)               (None, 18, 18, 256)       295168
 max_pooling2d_5 (MaxPooling2D)  (None, 9, 9, 256)         0
 conv2d_10 (Conv2D)              (None, 7, 7, 256)         590080
 flatten_2 (Flatten)             (None, 12544)             0
 dense_2 (Dense)                 (None, 1)                 12545
=================================================================
Total params: 991041 (3.78 MB)
Trainable params: 991041 (3.78 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Configuring the model for training
model.compile(loss="binary_crossentropy",
optimizer="rmsprop",
metrics=["accuracy"])
Using image_dataset_from_directory to read images
from tensorflow.keras.utils import image_dataset_from_directory
train_dataset = image_dataset_from_directory(
new_base_dir / "train",
image_size=(180, 180),
batch_size=32)
validation_dataset = image_dataset_from_directory(
new_base_dir / "validation",
image_size=(180, 180),
batch_size=32)
test_dataset = image_dataset_from_directory(
new_base_dir / "test",
image_size=(180, 180),
batch_size=32)
Found 2000 files belonging to 2 classes.
Found 1000 files belonging to 2 classes.
Found 2000 files belonging to 2 classes.
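image_dataset_from_directory infers the labels from the subdirectory names, in alphabetical order; the mapping can be checked through the dataset's class_names attribute:
# Label indices follow the alphabetical order of the class subdirectories.
print(train_dataset.class_names)  # ['cat', 'dog'] -> labels 0 and 1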
import numpy as np
import tensorflow as tf
random_numbers = np.random.normal(size=(1000, 16))
dataset = tf.data.Dataset.from_tensor_slices(random_numbers)
for i, element in enumerate(dataset):
    print(element.shape)
    if i >= 2:
        break
(16,)
(16,)
(16,)
batched_dataset = dataset.batch(32)
for i, element in enumerate(batched_dataset):
    print(element.shape)
    if i >= 2:
        break
(32, 16)
(32, 16)
(32, 16)
reshaped_dataset = dataset.map(lambda x: tf.reshape(x, (4, 4)))
for i, element in enumerate(reshaped_dataset):
    print(element.shape)
    if i >= 2:
        break
(4, 4)
(4, 4)
(4, 4)
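These Dataset methods are usually chained; a minimal sketch combining shuffle (not shown above), map, and batch on the same random-number dataset (pipeline is a hypothetical name):
# Shuffle, reshape each element to (4, 4), then batch: a typical tf.data pipeline.
pipeline = (dataset
            .shuffle(buffer_size=1000)
            .map(lambda x: tf.reshape(x, (4, 4)))
            .batch(32))
for element in pipeline.take(1):
    print(element.shape)  # (32, 4, 4)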
Checking the shapes of the data and labels yielded by the Dataset
for data_batch, labels_batch in train_dataset:
    print("Data batch shape:", data_batch.shape)
    print("Labels batch shape:", labels_batch.shape)
    break
Data batch shape: (32, 180, 180, 3)
Labels batch shape: (32,)
Training the model using a Dataset
callbacks = [
keras.callbacks.ModelCheckpoint(
filepath="convnet_from_scratch.h5",
save_best_only=True,
monitor="val_loss")
]
history = model.fit(
train_dataset,
epochs=30,
validation_data=validation_dataset,
callbacks=callbacks)
Epoch 1/30 63/63 [==============================] - 8s 112ms/step - loss: 0.6960 - accuracy: 0.5075 - val_loss: 0.6923 - val_accuracy: 0.4990 Epoch 2/30
/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py:3079: UserWarning: You are saving your model as an HDF5 file via `model.save()`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')`. saving_api.save_model(
63/63 [==============================] - 7s 108ms/step - loss: 0.6915 - accuracy: 0.5485 - val_loss: 0.6654 - val_accuracy: 0.5880 Epoch 3/30 63/63 [==============================] - 5s 66ms/step - loss: 0.6474 - accuracy: 0.6305 - val_loss: 0.6311 - val_accuracy: 0.6330 Epoch 4/30 63/63 [==============================] - 4s 57ms/step - loss: 0.6200 - accuracy: 0.6500 - val_loss: 0.5871 - val_accuracy: 0.6720 Epoch 5/30 63/63 [==============================] - 5s 73ms/step - loss: 0.5742 - accuracy: 0.7105 - val_loss: 0.6439 - val_accuracy: 0.6040 Epoch 6/30 63/63 [==============================] - 3s 52ms/step - loss: 0.5459 - accuracy: 0.7315 - val_loss: 0.6093 - val_accuracy: 0.6640 Epoch 7/30 63/63 [==============================] - 4s 55ms/step - loss: 0.5168 - accuracy: 0.7340 - val_loss: 0.6367 - val_accuracy: 0.6680 Epoch 8/30 63/63 [==============================] - 5s 77ms/step - loss: 0.4920 - accuracy: 0.7670 - val_loss: 0.5390 - val_accuracy: 0.7270 Epoch 9/30 63/63 [==============================] - 3s 52ms/step - loss: 0.4676 - accuracy: 0.7815 - val_loss: 0.6119 - val_accuracy: 0.7100 Epoch 10/30 63/63 [==============================] - 4s 56ms/step - loss: 0.4379 - accuracy: 0.7945 - val_loss: 0.5321 - val_accuracy: 0.7480 Epoch 11/30 63/63 [==============================] - 5s 80ms/step - loss: 0.3891 - accuracy: 0.8270 - val_loss: 0.7300 - val_accuracy: 0.7250 Epoch 12/30 63/63 [==============================] - 4s 56ms/step - loss: 0.3598 - accuracy: 0.8325 - val_loss: 0.5617 - val_accuracy: 0.7520 Epoch 13/30 63/63 [==============================] - 5s 76ms/step - loss: 0.3098 - accuracy: 0.8625 - val_loss: 0.5804 - val_accuracy: 0.7510 Epoch 14/30 63/63 [==============================] - 3s 51ms/step - loss: 0.2619 - accuracy: 0.8910 - val_loss: 0.7100 - val_accuracy: 0.7300 Epoch 15/30 63/63 [==============================] - 4s 54ms/step - loss: 0.2316 - accuracy: 0.8990 - val_loss: 0.7081 - val_accuracy: 0.7490 Epoch 16/30 63/63 [==============================] - 5s 71ms/step - loss: 0.1924 - accuracy: 0.9315 - val_loss: 0.6981 - val_accuracy: 0.7490 Epoch 17/30 63/63 [==============================] - 3s 51ms/step - loss: 0.1696 - accuracy: 0.9305 - val_loss: 0.8787 - val_accuracy: 0.7040 Epoch 18/30 63/63 [==============================] - 3s 51ms/step - loss: 0.1289 - accuracy: 0.9465 - val_loss: 1.1363 - val_accuracy: 0.7300 Epoch 19/30 63/63 [==============================] - 5s 79ms/step - loss: 0.1067 - accuracy: 0.9625 - val_loss: 0.9820 - val_accuracy: 0.7200 Epoch 20/30 63/63 [==============================] - 3s 52ms/step - loss: 0.0919 - accuracy: 0.9685 - val_loss: 1.1634 - val_accuracy: 0.7390 Epoch 21/30 63/63 [==============================] - 4s 53ms/step - loss: 0.0683 - accuracy: 0.9760 - val_loss: 1.1908 - val_accuracy: 0.7530 Epoch 22/30 63/63 [==============================] - 4s 59ms/step - loss: 0.0782 - accuracy: 0.9720 - val_loss: 1.2229 - val_accuracy: 0.7600 Epoch 23/30 63/63 [==============================] - 6s 87ms/step - loss: 0.0889 - accuracy: 0.9755 - val_loss: 1.3101 - val_accuracy: 0.7340 Epoch 24/30 63/63 [==============================] - 4s 52ms/step - loss: 0.0462 - accuracy: 0.9870 - val_loss: 1.2682 - val_accuracy: 0.7370 Epoch 25/30 63/63 [==============================] - 5s 74ms/step - loss: 0.0843 - accuracy: 0.9765 - val_loss: 1.2661 - val_accuracy: 0.7560 Epoch 26/30 63/63 [==============================] - 3s 51ms/step - loss: 0.0454 - accuracy: 0.9820 - val_loss: 1.5378 - val_accuracy: 0.7500 Epoch 27/30 63/63 
[==============================] - 4s 55ms/step - loss: 0.1287 - accuracy: 0.9685 - val_loss: 1.3562 - val_accuracy: 0.7540 Epoch 28/30 63/63 [==============================] - 5s 72ms/step - loss: 0.0463 - accuracy: 0.9855 - val_loss: 1.3917 - val_accuracy: 0.7520 Epoch 29/30 63/63 [==============================] - 3s 51ms/step - loss: 0.0632 - accuracy: 0.9830 - val_loss: 1.4469 - val_accuracy: 0.7510 Epoch 30/30 63/63 [==============================] - 3s 51ms/step - loss: 0.0340 - accuracy: 0.9885 - val_loss: 1.7205 - val_accuracy: 0.7430
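The UserWarning above points out that HDF5 (.h5) is a legacy format. A hypothetical variant of the checkpoint callback using the native Keras format only changes the file extension; it is shown here for reference, and the cells below keep the .h5 filenames from the original run.
# Hypothetical variant: identical checkpoint, saved in the native Keras format.
native_format_callbacks = [
    keras.callbacks.ModelCheckpoint(
        filepath="convnet_from_scratch.keras",   # .keras instead of .h5
        save_best_only=True,
        monitor="val_loss")
]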
Plotting the training and validation accuracy and loss
import matplotlib.pyplot as plt
accuracy = history.history["accuracy"]
val_accuracy = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(accuracy) + 1)
plt.plot(epochs, accuracy, "bo", label="Training accuracy")
plt.plot(epochs, val_accuracy, "b", label="Validation accuracy")
plt.title("Training and validation accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, loss, "bo", label="Training loss")
plt.plot(epochs, val_loss, "b", label="Validation loss")
plt.title("Training and validation loss")
plt.legend()
plt.show()
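The same plotting code is repeated for the later training runs; a small helper could wrap it (a sketch with the hypothetical name plot_history; the later cells keep the inline version):
def plot_history(history):
    # Plot training/validation accuracy and loss curves from a Keras History object.
    acc = history.history["accuracy"]
    val_acc = history.history["val_accuracy"]
    loss = history.history["loss"]
    val_loss = history.history["val_loss"]
    epochs = range(1, len(acc) + 1)
    plt.plot(epochs, acc, "bo", label="Training accuracy")
    plt.plot(epochs, val_acc, "b", label="Validation accuracy")
    plt.title("Training and validation accuracy")
    plt.legend()
    plt.figure()
    plt.plot(epochs, loss, "bo", label="Training loss")
    plt.plot(epochs, val_loss, "b", label="Validation loss")
    plt.title("Training and validation loss")
    plt.legend()
    plt.show()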
Evaluating the model on the test set
test_model = keras.models.load_model("convnet_from_scratch.h5")
test_loss, test_acc = test_model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc:.3f}")
63/63 [==============================] - 2s 30ms/step - loss: 0.5597 - accuracy: 0.7390
Test accuracy: 0.739
Defining a data augmentation stage to add to the convnet
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
layers.RandomZoom(0.2),
]
)
Displaying some randomly augmented training images
plt.figure(figsize=(10, 10))
for images, _ in train_dataset.take(1):
    for i in range(9):
        augmented_images = data_augmentation(images)
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(augmented_images[0].numpy().astype("uint8"))
        plt.axis("off")
Defining a new convnet that includes image augmentation and dropout
inputs = keras.Input(shape=(180, 180, 3))
x = data_augmentation(inputs)
x = layers.Rescaling(1./255)(x)
x = layers.Conv2D(filters=32, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.Flatten()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(loss="binary_crossentropy",
optimizer="rmsprop",
metrics=["accuracy"])
Training the regularized convnet
callbacks = [
keras.callbacks.ModelCheckpoint(
filepath="convnet_from_scratch_with_augmentation.h5",
save_best_only=True,
monitor="val_loss")
]
history = model.fit(
train_dataset,
epochs=100,
validation_data=validation_dataset,
callbacks=callbacks)
Epoch 1/100 63/63 [==============================] - 8s 77ms/step - loss: 0.7003 - accuracy: 0.4970 - val_loss: 0.6922 - val_accuracy: 0.5020 Epoch 2/100 63/63 [==============================] - 4s 55ms/step - loss: 0.6937 - accuracy: 0.5080 - val_loss: 0.6897 - val_accuracy: 0.5370 Epoch 3/100 63/63 [==============================] - 4s 54ms/step - loss: 0.6907 - accuracy: 0.5300 - val_loss: 0.6883 - val_accuracy: 0.5070 Epoch 4/100 63/63 [==============================] - 6s 88ms/step - loss: 0.6805 - accuracy: 0.5845 - val_loss: 0.6627 - val_accuracy: 0.6190 Epoch 5/100 63/63 [==============================] - 4s 54ms/step - loss: 0.6598 - accuracy: 0.6230 - val_loss: 0.6515 - val_accuracy: 0.6140 Epoch 6/100 63/63 [==============================] - 5s 68ms/step - loss: 0.6410 - accuracy: 0.6235 - val_loss: 0.6611 - val_accuracy: 0.5910 Epoch 7/100 63/63 [==============================] - 6s 87ms/step - loss: 0.6371 - accuracy: 0.6405 - val_loss: 0.6536 - val_accuracy: 0.6020 Epoch 8/100 63/63 [==============================] - 5s 69ms/step - loss: 0.6350 - accuracy: 0.6540 - val_loss: 0.6067 - val_accuracy: 0.6550 Epoch 9/100 63/63 [==============================] - 5s 80ms/step - loss: 0.6011 - accuracy: 0.6795 - val_loss: 0.6283 - val_accuracy: 0.6250 Epoch 10/100 63/63 [==============================] - 4s 61ms/step - loss: 0.5876 - accuracy: 0.6870 - val_loss: 0.6041 - val_accuracy: 0.6620 Epoch 11/100 63/63 [==============================] - 4s 58ms/step - loss: 0.5808 - accuracy: 0.7000 - val_loss: 0.6126 - val_accuracy: 0.6760 Epoch 12/100 63/63 [==============================] - 6s 87ms/step - loss: 0.5609 - accuracy: 0.7150 - val_loss: 0.6550 - val_accuracy: 0.6300 Epoch 13/100 63/63 [==============================] - 4s 54ms/step - loss: 0.5606 - accuracy: 0.7190 - val_loss: 0.7295 - val_accuracy: 0.6060 Epoch 14/100 63/63 [==============================] - 6s 93ms/step - loss: 0.5454 - accuracy: 0.7265 - val_loss: 0.5223 - val_accuracy: 0.7330 Epoch 15/100 63/63 [==============================] - 8s 113ms/step - loss: 0.5232 - accuracy: 0.7435 - val_loss: 0.5284 - val_accuracy: 0.7300 Epoch 16/100 63/63 [==============================] - 5s 80ms/step - loss: 0.5201 - accuracy: 0.7390 - val_loss: 0.5144 - val_accuracy: 0.7330 Epoch 17/100 63/63 [==============================] - 4s 55ms/step - loss: 0.5041 - accuracy: 0.7560 - val_loss: 0.5674 - val_accuracy: 0.7240 Epoch 18/100 63/63 [==============================] - 4s 57ms/step - loss: 0.5118 - accuracy: 0.7440 - val_loss: 0.5840 - val_accuracy: 0.6820 Epoch 19/100 63/63 [==============================] - 5s 71ms/step - loss: 0.4863 - accuracy: 0.7595 - val_loss: 0.5141 - val_accuracy: 0.7310 Epoch 20/100 63/63 [==============================] - 4s 59ms/step - loss: 0.4864 - accuracy: 0.7715 - val_loss: 0.5880 - val_accuracy: 0.7150 Epoch 21/100 63/63 [==============================] - 4s 58ms/step - loss: 0.4736 - accuracy: 0.7685 - val_loss: 0.4394 - val_accuracy: 0.7880 Epoch 22/100 63/63 [==============================] - 5s 69ms/step - loss: 0.4650 - accuracy: 0.7790 - val_loss: 0.9202 - val_accuracy: 0.6770 Epoch 23/100 63/63 [==============================] - 4s 55ms/step - loss: 0.4710 - accuracy: 0.7725 - val_loss: 0.4541 - val_accuracy: 0.7920 Epoch 24/100 63/63 [==============================] - 4s 53ms/step - loss: 0.4631 - accuracy: 0.7870 - val_loss: 0.4545 - val_accuracy: 0.7830 Epoch 25/100 63/63 [==============================] - 5s 78ms/step - loss: 0.4421 - accuracy: 0.7920 - val_loss: 0.4915 - 
val_accuracy: 0.7690 Epoch 26/100 63/63 [==============================] - 4s 55ms/step - loss: 0.4377 - accuracy: 0.8035 - val_loss: 0.4701 - val_accuracy: 0.7660 Epoch 27/100 63/63 [==============================] - 4s 58ms/step - loss: 0.4345 - accuracy: 0.7980 - val_loss: 0.4340 - val_accuracy: 0.8030 Epoch 28/100 63/63 [==============================] - 5s 71ms/step - loss: 0.4421 - accuracy: 0.7945 - val_loss: 0.4206 - val_accuracy: 0.8000 Epoch 29/100 63/63 [==============================] - 4s 59ms/step - loss: 0.4081 - accuracy: 0.8125 - val_loss: 0.5170 - val_accuracy: 0.7750 Epoch 30/100 63/63 [==============================] - 4s 55ms/step - loss: 0.3992 - accuracy: 0.8230 - val_loss: 0.4144 - val_accuracy: 0.8040 Epoch 31/100 63/63 [==============================] - 5s 81ms/step - loss: 0.4147 - accuracy: 0.8095 - val_loss: 0.5203 - val_accuracy: 0.7660 Epoch 32/100 63/63 [==============================] - 4s 54ms/step - loss: 0.3883 - accuracy: 0.8255 - val_loss: 0.4129 - val_accuracy: 0.8140 Epoch 33/100 63/63 [==============================] - 4s 57ms/step - loss: 0.3868 - accuracy: 0.8350 - val_loss: 0.4975 - val_accuracy: 0.7670 Epoch 34/100 63/63 [==============================] - 5s 70ms/step - loss: 0.3764 - accuracy: 0.8315 - val_loss: 0.4423 - val_accuracy: 0.8130 Epoch 35/100 63/63 [==============================] - 4s 56ms/step - loss: 0.3760 - accuracy: 0.8300 - val_loss: 0.4145 - val_accuracy: 0.8150 Epoch 36/100 63/63 [==============================] - 4s 56ms/step - loss: 0.3668 - accuracy: 0.8355 - val_loss: 0.5161 - val_accuracy: 0.7920 Epoch 37/100 63/63 [==============================] - 4s 68ms/step - loss: 0.3513 - accuracy: 0.8495 - val_loss: 0.4094 - val_accuracy: 0.8340 Epoch 38/100 63/63 [==============================] - 4s 56ms/step - loss: 0.3418 - accuracy: 0.8580 - val_loss: 0.5209 - val_accuracy: 0.8040 Epoch 39/100 63/63 [==============================] - 6s 89ms/step - loss: 0.3440 - accuracy: 0.8540 - val_loss: 0.4714 - val_accuracy: 0.8020 Epoch 40/100 63/63 [==============================] - 4s 59ms/step - loss: 0.3533 - accuracy: 0.8520 - val_loss: 0.4243 - val_accuracy: 0.8190 Epoch 41/100 63/63 [==============================] - 4s 58ms/step - loss: 0.3390 - accuracy: 0.8545 - val_loss: 0.4507 - val_accuracy: 0.7980 Epoch 42/100 63/63 [==============================] - 4s 59ms/step - loss: 0.3310 - accuracy: 0.8540 - val_loss: 0.4368 - val_accuracy: 0.8050 Epoch 43/100 63/63 [==============================] - 4s 56ms/step - loss: 0.3178 - accuracy: 0.8735 - val_loss: 0.4311 - val_accuracy: 0.8260 Epoch 44/100 63/63 [==============================] - 5s 80ms/step - loss: 0.3051 - accuracy: 0.8615 - val_loss: 0.5064 - val_accuracy: 0.8010 Epoch 45/100 63/63 [==============================] - 4s 55ms/step - loss: 0.3127 - accuracy: 0.8775 - val_loss: 0.4691 - val_accuracy: 0.8160 Epoch 46/100 63/63 [==============================] - 4s 60ms/step - loss: 0.2953 - accuracy: 0.8790 - val_loss: 0.4048 - val_accuracy: 0.8500 Epoch 47/100 63/63 [==============================] - 5s 77ms/step - loss: 0.3172 - accuracy: 0.8740 - val_loss: 0.4554 - val_accuracy: 0.8260 Epoch 48/100 63/63 [==============================] - 4s 57ms/step - loss: 0.2817 - accuracy: 0.8815 - val_loss: 0.5961 - val_accuracy: 0.7750 Epoch 49/100 63/63 [==============================] - 4s 60ms/step - loss: 0.2867 - accuracy: 0.8725 - val_loss: 0.4722 - val_accuracy: 0.8240 Epoch 50/100 63/63 [==============================] - 4s 58ms/step - loss: 0.2656 - accuracy: 0.8840 
- val_loss: 0.4357 - val_accuracy: 0.8310 Epoch 51/100 63/63 [==============================] - 4s 57ms/step - loss: 0.2636 - accuracy: 0.8910 - val_loss: 0.4494 - val_accuracy: 0.8380 Epoch 52/100 63/63 [==============================] - 5s 78ms/step - loss: 0.2481 - accuracy: 0.8945 - val_loss: 0.3950 - val_accuracy: 0.8530 Epoch 53/100 63/63 [==============================] - 4s 54ms/step - loss: 0.2599 - accuracy: 0.8935 - val_loss: 0.4964 - val_accuracy: 0.8280 Epoch 54/100 63/63 [==============================] - 4s 59ms/step - loss: 0.2599 - accuracy: 0.8925 - val_loss: 0.4742 - val_accuracy: 0.8260 Epoch 55/100 63/63 [==============================] - 5s 80ms/step - loss: 0.2389 - accuracy: 0.8980 - val_loss: 0.6732 - val_accuracy: 0.7560 Epoch 56/100 63/63 [==============================] - 4s 57ms/step - loss: 0.2637 - accuracy: 0.9050 - val_loss: 0.4595 - val_accuracy: 0.8400 Epoch 57/100 63/63 [==============================] - 4s 59ms/step - loss: 0.2466 - accuracy: 0.8985 - val_loss: 0.4427 - val_accuracy: 0.8180 Epoch 58/100 63/63 [==============================] - 4s 53ms/step - loss: 0.2529 - accuracy: 0.9040 - val_loss: 0.4680 - val_accuracy: 0.8200 Epoch 59/100 63/63 [==============================] - 4s 54ms/step - loss: 0.2352 - accuracy: 0.9005 - val_loss: 0.4595 - val_accuracy: 0.8350 Epoch 60/100 63/63 [==============================] - 5s 75ms/step - loss: 0.2472 - accuracy: 0.8980 - val_loss: 0.4352 - val_accuracy: 0.8450 Epoch 61/100 63/63 [==============================] - 4s 59ms/step - loss: 0.1959 - accuracy: 0.9180 - val_loss: 0.4158 - val_accuracy: 0.8390 Epoch 62/100 63/63 [==============================] - 4s 57ms/step - loss: 0.2260 - accuracy: 0.9070 - val_loss: 0.4310 - val_accuracy: 0.8530 Epoch 63/100 63/63 [==============================] - 5s 73ms/step - loss: 0.2230 - accuracy: 0.9095 - val_loss: 0.4558 - val_accuracy: 0.8530 Epoch 64/100 63/63 [==============================] - 4s 58ms/step - loss: 0.2263 - accuracy: 0.9040 - val_loss: 0.5447 - val_accuracy: 0.8120 Epoch 65/100 63/63 [==============================] - 4s 55ms/step - loss: 0.2125 - accuracy: 0.9140 - val_loss: 0.5205 - val_accuracy: 0.8360 Epoch 66/100 63/63 [==============================] - 5s 77ms/step - loss: 0.1901 - accuracy: 0.9180 - val_loss: 0.5928 - val_accuracy: 0.8100 Epoch 67/100 63/63 [==============================] - 4s 55ms/step - loss: 0.2070 - accuracy: 0.9130 - val_loss: 0.4628 - val_accuracy: 0.8410 Epoch 68/100 63/63 [==============================] - 4s 55ms/step - loss: 0.1903 - accuracy: 0.9235 - val_loss: 0.5531 - val_accuracy: 0.8330 Epoch 69/100 63/63 [==============================] - 6s 90ms/step - loss: 0.2024 - accuracy: 0.9270 - val_loss: 0.5903 - val_accuracy: 0.8340 Epoch 70/100 63/63 [==============================] - 4s 54ms/step - loss: 0.1881 - accuracy: 0.9225 - val_loss: 0.5197 - val_accuracy: 0.8490 Epoch 71/100 63/63 [==============================] - 4s 58ms/step - loss: 0.2226 - accuracy: 0.9195 - val_loss: 0.5125 - val_accuracy: 0.8310 Epoch 72/100 63/63 [==============================] - 5s 77ms/step - loss: 0.1865 - accuracy: 0.9255 - val_loss: 0.4902 - val_accuracy: 0.8400 Epoch 73/100 63/63 [==============================] - 4s 56ms/step - loss: 0.1840 - accuracy: 0.9275 - val_loss: 0.7038 - val_accuracy: 0.8130 Epoch 74/100 63/63 [==============================] - 4s 57ms/step - loss: 0.1792 - accuracy: 0.9310 - val_loss: 0.6351 - val_accuracy: 0.8240 Epoch 75/100 63/63 [==============================] - 5s 76ms/step - loss: 
0.2093 - accuracy: 0.9175 - val_loss: 0.5427 - val_accuracy: 0.8360 Epoch 76/100 63/63 [==============================] - 4s 57ms/step - loss: 0.1906 - accuracy: 0.9265 - val_loss: 0.4341 - val_accuracy: 0.8420 Epoch 77/100 63/63 [==============================] - 4s 54ms/step - loss: 0.1844 - accuracy: 0.9310 - val_loss: 0.4883 - val_accuracy: 0.8420 Epoch 78/100 63/63 [==============================] - 5s 78ms/step - loss: 0.1831 - accuracy: 0.9285 - val_loss: 0.5849 - val_accuracy: 0.8300 Epoch 79/100 63/63 [==============================] - 4s 59ms/step - loss: 0.1852 - accuracy: 0.9335 - val_loss: 0.5026 - val_accuracy: 0.8430 Epoch 80/100 63/63 [==============================] - 4s 57ms/step - loss: 0.1750 - accuracy: 0.9335 - val_loss: 0.5190 - val_accuracy: 0.8460 Epoch 81/100 63/63 [==============================] - 5s 76ms/step - loss: 0.1720 - accuracy: 0.9325 - val_loss: 0.4670 - val_accuracy: 0.8490 Epoch 82/100 63/63 [==============================] - 4s 59ms/step - loss: 0.1729 - accuracy: 0.9330 - val_loss: 0.5196 - val_accuracy: 0.8450 Epoch 83/100 63/63 [==============================] - 4s 55ms/step - loss: 0.1455 - accuracy: 0.9465 - val_loss: 0.5579 - val_accuracy: 0.8260 Epoch 84/100 63/63 [==============================] - 5s 74ms/step - loss: 0.1567 - accuracy: 0.9425 - val_loss: 0.5719 - val_accuracy: 0.8570 Epoch 85/100 63/63 [==============================] - 4s 59ms/step - loss: 0.1601 - accuracy: 0.9420 - val_loss: 1.5743 - val_accuracy: 0.7630 Epoch 86/100 63/63 [==============================] - 4s 57ms/step - loss: 0.2093 - accuracy: 0.9250 - val_loss: 0.5752 - val_accuracy: 0.8310 Epoch 87/100 63/63 [==============================] - 6s 85ms/step - loss: 0.1596 - accuracy: 0.9360 - val_loss: 0.6116 - val_accuracy: 0.8330 Epoch 88/100 63/63 [==============================] - 4s 55ms/step - loss: 0.1688 - accuracy: 0.9355 - val_loss: 0.5526 - val_accuracy: 0.8410 Epoch 89/100 63/63 [==============================] - 4s 55ms/step - loss: 0.1872 - accuracy: 0.9315 - val_loss: 0.5534 - val_accuracy: 0.8530 Epoch 90/100 63/63 [==============================] - 5s 76ms/step - loss: 0.1565 - accuracy: 0.9415 - val_loss: 0.6778 - val_accuracy: 0.8390 Epoch 91/100 63/63 [==============================] - 4s 57ms/step - loss: 0.1738 - accuracy: 0.9360 - val_loss: 0.4340 - val_accuracy: 0.8650 Epoch 92/100 63/63 [==============================] - 4s 56ms/step - loss: 0.1572 - accuracy: 0.9420 - val_loss: 0.6186 - val_accuracy: 0.8330 Epoch 93/100 63/63 [==============================] - 5s 81ms/step - loss: 0.1383 - accuracy: 0.9565 - val_loss: 0.6392 - val_accuracy: 0.8470 Epoch 94/100 63/63 [==============================] - 4s 56ms/step - loss: 0.1558 - accuracy: 0.9425 - val_loss: 0.5186 - val_accuracy: 0.8590 Epoch 95/100 63/63 [==============================] - 4s 60ms/step - loss: 0.1760 - accuracy: 0.9435 - val_loss: 0.4665 - val_accuracy: 0.8690 Epoch 96/100 63/63 [==============================] - 6s 89ms/step - loss: 0.1442 - accuracy: 0.9490 - val_loss: 0.5487 - val_accuracy: 0.8460 Epoch 97/100 63/63 [==============================] - 4s 57ms/step - loss: 0.1570 - accuracy: 0.9380 - val_loss: 0.5418 - val_accuracy: 0.8540 Epoch 98/100 63/63 [==============================] - 5s 78ms/step - loss: 0.1628 - accuracy: 0.9430 - val_loss: 0.8025 - val_accuracy: 0.8310 Epoch 99/100 63/63 [==============================] - 4s 59ms/step - loss: 0.1419 - accuracy: 0.9430 - val_loss: 0.7532 - val_accuracy: 0.8280 Epoch 100/100 63/63 [==============================] - 
4s 56ms/step - loss: 0.1373 - accuracy: 0.9515 - val_loss: 0.6122 - val_accuracy: 0.8590
Evaluating the model on the test set
test_model = keras.models.load_model(
"convnet_from_scratch_with_augmentation.h5")
test_loss, test_acc = test_model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc:.3f}")
63/63 [==============================] - 4s 47ms/step - loss: 0.4454 - accuracy: 0.8475
Test accuracy: 0.848
Instantiating the VGG16 convolutional base
conv_base = keras.applications.vgg16.VGG16(
weights="imagenet",
include_top=False,
input_shape=(180, 180, 3))
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58889256/58889256 [==============================] - 2s 0us/step
conv_base.summary()
Model: "vgg16"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 input_5 (InputLayer)            [(None, 180, 180, 3)]     0
 block1_conv1 (Conv2D)           (None, 180, 180, 64)      1792
 block1_conv2 (Conv2D)           (None, 180, 180, 64)      36928
 block1_pool (MaxPooling2D)      (None, 90, 90, 64)        0
 block2_conv1 (Conv2D)           (None, 90, 90, 128)       73856
 block2_conv2 (Conv2D)           (None, 90, 90, 128)       147584
 block2_pool (MaxPooling2D)      (None, 45, 45, 128)       0
 block3_conv1 (Conv2D)           (None, 45, 45, 256)       295168
 block3_conv2 (Conv2D)           (None, 45, 45, 256)       590080
 block3_conv3 (Conv2D)           (None, 45, 45, 256)       590080
 block3_pool (MaxPooling2D)      (None, 22, 22, 256)       0
 block4_conv1 (Conv2D)           (None, 22, 22, 512)       1180160
 block4_conv2 (Conv2D)           (None, 22, 22, 512)       2359808
 block4_conv3 (Conv2D)           (None, 22, 22, 512)       2359808
 block4_pool (MaxPooling2D)      (None, 11, 11, 512)       0
 block5_conv1 (Conv2D)           (None, 11, 11, 512)       2359808
 block5_conv2 (Conv2D)           (None, 11, 11, 512)       2359808
 block5_conv3 (Conv2D)           (None, 11, 11, 512)       2359808
 block5_pool (MaxPooling2D)      (None, 5, 5, 512)         0
=================================================================
Total params: 14714688 (56.13 MB)
Trainable params: 14714688 (56.13 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Extracting the VGG16 features and corresponding labels
import numpy as np
def get_features_and_labels(dataset):
    all_features = []
    all_labels = []
    for images, labels in dataset:
        preprocessed_images = keras.applications.vgg16.preprocess_input(images)
        features = conv_base.predict(preprocessed_images)
        all_features.append(features)
        all_labels.append(labels)
    return np.concatenate(all_features), np.concatenate(all_labels)
train_features, train_labels = get_features_and_labels(train_dataset)
val_features, val_labels = get_features_and_labels(validation_dataset)
test_features, test_labels = get_features_and_labels(test_dataset)
1/1 [==============================] - 0s 25ms/step
(the progress line above is repeated once per batch while features are extracted from the train, validation, and test datasets)
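Each call to predict above prints one progress line per batch; passing verbose=0 silences this. A quiet variant of the helper (hypothetical name, not part of the original notebook):
def get_features_and_labels_quiet(dataset):
    # Same as get_features_and_labels above, but verbose=0 suppresses the progress bars.
    all_features, all_labels = [], []
    for images, labels in dataset:
        preprocessed_images = keras.applications.vgg16.preprocess_input(images)
        features = conv_base.predict(preprocessed_images, verbose=0)
        all_features.append(features)
        all_labels.append(labels)
    return np.concatenate(all_features), np.concatenate(all_labels)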
train_features.shape
(2000, 5, 5, 512)
Defining and training the densely connected classifier
inputs = keras.Input(shape=(5, 5, 512))
x = layers.Flatten()(inputs)
x = layers.Dense(256)(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.compile(loss="binary_crossentropy",
optimizer="rmsprop",
metrics=["accuracy"])
callbacks = [
keras.callbacks.ModelCheckpoint(
filepath="feature_extraction.h5",
save_best_only=True,
monitor="val_loss")
]
history = model.fit(
train_features, train_labels,
epochs=20,
validation_data=(val_features, val_labels),
callbacks=callbacks)
Epoch 1/20 63/63 [==============================] - 1s 10ms/step - loss: 16.0650 - accuracy: 0.9210 - val_loss: 4.1520 - val_accuracy: 0.9690 Epoch 2/20 63/63 [==============================] - 1s 9ms/step - loss: 3.2429 - accuracy: 0.9770 - val_loss: 3.2732 - val_accuracy: 0.9740 Epoch 3/20 63/63 [==============================] - 1s 9ms/step - loss: 1.2996 - accuracy: 0.9870 - val_loss: 4.1026 - val_accuracy: 0.9740 Epoch 4/20 63/63 [==============================] - 1s 8ms/step - loss: 1.3476 - accuracy: 0.9885 - val_loss: 8.0825 - val_accuracy: 0.9630 Epoch 5/20 63/63 [==============================] - 1s 9ms/step - loss: 1.5037 - accuracy: 0.9905 - val_loss: 4.2695 - val_accuracy: 0.9740 Epoch 6/20 63/63 [==============================] - 1s 9ms/step - loss: 0.4634 - accuracy: 0.9945 - val_loss: 4.3755 - val_accuracy: 0.9730 Epoch 7/20 63/63 [==============================] - 0s 7ms/step - loss: 1.0309 - accuracy: 0.9950 - val_loss: 4.0909 - val_accuracy: 0.9750 Epoch 8/20 63/63 [==============================] - 0s 7ms/step - loss: 0.5972 - accuracy: 0.9940 - val_loss: 4.6923 - val_accuracy: 0.9800 Epoch 9/20 63/63 [==============================] - 0s 7ms/step - loss: 0.4753 - accuracy: 0.9955 - val_loss: 5.0274 - val_accuracy: 0.9780 Epoch 10/20 63/63 [==============================] - 0s 7ms/step - loss: 0.3256 - accuracy: 0.9955 - val_loss: 6.0209 - val_accuracy: 0.9710 Epoch 11/20 63/63 [==============================] - 0s 7ms/step - loss: 0.1812 - accuracy: 0.9985 - val_loss: 4.9708 - val_accuracy: 0.9800 Epoch 12/20 63/63 [==============================] - 0s 6ms/step - loss: 0.3422 - accuracy: 0.9980 - val_loss: 5.2527 - val_accuracy: 0.9770 Epoch 13/20 63/63 [==============================] - 0s 6ms/step - loss: 2.1319e-20 - accuracy: 1.0000 - val_loss: 5.2527 - val_accuracy: 0.9770 Epoch 14/20 63/63 [==============================] - 0s 6ms/step - loss: 0.3310 - accuracy: 0.9980 - val_loss: 5.1895 - val_accuracy: 0.9770 Epoch 15/20 63/63 [==============================] - 0s 6ms/step - loss: 0.1589 - accuracy: 0.9985 - val_loss: 5.5420 - val_accuracy: 0.9780 Epoch 16/20 63/63 [==============================] - 0s 6ms/step - loss: 0.1981 - accuracy: 0.9980 - val_loss: 4.8748 - val_accuracy: 0.9770 Epoch 17/20 63/63 [==============================] - 0s 6ms/step - loss: 0.0514 - accuracy: 0.9990 - val_loss: 4.8347 - val_accuracy: 0.9760 Epoch 18/20 63/63 [==============================] - 0s 6ms/step - loss: 0.0069 - accuracy: 0.9995 - val_loss: 5.8248 - val_accuracy: 0.9760 Epoch 19/20 63/63 [==============================] - 0s 7ms/step - loss: 0.4186 - accuracy: 0.9980 - val_loss: 5.8325 - val_accuracy: 0.9770 Epoch 20/20 63/63 [==============================] - 0s 6ms/step - loss: 0.0965 - accuracy: 0.9990 - val_loss: 6.3267 - val_accuracy: 0.9770
Plotting the results
import matplotlib.pyplot as plt
acc = history.history["accuracy"]
val_acc = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, "bo", label="Training accuracy")
plt.plot(epochs, val_acc, "b", label="Validation accuracy")
plt.title("Training and validation accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, loss, "bo", label="Training loss")
plt.plot(epochs, val_loss, "b", label="Validation loss")
plt.title("Training and validation loss")
plt.legend()
plt.show()
test_model = keras.models.load_model(
"feature_extraction.h5")
test_loss, test_acc = test_model.evaluate(test_features, test_labels)
print(f"Test accuracy: {test_acc:.3f}")
63/63 [==============================] - 0s 3ms/step - loss: 4.7459 - accuracy: 0.9690
Test accuracy: 0.969
Instantiating and freezing the VGG16 convolutional base
conv_base = keras.applications.vgg16.VGG16(
weights="imagenet",
include_top=False)
conv_base.trainable = False
Printing the number of trainable weights before and after freezing
conv_base.trainable = True
print("Number of trainable weights before freezing the conv base:",
      len(conv_base.trainable_weights))
Number of trainable weights before freezing the conv base: 26
conv_base.trainable = False
print("Number of trainable weights after freezing the conv base:",
      len(conv_base.trainable_weights))
Number of trainable weights after freezing the conv base: 0
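The count of 26 comes from the 13 Conv2D layers of VGG16, each holding a kernel tensor and a bias tensor; a quick check (a sketch, assuming conv_base as created above):
# 13 Conv2D layers x (kernel + bias) = 26 weight tensors.
conv_layers = [layer for layer in conv_base.layers if isinstance(layer, layers.Conv2D)]
print(len(conv_layers))                                   # 13
print(sum(len(layer.weights) for layer in conv_layers))   # 26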
Adding a data augmentation stage and a dense classifier to the convolutional base
data_augmentation = keras.Sequential(
[
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.1),
layers.RandomZoom(0.2),
]
)
inputs = keras.Input(shape=(180, 180, 3))
x = data_augmentation(inputs)
x = keras.applications.vgg16.preprocess_input(x)
x = conv_base(x)
x = layers.Flatten()(x)
x = layers.Dense(256)(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.compile(loss="binary_crossentropy",
optimizer="rmsprop",
metrics=["accuracy"])
callbacks = [
keras.callbacks.ModelCheckpoint(
filepath="feature_extraction_with_data_augmentation.h5",
save_best_only=True,
monitor="val_loss")
]
history = model.fit(
train_dataset,
epochs=50,
validation_data=validation_dataset,
callbacks=callbacks)
Epoch 1/50 63/63 [==============================] - 7s 86ms/step - loss: 27.5396 - accuracy: 0.8825 - val_loss: 3.3493 - val_accuracy: 0.9760 Epoch 2/50 63/63 [==============================] - 6s 84ms/step - loss: 6.0381 - accuracy: 0.9465 - val_loss: 6.5161 - val_accuracy: 0.9650 Epoch 3/50 63/63 [==============================] - 7s 109ms/step - loss: 5.2937 - accuracy: 0.9555 - val_loss: 6.4148 - val_accuracy: 0.9680 Epoch 4/50 63/63 [==============================] - 5s 67ms/step - loss: 4.6671 - accuracy: 0.9620 - val_loss: 3.2696 - val_accuracy: 0.9730 Epoch 5/50 63/63 [==============================] - 5s 80ms/step - loss: 5.0869 - accuracy: 0.9570 - val_loss: 5.6903 - val_accuracy: 0.9710 Epoch 6/50 63/63 [==============================] - 5s 68ms/step - loss: 4.7102 - accuracy: 0.9630 - val_loss: 6.2713 - val_accuracy: 0.9680 Epoch 7/50 63/63 [==============================] - 6s 90ms/step - loss: 3.1661 - accuracy: 0.9695 - val_loss: 4.1725 - val_accuracy: 0.9740 Epoch 8/50 63/63 [==============================] - 4s 63ms/step - loss: 3.6655 - accuracy: 0.9730 - val_loss: 4.5015 - val_accuracy: 0.9740 Epoch 9/50 63/63 [==============================] - 4s 63ms/step - loss: 3.1351 - accuracy: 0.9735 - val_loss: 3.9157 - val_accuracy: 0.9790 Epoch 10/50 63/63 [==============================] - 5s 80ms/step - loss: 2.9035 - accuracy: 0.9755 - val_loss: 4.3873 - val_accuracy: 0.9760 Epoch 11/50 63/63 [==============================] - 4s 64ms/step - loss: 3.0732 - accuracy: 0.9730 - val_loss: 3.3032 - val_accuracy: 0.9830 Epoch 12/50 63/63 [==============================] - 5s 73ms/step - loss: 2.2468 - accuracy: 0.9755 - val_loss: 3.6286 - val_accuracy: 0.9780 Epoch 13/50 63/63 [==============================] - 5s 67ms/step - loss: 2.1672 - accuracy: 0.9800 - val_loss: 5.4965 - val_accuracy: 0.9760 Epoch 14/50 63/63 [==============================] - 4s 65ms/step - loss: 1.9934 - accuracy: 0.9805 - val_loss: 3.3659 - val_accuracy: 0.9750 Epoch 15/50 63/63 [==============================] - 6s 90ms/step - loss: 1.4587 - accuracy: 0.9800 - val_loss: 4.2784 - val_accuracy: 0.9760 Epoch 16/50 63/63 [==============================] - 4s 63ms/step - loss: 1.5343 - accuracy: 0.9805 - val_loss: 5.7522 - val_accuracy: 0.9710 Epoch 17/50 63/63 [==============================] - 4s 62ms/step - loss: 2.3660 - accuracy: 0.9810 - val_loss: 4.6266 - val_accuracy: 0.9710 Epoch 18/50 63/63 [==============================] - 6s 87ms/step - loss: 1.3076 - accuracy: 0.9820 - val_loss: 2.4205 - val_accuracy: 0.9820 Epoch 19/50 63/63 [==============================] - 4s 62ms/step - loss: 1.6061 - accuracy: 0.9810 - val_loss: 2.8045 - val_accuracy: 0.9760 Epoch 20/50 63/63 [==============================] - 5s 82ms/step - loss: 0.9476 - accuracy: 0.9870 - val_loss: 2.6325 - val_accuracy: 0.9820 Epoch 21/50 63/63 [==============================] - 5s 66ms/step - loss: 0.7263 - accuracy: 0.9870 - val_loss: 2.2694 - val_accuracy: 0.9810 Epoch 22/50 63/63 [==============================] - 4s 63ms/step - loss: 1.1414 - accuracy: 0.9860 - val_loss: 2.6363 - val_accuracy: 0.9730 Epoch 23/50 63/63 [==============================] - 6s 85ms/step - loss: 1.2269 - accuracy: 0.9815 - val_loss: 3.1028 - val_accuracy: 0.9770 Epoch 24/50 63/63 [==============================] - 5s 68ms/step - loss: 1.2315 - accuracy: 0.9825 - val_loss: 1.9853 - val_accuracy: 0.9810 Epoch 25/50 63/63 [==============================] - 4s 63ms/step - loss: 0.7361 - accuracy: 0.9865 - val_loss: 2.4947 - val_accuracy: 0.9810 Epoch 26/50 
63/63 [==============================] - 5s 77ms/step - loss: 1.0883 - accuracy: 0.9855 - val_loss: 5.1640 - val_accuracy: 0.9640 Epoch 27/50 63/63 [==============================] - 4s 64ms/step - loss: 1.1818 - accuracy: 0.9850 - val_loss: 2.2265 - val_accuracy: 0.9830 Epoch 28/50 63/63 [==============================] - 5s 81ms/step - loss: 1.5403 - accuracy: 0.9830 - val_loss: 2.6528 - val_accuracy: 0.9820 Epoch 29/50 63/63 [==============================] - 4s 66ms/step - loss: 0.9042 - accuracy: 0.9840 - val_loss: 2.6863 - val_accuracy: 0.9780 Epoch 30/50 63/63 [==============================] - 4s 64ms/step - loss: 0.8185 - accuracy: 0.9855 - val_loss: 2.9739 - val_accuracy: 0.9760 Epoch 31/50 63/63 [==============================] - 6s 84ms/step - loss: 0.5321 - accuracy: 0.9895 - val_loss: 2.9481 - val_accuracy: 0.9810 Epoch 32/50 63/63 [==============================] - 4s 65ms/step - loss: 1.2687 - accuracy: 0.9815 - val_loss: 2.1398 - val_accuracy: 0.9830 Epoch 33/50 63/63 [==============================] - 5s 83ms/step - loss: 0.5632 - accuracy: 0.9895 - val_loss: 2.3096 - val_accuracy: 0.9810 Epoch 34/50 63/63 [==============================] - 4s 62ms/step - loss: 0.8468 - accuracy: 0.9890 - val_loss: 2.2996 - val_accuracy: 0.9810 Epoch 35/50 63/63 [==============================] - 4s 63ms/step - loss: 0.6550 - accuracy: 0.9900 - val_loss: 2.4656 - val_accuracy: 0.9770 Epoch 36/50 63/63 [==============================] - 6s 86ms/step - loss: 1.0803 - accuracy: 0.9835 - val_loss: 2.3595 - val_accuracy: 0.9810 Epoch 37/50 63/63 [==============================] - 5s 83ms/step - loss: 0.8687 - accuracy: 0.9845 - val_loss: 2.3860 - val_accuracy: 0.9810 Epoch 38/50 63/63 [==============================] - 5s 83ms/step - loss: 0.6165 - accuracy: 0.9875 - val_loss: 2.4853 - val_accuracy: 0.9770 Epoch 39/50 63/63 [==============================] - 4s 63ms/step - loss: 0.4877 - accuracy: 0.9905 - val_loss: 2.0442 - val_accuracy: 0.9800 Epoch 40/50 63/63 [==============================] - 5s 83ms/step - loss: 0.6258 - accuracy: 0.9885 - val_loss: 2.0928 - val_accuracy: 0.9810 Epoch 41/50 63/63 [==============================] - 4s 63ms/step - loss: 0.7813 - accuracy: 0.9845 - val_loss: 3.0703 - val_accuracy: 0.9710 Epoch 42/50 63/63 [==============================] - 5s 66ms/step - loss: 0.6428 - accuracy: 0.9880 - val_loss: 2.2408 - val_accuracy: 0.9740 Epoch 43/50 63/63 [==============================] - 4s 65ms/step - loss: 0.8418 - accuracy: 0.9880 - val_loss: 2.0657 - val_accuracy: 0.9780 Epoch 44/50 63/63 [==============================] - 6s 85ms/step - loss: 0.5136 - accuracy: 0.9900 - val_loss: 1.8341 - val_accuracy: 0.9800 Epoch 45/50 63/63 [==============================] - 4s 64ms/step - loss: 0.4828 - accuracy: 0.9890 - val_loss: 2.4873 - val_accuracy: 0.9780 Epoch 46/50 63/63 [==============================] - 4s 66ms/step - loss: 0.2494 - accuracy: 0.9930 - val_loss: 1.6702 - val_accuracy: 0.9780 Epoch 47/50 63/63 [==============================] - 5s 80ms/step - loss: 0.3937 - accuracy: 0.9905 - val_loss: 2.3460 - val_accuracy: 0.9770 Epoch 48/50 63/63 [==============================] - 4s 62ms/step - loss: 0.8661 - accuracy: 0.9860 - val_loss: 2.6933 - val_accuracy: 0.9780 Epoch 49/50 63/63 [==============================] - 4s 63ms/step - loss: 0.4490 - accuracy: 0.9900 - val_loss: 2.2663 - val_accuracy: 0.9770 Epoch 50/50 63/63 [==============================] - 5s 82ms/step - loss: 0.5585 - accuracy: 0.9880 - val_loss: 1.7303 - val_accuracy: 0.9790
import matplotlib.pyplot as plt
acc = history.history["accuracy"]
val_acc = history.history["val_accuracy"]
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, "bo", label="Training accuracy")
plt.plot(epochs, val_acc, "b", label="Validation accuracy")
plt.title("Training and validation accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, loss, "bo", label="Training loss")
plt.plot(epochs, val_loss, "b", label="Validation loss")
plt.title("Training and validation loss")
plt.legend()
plt.show()
Evaluating the model on the test set
test_model = keras.models.load_model(
"feature_extraction_with_data_augmentation.h5")
test_loss, test_acc = test_model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc:.3f}")
63/63 [==============================] - 3s 43ms/step - loss: 2.7124 - accuracy: 0.9750
Test accuracy: 0.975
conv_base.summary()
Model: "vgg16"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 input_8 (InputLayer)            [(None, None, None, 3)]   0
 block1_conv1 (Conv2D)           (None, None, None, 64)    1792
 block1_conv2 (Conv2D)           (None, None, None, 64)    36928
 block1_pool (MaxPooling2D)      (None, None, None, 64)    0
 block2_conv1 (Conv2D)           (None, None, None, 128)   73856
 block2_conv2 (Conv2D)           (None, None, None, 128)   147584
 block2_pool (MaxPooling2D)      (None, None, None, 128)   0
 block3_conv1 (Conv2D)           (None, None, None, 256)   295168
 block3_conv2 (Conv2D)           (None, None, None, 256)   590080
 block3_conv3 (Conv2D)           (None, None, None, 256)   590080
 block3_pool (MaxPooling2D)      (None, None, None, 256)   0
 block4_conv1 (Conv2D)           (None, None, None, 512)   1180160
 block4_conv2 (Conv2D)           (None, None, None, 512)   2359808
 block4_conv3 (Conv2D)           (None, None, None, 512)   2359808
 block4_pool (MaxPooling2D)      (None, None, None, 512)   0
 block5_conv1 (Conv2D)           (None, None, None, 512)   2359808
 block5_conv2 (Conv2D)           (None, None, None, 512)   2359808
 block5_conv3 (Conv2D)           (None, None, None, 512)   2359808
 block5_pool (MaxPooling2D)      (None, None, None, 512)   0
=================================================================
Total params: 14714688 (56.13 MB)
Trainable params: 0 (0.00 Byte)
Non-trainable params: 14714688 (56.13 MB)
_________________________________________________________________
Freezing all layers until the fourth from the last
conv_base.trainable = True
for layer in conv_base.layers[:-4]:
    layer.trainable = False
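Only the last four layers remain trainable: the three block5 convolutions and block5_pool (which has no weights). A quick optional check:
# List the layers left unfrozen by the loop above.
for layer in conv_base.layers:
    if layer.trainable:
        print(layer.name)
# Expected: block5_conv1, block5_conv2, block5_conv3, block5_pool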
Fine-tuning the model
model.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.RMSprop(learning_rate=1e-5),
metrics=["accuracy"])
callbacks = [
keras.callbacks.ModelCheckpoint(
filepath="fine_tuning.h5",
save_best_only=True,
monitor="val_loss")
]
history = model.fit(
train_dataset,
epochs=30,
validation_data=validation_dataset,
callbacks=callbacks)
Epoch 1/30 63/63 [==============================] - 7s 72ms/step - loss: 0.6374 - accuracy: 0.9865 - val_loss: 1.4696 - val_accuracy: 0.9780 Epoch 2/30 63/63 [==============================] - 5s 75ms/step - loss: 0.4983 - accuracy: 0.9885 - val_loss: 1.3302 - val_accuracy: 0.9820 Epoch 3/30 63/63 [==============================] - 5s 82ms/step - loss: 0.2552 - accuracy: 0.9910 - val_loss: 1.4036 - val_accuracy: 0.9840 Epoch 4/30 63/63 [==============================] - 4s 65ms/step - loss: 0.4464 - accuracy: 0.9915 - val_loss: 1.9670 - val_accuracy: 0.9760 Epoch 5/30 63/63 [==============================] - 4s 65ms/step - loss: 0.3372 - accuracy: 0.9915 - val_loss: 1.6539 - val_accuracy: 0.9800 Epoch 6/30 63/63 [==============================] - 6s 84ms/step - loss: 0.2833 - accuracy: 0.9900 - val_loss: 1.4424 - val_accuracy: 0.9840 Epoch 7/30 63/63 [==============================] - 4s 64ms/step - loss: 0.4230 - accuracy: 0.9905 - val_loss: 1.6918 - val_accuracy: 0.9830 Epoch 8/30 63/63 [==============================] - 5s 74ms/step - loss: 0.2311 - accuracy: 0.9940 - val_loss: 1.1869 - val_accuracy: 0.9840 Epoch 9/30 63/63 [==============================] - 5s 71ms/step - loss: 0.2980 - accuracy: 0.9910 - val_loss: 1.6423 - val_accuracy: 0.9810 Epoch 10/30 63/63 [==============================] - 4s 64ms/step - loss: 0.1508 - accuracy: 0.9950 - val_loss: 1.3114 - val_accuracy: 0.9870 Epoch 11/30 63/63 [==============================] - 5s 80ms/step - loss: 0.1765 - accuracy: 0.9955 - val_loss: 1.3686 - val_accuracy: 0.9850 Epoch 12/30 63/63 [==============================] - 4s 64ms/step - loss: 0.1291 - accuracy: 0.9930 - val_loss: 1.4521 - val_accuracy: 0.9820 Epoch 13/30 63/63 [==============================] - 4s 65ms/step - loss: 0.2314 - accuracy: 0.9930 - val_loss: 1.4160 - val_accuracy: 0.9870 Epoch 14/30 63/63 [==============================] - 5s 82ms/step - loss: 0.0992 - accuracy: 0.9935 - val_loss: 1.5309 - val_accuracy: 0.9830 Epoch 15/30 63/63 [==============================] - 4s 64ms/step - loss: 0.1552 - accuracy: 0.9955 - val_loss: 1.7732 - val_accuracy: 0.9770 Epoch 16/30 63/63 [==============================] - 4s 65ms/step - loss: 0.2415 - accuracy: 0.9945 - val_loss: 1.2300 - val_accuracy: 0.9850 Epoch 17/30 63/63 [==============================] - 5s 74ms/step - loss: 0.0865 - accuracy: 0.9960 - val_loss: 1.6470 - val_accuracy: 0.9790 Epoch 18/30 63/63 [==============================] - 4s 65ms/step - loss: 0.0256 - accuracy: 0.9970 - val_loss: 1.3077 - val_accuracy: 0.9810 Epoch 19/30 63/63 [==============================] - 6s 85ms/step - loss: 0.0404 - accuracy: 0.9975 - val_loss: 1.5982 - val_accuracy: 0.9810 Epoch 20/30 63/63 [==============================] - 4s 66ms/step - loss: 0.1358 - accuracy: 0.9955 - val_loss: 1.2171 - val_accuracy: 0.9830 Epoch 21/30 63/63 [==============================] - 4s 64ms/step - loss: 0.0520 - accuracy: 0.9980 - val_loss: 1.4802 - val_accuracy: 0.9800 Epoch 22/30 63/63 [==============================] - 5s 81ms/step - loss: 0.2755 - accuracy: 0.9965 - val_loss: 1.7662 - val_accuracy: 0.9790 Epoch 23/30 63/63 [==============================] - 5s 76ms/step - loss: 0.0659 - accuracy: 0.9965 - val_loss: 1.5368 - val_accuracy: 0.9820 Epoch 24/30 63/63 [==============================] - 4s 64ms/step - loss: 0.1401 - accuracy: 0.9960 - val_loss: 1.6876 - val_accuracy: 0.9800 Epoch 25/30 63/63 [==============================] - 4s 65ms/step - loss: 0.1207 - accuracy: 0.9940 - val_loss: 1.8289 - val_accuracy: 0.9800 Epoch 26/30 
63/63 [==============================] - 5s 83ms/step - loss: 0.0641 - accuracy: 0.9970 - val_loss: 1.8116 - val_accuracy: 0.9770 Epoch 27/30 63/63 [==============================] - 5s 83ms/step - loss: 0.2243 - accuracy: 0.9950 - val_loss: 1.7115 - val_accuracy: 0.9790 Epoch 28/30 63/63 [==============================] - 4s 64ms/step - loss: 0.0751 - accuracy: 0.9985 - val_loss: 1.5558 - val_accuracy: 0.9780 Epoch 29/30 63/63 [==============================] - 6s 84ms/step - loss: 0.0570 - accuracy: 0.9970 - val_loss: 1.5470 - val_accuracy: 0.9770 Epoch 30/30 63/63 [==============================] - 4s 63ms/step - loss: 0.1392 - accuracy: 0.9950 - val_loss: 1.4864 - val_accuracy: 0.9830
model = keras.models.load_model("fine_tuning.h5")
test_loss, test_acc = model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc:.3f}")
63/63 [==============================] - 3s 37ms/step - loss: 2.0847 - accuracy: 0.9735
Test accuracy: 0.974