In the name of God

An Introduction to Neural Networks and the Keras Framework

Loading the required libraries

Running this notebook requires the Keras library. For installation instructions, see the link below.

http://blog.class.vision/1396/12/installing-keras-with-tensorflow-backend/

If all the required libraries are installed, the cell below should run without any problems.
In [1]:
import keras
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

from dataset import load_hoda
Using TensorFlow backend.
So that you can reproduce exactly the results shown in class when running the code:
In [2]:
np.random.seed(123)  # for reproducibility

Loading the dataset

In [3]:
x_train_original, y_train_original, x_test_original, y_test_original = load_hoda()

Preprocessing the data for Keras

Convert x_train and x_test to NumPy arrays (ndarray), and convert y_train and y_test to one-hot encoding.
First, we define a simple function that prints the dimensions, data types, and basic information of the loaded dataset.
We will print this information both before and after preprocessing so that the changes are easy to see!
In [4]:
def print_data_info(x_train, y_train, x_test, y_test):
    #Check data Type
    print ("\ttype(x_train): {}".format(type(x_train)))
    print ("\ttype(y_train): {}".format(type(y_train)))

    #check data Shape
    print ("\tx_train.shape: {}".format(np.shape(x_train)))
    print ("\ty_train.shape: {}".format(np.shape(y_train)))
    print ("\tx_test.shape: {}".format(np.shape(x_test)))
    print ("\ty_test.shape: {}".format(np.shape(y_test)))

    #sample data
    print ("\ty_train[0]: {}".format(y_train[0]))
In [5]:
# Preprocess input data for Keras. 
x_train = np.array(x_train_original)
y_train = keras.utils.to_categorical(y_train_original, num_classes=10)
x_test = np.array(x_test_original)
y_test = keras.utils.to_categorical(y_test_original, num_classes=10)
In [6]:
print("Before Preprocessing:")
print_data_info(x_train_original, y_train_original, x_test_original, y_test_original)
print("After Preprocessing:")
print_data_info(x_train, y_train, x_test, y_test)
Before Preprocessing:
	type(x_train): <class 'list'>
	type(y_train): <class 'numpy.ndarray'>
	x_train.shape: (1000, 25)
	y_train.shape: (1000,)
	x_test.shape: (200, 25)
	y_test.shape: (200,)
	y_train[0]: 6
After Preprocessing:
	type(x_train): <class 'numpy.ndarray'>
	type(y_train): <class 'numpy.ndarray'>
	x_train.shape: (1000, 25)
	y_train.shape: (1000, 10)
	x_test.shape: (200, 25)
	y_test.shape: (200, 10)
	y_train[0]: [0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
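The one-hot vector shown for `y_train[0]` is exactly what `keras.utils.to_categorical` produces; the same transform can be sketched in plain NumPy (an illustration, not Keras's actual implementation):

```python
import numpy as np

def one_hot(labels, num_classes):
    # One row per label; put a 1 in the column matching each label.
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot([6], 10))  # → [[0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]]
```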
The last preprocessing step is converting the data to **float32** and normalizing the values to the range between 0 and 1.
In [7]:
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
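Dividing by 255 maps the original 0–255 pixel values into [0, 1]; a quick sanity check on a toy array:

```python
import numpy as np

x = np.array([0, 128, 255], dtype='float32')
x /= 255  # scale pixel values into [0, 1]
print(x.min(), x.max())  # → 0.0 1.0
```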

Defining the model architecture (model architecture)

In [12]:
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=25))
model.add(Dense(10, activation='softmax'))
In [13]:
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_7 (Dense)              (None, 64)                1664      
_________________________________________________________________
dense_8 (Dense)              (None, 10)                650       
=================================================================
Total params: 2,314
Trainable params: 2,314
Non-trainable params: 0
_________________________________________________________________
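The parameter counts in the summary follow directly from the layer shapes: a Dense layer has one weight per input–unit pair plus one bias per unit:

```python
# Dense parameters = inputs * units + units (one bias per unit)
first = 25 * 64 + 64    # 25 input features -> 64 units
second = 64 * 10 + 10   # 64 units -> 10 output classes
print(first, second, first + second)  # → 1664 650 2314
```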

Compile model

In [14]:
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

Training the model on the training data

In [15]:
model.fit(x_train, y_train,
          epochs=30,
          batch_size=64, validation_split=0.2)
Train on 800 samples, validate on 200 samples
Epoch 1/30
800/800 [==============================] - 5s 6ms/step - loss: 2.1480 - acc: 0.2213 - val_loss: 2.0126 - val_acc: 0.3600
Epoch 2/30
800/800 [==============================] - 0s 92us/step - loss: 1.9575 - acc: 0.3725 - val_loss: 1.8539 - val_acc: 0.5150
Epoch 3/30
800/800 [==============================] - 0s 87us/step - loss: 1.7997 - acc: 0.4987 - val_loss: 1.7062 - val_acc: 0.6100
Epoch 4/30
800/800 [==============================] - 0s 97us/step - loss: 1.6491 - acc: 0.6013 - val_loss: 1.5655 - val_acc: 0.6400
Epoch 5/30
800/800 [==============================] - 0s 91us/step - loss: 1.5039 - acc: 0.6512 - val_loss: 1.4288 - val_acc: 0.6800
Epoch 6/30
800/800 [==============================] - 0s 77us/step - loss: 1.3661 - acc: 0.6938 - val_loss: 1.3076 - val_acc: 0.7200
Epoch 7/30
800/800 [==============================] - 0s 80us/step - loss: 1.2389 - acc: 0.7563 - val_loss: 1.1904 - val_acc: 0.7200
Epoch 8/30
800/800 [==============================] - 0s 90us/step - loss: 1.1225 - acc: 0.7850 - val_loss: 1.0848 - val_acc: 0.7600
Epoch 9/30
800/800 [==============================] - 0s 86us/step - loss: 1.0188 - acc: 0.7913 - val_loss: 0.9926 - val_acc: 0.7950
Epoch 10/30
800/800 [==============================] - 0s 76us/step - loss: 0.9268 - acc: 0.8113 - val_loss: 0.9109 - val_acc: 0.8100
Epoch 11/30
800/800 [==============================] - 0s 72us/step - loss: 0.8466 - acc: 0.8263 - val_loss: 0.8405 - val_acc: 0.8200
Epoch 12/30
800/800 [==============================] - 0s 86us/step - loss: 0.7775 - acc: 0.8300 - val_loss: 0.7799 - val_acc: 0.8150
Epoch 13/30
800/800 [==============================] - 0s 99us/step - loss: 0.7177 - acc: 0.8463 - val_loss: 0.7282 - val_acc: 0.8150
Epoch 14/30
800/800 [==============================] - 0s 106us/step - loss: 0.6662 - acc: 0.8525 - val_loss: 0.6833 - val_acc: 0.8150
Epoch 15/30
800/800 [==============================] - 0s 109us/step - loss: 0.6243 - acc: 0.8550 - val_loss: 0.6532 - val_acc: 0.8300
Epoch 16/30
800/800 [==============================] - 0s 105us/step - loss: 0.5884 - acc: 0.8638 - val_loss: 0.6199 - val_acc: 0.8350
Epoch 17/30
800/800 [==============================] - 0s 104us/step - loss: 0.5549 - acc: 0.8700 - val_loss: 0.5930 - val_acc: 0.8350
Epoch 18/30
800/800 [==============================] - 0s 109us/step - loss: 0.5263 - acc: 0.8712 - val_loss: 0.5654 - val_acc: 0.8400
Epoch 19/30
800/800 [==============================] - 0s 104us/step - loss: 0.4993 - acc: 0.8738 - val_loss: 0.5436 - val_acc: 0.8350
Epoch 20/30
800/800 [==============================] - 0s 115us/step - loss: 0.4757 - acc: 0.8800 - val_loss: 0.5245 - val_acc: 0.8400
Epoch 21/30
800/800 [==============================] - 0s 102us/step - loss: 0.4557 - acc: 0.8750 - val_loss: 0.5110 - val_acc: 0.8450
Epoch 22/30
800/800 [==============================] - 0s 102us/step - loss: 0.4376 - acc: 0.8837 - val_loss: 0.4938 - val_acc: 0.8450
Epoch 23/30
800/800 [==============================] - 0s 107us/step - loss: 0.4199 - acc: 0.8862 - val_loss: 0.4797 - val_acc: 0.8600
Epoch 24/30
800/800 [==============================] - 0s 114us/step - loss: 0.4053 - acc: 0.8912 - val_loss: 0.4668 - val_acc: 0.8650
Epoch 25/30
800/800 [==============================] - 0s 80us/step - loss: 0.3912 - acc: 0.8888 - val_loss: 0.4570 - val_acc: 0.8750
Epoch 26/30
800/800 [==============================] - 0s 81us/step - loss: 0.3807 - acc: 0.8900 - val_loss: 0.4476 - val_acc: 0.8750
Epoch 27/30
800/800 [==============================] - 0s 80us/step - loss: 0.3672 - acc: 0.8975 - val_loss: 0.4401 - val_acc: 0.8750
Epoch 28/30
800/800 [==============================] - 0s 86us/step - loss: 0.3564 - acc: 0.9000 - val_loss: 0.4295 - val_acc: 0.8700
Epoch 29/30
800/800 [==============================] - 0s 86us/step - loss: 0.3468 - acc: 0.8975 - val_loss: 0.4284 - val_acc: 0.8700
Epoch 30/30
800/800 [==============================] - 0s 87us/step - loss: 0.3369 - acc: 0.8987 - val_loss: 0.4149 - val_acc: 0.8750
Out[15]:
<keras.callbacks.History at 0x21b50473f28>

Evaluating the model on the test data

In [16]:
loss, acc = model.evaluate(x_test, y_test)
print('\nTesting loss: %.2f, acc: %.2f' % (loss, acc))
200/200 [==============================] - 0s 75us/step

Testing loss: 0.36, acc: 0.91

Predicting on the test data

In [17]:
# The predict_classes function outputs the highest probability class
# according to the trained classifier for each input example.
predicted_classes = model.predict_classes(x_test)
print("predicted:")
print(predicted_classes)
print("True Label:")
print(y_test_original)
predicted:
[7 2 3 8 5 5 4 7 3 2 0 8 8 0 2 9 3 6 7 4 0 3 6 3 9 2 7 5 2 9 7 5 5 8 9 2 5
 1 4 8 8 4 7 2 1 2 7 9 0 3 7 5 7 5 7 9 8 2 9 8 8 6 6 6 7 6 2 4 2 4 1 5 9 1
 8 4 0 5 6 2 4 3 2 7 7 7 7 0 8 1 7 8 7 7 8 9 6 2 3 1 0 2 9 6 3 5 5 0 0 9 6
 7 9 3 9 9 8 7 9 2 5 2 5 5 9 6 9 2 0 3 7 9 5 2 9 0 4 1 8 2 2 3 5 2 9 3 8 2
 7 0 9 9 0 7 6 2 4 7 9 3 7 0 7 1 9 4 7 3 4 1 5 6 7 9 1 3 5 3 5 7 4 1 3 3 1
 1 4 3 8 9 6 7 7 2 3 0 1 4 9 5]
True Label:
[7 2 3 1 5 5 4 7 3 2 0 8 8 0 2 9 3 6 7 4 0 3 6 3 9 2 7 5 2 9 7 5 5 8 9 6 5
 1 4 8 8 4 7 7 1 2 7 9 0 3 7 4 7 5 2 9 8 2 9 8 8 6 6 6 6 6 2 4 3 4 4 5 9 1
 8 2 0 5 6 2 4 3 2 7 7 7 7 1 8 1 7 8 7 7 8 9 3 2 3 1 0 2 9 6 3 5 5 0 0 3 6
 7 9 3 9 9 8 7 9 2 5 2 5 5 9 6 9 2 0 3 7 6 5 2 9 0 4 1 8 2 2 3 0 2 9 3 8 6
 7 0 9 9 0 7 6 5 4 7 9 3 7 0 7 1 9 4 7 3 4 1 5 6 7 9 1 3 5 4 5 7 4 1 3 3 1
 2 3 3 8 9 6 7 7 2 3 0 1 4 9 5]
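Comparing these two arrays element by element recovers the test accuracy; a minimal sketch using short stand-in arrays (the real `predicted_classes` and `y_test_original` each have 200 entries):

```python
import numpy as np

predicted = np.array([7, 2, 3, 8, 5])  # stand-in for predicted_classes
labels = np.array([7, 2, 3, 1, 5])     # stand-in for y_test_original
acc = np.mean(predicted == labels)     # fraction of matching positions
print('acc: %.2f' % acc)  # → acc: 0.80
```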

The complete code, from start to finish

In [ ]:
# 1. Import libraries and modules
import keras
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
from dataset import load_hoda

np.random.seed(123)  # for reproducibility

# 2. Load pre-shuffled HODA data into train and test sets
x_train_original, y_train_original, x_test_original, y_test_original = load_hoda()

# 3. Preprocess input data
''' 3.1: input data in numpy array format'''
x_train = np.array(x_train_original)
x_test = np.array(x_test_original)
'''3.2 normalize our data values to the range [0, 1]'''
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

# 4. Preprocess class labels
y_train = keras.utils.to_categorical(y_train_original, num_classes=10)
y_test = keras.utils.to_categorical(y_test_original, num_classes=10)

# 5. Define model architecture
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=25))
model.add(Dense(10, activation='softmax'))

# 6. Compile model
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

# 7. Fit model on training data
model.fit(x_train, y_train,
          epochs=30,
          batch_size=64)

# 8. Evaluate model on test data
loss, acc = model.evaluate(x_test, y_test)
print('\nTesting loss: %.2f, acc: %.2f' % (loss, acc))
Introductory Deep Learning Course
Alireza Akhavanpour
Thursday, 18 Bahman 1397
Class.Vision - AkhavanPour.ir - GitHub