
CNN Implementation with Keras - CIFAR10

Loading the CIFAR10 Data

In [1]:
from keras.datasets import cifar10

(X_train, y_train0), (X_test, y_test0) = cifar10.load_data()
print(X_train.shape, X_train.dtype)
print(y_train0.shape, y_train0.dtype)
print(X_test.shape, X_test.dtype)
print(y_test0.shape, y_test0.dtype)
Using TensorFlow backend.
(50000, 32, 32, 3) uint8
(50000, 1) uint8
(10000, 32, 32, 3) uint8
(10000, 1) int64

Inspecting the Data

In [2]:
import matplotlib.pyplot as plt

# show a few sample training images
plt.subplot(141)
plt.imshow(X_train[0], interpolation="bicubic")
plt.grid(False)
plt.subplot(142)
plt.imshow(X_train[4], interpolation="bicubic")
plt.grid(False)
plt.subplot(143)
plt.imshow(X_train[8], interpolation="bicubic")
plt.grid(False)
plt.subplot(144)
plt.imshow(X_train[12], interpolation="bicubic")
plt.grid(False)
plt.show()
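For reference, the integer labels map to a fixed set of ten class names. A minimal sketch that repeats the plot above with each panel titled by its class (the `labels` list is the standard CIFAR10 class order, written out by hand, not something exported by Keras):

In [ ]:
# standard CIFAR10 class names, indexed by the 0-9 label values
labels = ["airplane", "automobile", "bird", "cat", "deer",
          "dog", "frog", "horse", "ship", "truck"]

plt.figure(figsize=(10, 3))
for i, idx in enumerate([0, 4, 8, 12]):
    plt.subplot(1, 4, i + 1)
    plt.imshow(X_train[idx], interpolation="bicubic")
    plt.title(labels[int(y_train0[idx])])  # y_train0 has shape (N, 1)
    plt.grid(False)
plt.show()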

Type Conversion and Scaling

In [3]:
# scale the uint8 pixel values to floats in [0, 1]
X_train = X_train.astype('float32')/255.0
X_test = X_test.astype('float32')/255.0

print(X_train.shape, X_train.dtype)
(50000, 32, 32, 3) float32
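A quick sanity check on the scaling (a sketch; since the raw uint8 pixels span 0-255, the result should be 0.0 and 1.0):

In [ ]:
# after dividing by 255 the pixel values should lie in [0, 1]
print(X_train.min(), X_train.max())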
In [4]:
from keras.utils import np_utils

# one-hot encode the integer labels (0-9) into length-10 vectors
Y_train = np_utils.to_categorical(y_train0, 10)
Y_test = np_utils.to_categorical(y_test0, 10)
Y_train[:4]
Out:
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  1.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  1.],
       [ 0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.,  0.]])
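Each row above is a one-hot vector; taking the argmax maps it back to the original integer label, which makes for a quick consistency check (a minimal sketch):

In [ ]:
import numpy as np

# argmax over each one-hot row recovers the original integer label
print(np.argmax(Y_train[:4], axis=1))                      # [6 9 9 4], per the output above
print((np.argmax(Y_train, axis=1) == y_train0.ravel()).all())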

Building the Model

In [6]:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.regularizers import l2
import numpy as np

np.random.seed(0)  # fix the seed for reproducible weight initialization

# four Conv-Conv-Pool blocks with increasing dropout, followed by a dense classifier
model = Sequential()

model.add(Conv2D(64, (5, 5), activation='relu', input_shape=(32, 32, 3), padding='same', kernel_regularizer=l2(0.001)))
model.add(Conv2D(64, (5, 5), activation='relu', padding='same'))
model.add(MaxPooling2D())
model.add(Dropout(0.1))

model.add(Conv2D(64, (5, 5), activation='relu', padding='same', kernel_regularizer=l2(0.001)))
model.add(Conv2D(64, (5, 5), activation='relu', padding='same', kernel_regularizer=l2(0.001)))
model.add(MaxPooling2D())
model.add(Dropout(0.2))

model.add(Conv2D(64, (5, 5), activation='relu', padding='same', kernel_regularizer=l2(0.001)))
model.add(Conv2D(64, (5, 5), activation='relu', padding='same', kernel_regularizer=l2(0.001)))
model.add(MaxPooling2D())
model.add(Dropout(0.3))

model.add(Conv2D(64, (5, 5), activation='relu', padding='same', kernel_regularizer=l2(0.001)))
model.add(Conv2D(64, (5, 5), activation='relu', padding='same', kernel_regularizer=l2(0.001)))
model.add(MaxPooling2D())
model.add(Dropout(0.4))

model.add(Flatten())
model.add(Dense(128, activation='relu', kernel_regularizer=l2(0.001)))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
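Before compiling, it can be worth calling model.summary() to verify the layer output shapes and parameter counts (output omitted here):

In [ ]:
model.summary()  # prints per-layer output shapes and parameter counts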
In [7]:
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy'])
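Passing the string 'adadelta' uses the optimizer's default hyperparameters. To tune them, an optimizer instance can be passed instead; a sketch with the 2017-era Keras defaults spelled out (check your version's documentation for the exact values):

In [ ]:
from keras.optimizers import Adadelta

# equivalent to optimizer='adadelta' with the defaults made explicit
model.compile(loss='categorical_crossentropy',
              optimizer=Adadelta(lr=1.0, rho=0.95, epsilon=1e-08),
              metrics=['accuracy'])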

Training

In [8]:
%%time
hist = model.fit(X_train, Y_train, epochs=50, batch_size=50, validation_data=(X_test, Y_test), verbose=2)
Train on 50000 samples, validate on 10000 samples
Epoch 1/50
805s - loss: 2.3446 - acc: 0.1953 - val_loss: 2.0038 - val_acc: 0.2895
Epoch 2/50
799s - loss: 1.8346 - acc: 0.3548 - val_loss: 1.5998 - val_acc: 0.4470
Epoch 3/50
798s - loss: 1.6218 - acc: 0.4480 - val_loss: 1.4994 - val_acc: 0.4986
Epoch 4/50
798s - loss: 1.4549 - acc: 0.5235 - val_loss: 1.2818 - val_acc: 0.5847
Epoch 5/50
797s - loss: 1.3146 - acc: 0.5852 - val_loss: 1.1751 - val_acc: 0.6336
Epoch 6/50
798s - loss: 1.2116 - acc: 0.6293 - val_loss: 1.1841 - val_acc: 0.6374
Epoch 7/50
796s - loss: 1.1256 - acc: 0.6652 - val_loss: 1.0515 - val_acc: 0.6906
Epoch 8/50
796s - loss: 1.0624 - acc: 0.6915 - val_loss: 1.0393 - val_acc: 0.6980
Epoch 9/50
796s - loss: 1.0071 - acc: 0.7157 - val_loss: 0.9961 - val_acc: 0.7227
Epoch 10/50
796s - loss: 0.9634 - acc: 0.7335 - val_loss: 0.9150 - val_acc: 0.7507
Epoch 11/50
796s - loss: 0.9236 - acc: 0.7517 - val_loss: 0.9543 - val_acc: 0.7400
Epoch 12/50
796s - loss: 0.8937 - acc: 0.7654 - val_loss: 0.8969 - val_acc: 0.7649
Epoch 13/50
795s - loss: 0.8581 - acc: 0.7776 - val_loss: 0.9003 - val_acc: 0.7674
Epoch 14/50
795s - loss: 0.8344 - acc: 0.7901 - val_loss: 0.8698 - val_acc: 0.7726
Epoch 15/50
795s - loss: 0.8110 - acc: 0.8007 - val_loss: 0.8746 - val_acc: 0.7763
Epoch 16/50
795s - loss: 0.7919 - acc: 0.8061 - val_loss: 0.8840 - val_acc: 0.7717
Epoch 17/50
794s - loss: 0.7741 - acc: 0.8156 - val_loss: 0.9260 - val_acc: 0.7673
Epoch 18/50
795s - loss: 0.7569 - acc: 0.8194 - val_loss: 0.8693 - val_acc: 0.7861
Epoch 19/50
795s - loss: 0.7374 - acc: 0.8300 - val_loss: 0.8852 - val_acc: 0.7850
Epoch 20/50
794s - loss: 0.7225 - acc: 0.8354 - val_loss: 0.8494 - val_acc: 0.7936
Epoch 21/50
795s - loss: 0.7094 - acc: 0.8417 - val_loss: 0.8625 - val_acc: 0.7960
Epoch 22/50
794s - loss: 0.7018 - acc: 0.8445 - val_loss: 0.8976 - val_acc: 0.7882
Epoch 23/50
794s - loss: 0.6889 - acc: 0.8489 - val_loss: 0.8776 - val_acc: 0.7931
Epoch 24/50
795s - loss: 0.6732 - acc: 0.8562 - val_loss: 0.8531 - val_acc: 0.8004
Epoch 25/50
793s - loss: 0.6696 - acc: 0.8595 - val_loss: 0.9632 - val_acc: 0.7807
Epoch 26/50
794s - loss: 0.6544 - acc: 0.8641 - val_loss: 0.9218 - val_acc: 0.7935
Epoch 27/50
795s - loss: 0.6459 - acc: 0.8677 - val_loss: 0.9813 - val_acc: 0.7725
Epoch 28/50
795s - loss: 0.6412 - acc: 0.8698 - val_loss: 0.8767 - val_acc: 0.8068
Epoch 29/50
794s - loss: 0.6361 - acc: 0.8729 - val_loss: 0.9312 - val_acc: 0.7919
Epoch 30/50
795s - loss: 0.6241 - acc: 0.8774 - val_loss: 0.8659 - val_acc: 0.8063
Epoch 31/50
794s - loss: 0.6181 - acc: 0.8812 - val_loss: 0.9105 - val_acc: 0.8014
Epoch 32/50
794s - loss: 0.6146 - acc: 0.8835 - val_loss: 0.8629 - val_acc: 0.8119
Epoch 33/50
794s - loss: 0.6028 - acc: 0.8877 - val_loss: 0.8504 - val_acc: 0.8202
Epoch 34/50
793s - loss: 0.5987 - acc: 0.8910 - val_loss: 0.8816 - val_acc: 0.8079
Epoch 35/50
793s - loss: 0.5925 - acc: 0.8913 - val_loss: 0.8411 - val_acc: 0.8246
Epoch 36/50
795s - loss: 0.5870 - acc: 0.8932 - val_loss: 0.8778 - val_acc: 0.8151
Epoch 37/50
794s - loss: 0.5848 - acc: 0.8955 - val_loss: 0.8574 - val_acc: 0.8132
Epoch 38/50
793s - loss: 0.5774 - acc: 0.8978 - val_loss: 0.9006 - val_acc: 0.8067
Epoch 39/50
793s - loss: 0.5765 - acc: 0.8976 - val_loss: 0.8683 - val_acc: 0.8229
Epoch 40/50
794s - loss: 0.5677 - acc: 0.9034 - val_loss: 0.9396 - val_acc: 0.8107
Epoch 41/50
793s - loss: 0.5690 - acc: 0.9017 - val_loss: 0.8781 - val_acc: 0.8197
Epoch 42/50
794s - loss: 0.5611 - acc: 0.9043 - val_loss: 0.9152 - val_acc: 0.8151
Epoch 43/50
793s - loss: 0.5572 - acc: 0.9084 - val_loss: 0.9119 - val_acc: 0.8185
Epoch 44/50
794s - loss: 0.5530 - acc: 0.9084 - val_loss: 0.9204 - val_acc: 0.8164
Epoch 45/50
794s - loss: 0.5556 - acc: 0.9087 - val_loss: 0.9119 - val_acc: 0.8101
Epoch 46/50
793s - loss: 0.5501 - acc: 0.9092 - val_loss: 0.9623 - val_acc: 0.8086
Epoch 47/50
794s - loss: 0.5463 - acc: 0.9104 - val_loss: 0.9223 - val_acc: 0.8141
Epoch 48/50
794s - loss: 0.5435 - acc: 0.9119 - val_loss: 0.9160 - val_acc: 0.8148
Epoch 49/50
793s - loss: 0.5389 - acc: 0.9147 - val_loss: 0.9242 - val_acc: 0.8130
Epoch 50/50
794s - loss: 0.5390 - acc: 0.9148 - val_loss: 0.9189 - val_acc: 0.8125
CPU times: user 6d 5h 59min 21s, sys: 6h 38min 28s, total: 6d 12h 37min 49s
Wall time: 11h 2min 49s
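The final test performance can also be read off directly with model.evaluate; a minimal sketch (the accuracy should match the last val_acc above, roughly 0.81):

In [ ]:
loss, acc = model.evaluate(X_test, Y_test, verbose=0)
print("test loss: %.4f, test accuracy: %.4f" % (loss, acc))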
In [9]:
plt.plot(hist.history["acc"], label="train accuracy")
plt.plot(hist.history["val_acc"], label="validation accuracy")
plt.legend(loc="lower right")
plt.show()
In [10]:
model.save("cifar10_2.hdf5")
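The saved HDF5 file holds the architecture, the trained weights, and the optimizer state, so the model can be restored later without re-running the training above; a minimal sketch:

In [ ]:
from keras.models import load_model

model2 = load_model("cifar10_2.hdf5")  # rebuilds the model and loads its weights
y_prob = model2.predict(X_test[:4])    # class probabilities, shape (4, 10)
print(y_prob.argmax(axis=1))           # predicted class indices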

Questions/Comments

Weight initialization tada*** July 29, 2018 8:16 AM

The code doesn't seem to apply any weight initialization. Why is that?
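For context on this question: Keras layers are never left uninitialized. Conv2D and Dense default to kernel_initializer='glorot_uniform', so an explicit initializer is only needed to override that default. A sketch of overriding it (using the built-in 'he_normal' initializer as an example):

In [ ]:
from keras.models import Sequential
from keras.layers import Conv2D

# the default is kernel_initializer='glorot_uniform' (Xavier uniform);
# here He-normal is chosen explicitly, which pairs well with relu
m = Sequential()
m.add(Conv2D(64, (5, 5), activation='relu', padding='same',
             input_shape=(32, 32, 3), kernel_initializer='he_normal'))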