Author: admin · Posted: 2019-01-27 15:41:56 · Views: 1113 · Downloads: 83
Category: Machine Learning
In [1]:
# In Colab, install the TF 2.0 beta with the following commands.
!pip install tensorflow-gpu==2.0.0-rc
!apt install -y -q fonts-nanum
Requirement already satisfied: tensorflow-gpu==2.0.0-rc in /usr/local/lib/python3.6/dist-packages (2.0.0rc0)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (1.12.0)
Requirement already satisfied: keras-applications>=1.0.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (1.0.8)
Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (3.7.1)
Requirement already satisfied: tf-estimator-nightly<1.14.0.dev2019080602,>=1.14.0.dev2019080601 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (1.14.0.dev2019080601)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (0.8.0)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (1.1.0)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (0.33.4)
Requirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (0.1.7)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (3.0.1)
Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (0.7.1)
Requirement already satisfied: numpy<2.0,>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (1.16.4)
Requirement already satisfied: gast>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (0.2.2)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (1.1.0)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (1.15.0)
Requirement already satisfied: tb-nightly<1.15.0a20190807,>=1.15.0a20190806 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (1.15.0a20190806)
Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu==2.0.0-rc) (1.11.2)
Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.8->tensorflow-gpu==2.0.0-rc) (2.8.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.6.1->tensorflow-gpu==2.0.0-rc) (41.2.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<1.15.0a20190807,>=1.15.0a20190806->tensorflow-gpu==2.0.0-rc) (0.15.5)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tb-nightly<1.15.0a20190807,>=1.15.0a20190806->tensorflow-gpu==2.0.0-rc) (3.1.1)
Reading package lists...
Building dependency tree...
Reading state information...
fonts-nanum is already the newest version (20170925-1).
0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.

Autoencoders

An autoencoder is trained to predict its own input: the target of the model is the input data itself. If the hidden layer is made narrower than the input dimension, the learned representation acts as a compression of the data, i.e. a dimensionality reduction.
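In symbols (a generic sketch; $f$, $g$, $D$, $d$ are my notation, not used elsewhere in this post), training jointly fits an encoder $f$ and a decoder $g$ to minimize the reconstruction loss:

```latex
\min_{f,\,g}\ \frac{1}{N}\sum_{i=1}^{N} L\bigl(x_i,\ g(f(x_i))\bigr),
\qquad f:\mathbb{R}^{D}\to\mathbb{R}^{d},\quad g:\mathbb{R}^{d}\to\mathbb{R}^{D},\quad d < D
```

The code $f(x_i)$ is the compressed representation; with $d < D$ the network cannot simply copy its input.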

In [0]:
import logging
logging.getLogger("tensorflow").setLevel(logging.ERROR)

import tensorflow
from tensorflow.keras.datasets import mnist
import numpy as np

(x_train_2d, _), (_, _) = mnist.load_data()
x_train_2d = x_train_2d.astype(np.float32) / 255.0
x_train = x_train_2d.reshape(60000, 784) 
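The preprocessing above can be sanity-checked without downloading MNIST. A minimal sketch, where `fake_images` is a hypothetical stand-in with the same dtype and per-image shape as the real data:

```python
import numpy as np

# Hypothetical stand-in for the MNIST images: uint8 values in [0, 255]
fake_images = np.random.randint(0, 256, size=(10, 28, 28), dtype=np.uint8)

# Scale pixel values from [0, 255] to [0.0, 1.0]
x = fake_images.astype(np.float32) / 255.0

# Flatten each 28x28 image into a 784-dimensional vector
x_flat = x.reshape(len(x), 784)

print(x_flat.shape)  # (10, 784)
```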
In [3]:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Sequential

autoencoder1 = Sequential()
autoencoder1.add(Dense(32, input_dim=784, activation='relu'))
autoencoder1.add(Dense(784, activation='sigmoid'))
autoencoder1.compile(optimizer="adam", loss="binary_crossentropy")

autoencoder1.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 32)                25120     
_________________________________________________________________
dense_1 (Dense)              (None, 784)               25872     
=================================================================
Total params: 50,992
Trainable params: 50,992
Non-trainable params: 0
_________________________________________________________________
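The parameter counts in the summary follow directly from each Dense layer's weight matrix plus its bias vector:

```python
# Dense layer parameters = fan_in * fan_out weights + fan_out biases
encoder_params = 784 * 32 + 32   # first Dense: 784 -> 32
decoder_params = 32 * 784 + 784  # second Dense: 32 -> 784
total = encoder_params + decoder_params
print(encoder_params, decoder_params, total)  # 25120 25872 50992
```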
In [4]:
%%time
history = autoencoder1.fit(x_train, x_train, epochs=50, batch_size=256, verbose=0)
CPU times: user 1min, sys: 6.55 s, total: 1min 7s
Wall time: 48.3 s
In [0]:
import matplotlib as mpl
import matplotlib.pylab as plt

mpl.rc('font', family='NanumGothic')
mpl.rc('axes', unicode_minus=False)
mpl.rc('figure', dpi=300)
In [6]:
%matplotlib inline

n = 4
x_train_recovered = autoencoder1.predict(x_train[:n, :])
plt.figure(figsize=(10, 4))
for i in range(n):
    plt.subplot(2, n, i + 1)
    plt.imshow(x_train[i, :].reshape(28, 28))
    plt.title("Original image {}".format(i + 1))
    plt.gray(); plt.axis("off")

    plt.subplot(2, n, i + 1 + n)
    plt.imshow(x_train_recovered[i, :].reshape(28, 28))
    plt.title("Reconstructed image {}".format(i + 1))
    plt.gray(); plt.axis("off")

plt.tight_layout()
plt.show()
In [0]:
from tensorflow.keras.models import Model

encoder = Model(autoencoder1.input, autoencoder1.layers[0].output)
In [8]:
encoder.summary()
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_input (InputLayer)     [(None, 784)]             0         
_________________________________________________________________
dense (Dense)                (None, 32)                25120     
=================================================================
Total params: 25,120
Trainable params: 25,120
Non-trainable params: 0
_________________________________________________________________
In [0]:
from tensorflow.keras.layers import Input

input_decoder = Input(shape=(32,))
decoder = Model(input_decoder, autoencoder1.layers[1](input_decoder))
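Why this split works: the encoder reuses the first Dense layer of `autoencoder1` and the decoder wraps the second, so composing the two halves reproduces the full model exactly. A NumPy sketch of the same idea, with random (untrained) weights standing in for the trained layers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights playing the role of the two trained Dense layers
W_enc, b_enc = rng.normal(size=(784, 32)), np.zeros(32)
W_dec, b_dec = rng.normal(size=(32, 784)), np.zeros(784)

relu = lambda z: np.maximum(z, 0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def autoencoder(x):   # full model: both layers in sequence
    return sigmoid(relu(x @ W_enc + b_enc) @ W_dec + b_dec)

def encoder(x):       # first layer only: 784 -> 32
    return relu(x @ W_enc + b_enc)

def decoder(h):       # second layer only: 32 -> 784
    return sigmoid(h @ W_dec + b_dec)

x = rng.random((4, 784))
# Composing the halves gives the same output as the full model
print(np.allclose(decoder(encoder(x)), autoencoder(x)))  # True
```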
In [10]:
decoder.summary()
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 32)]              0         
_________________________________________________________________
dense_1 (Dense)              (None, 784)               25872     
=================================================================
Total params: 25,872
Trainable params: 25,872
Non-trainable params: 0
_________________________________________________________________
In [11]:
n = 4
x_train_encoded = encoder.predict(x_train[:n, :])
x_train_recovered = decoder.predict(x_train_encoded)

plt.figure(figsize=(10, 5))
for i in range(n):
    plt.subplot(3, n, i + 1)
    plt.imshow(x_train[i, :].reshape(28, 28))
    plt.title("Original image {}".format(i + 1))
    plt.gray(); plt.axis("off")

    plt.subplot(3, n, i + 1 + n)
    plt.imshow(x_train_encoded[i].reshape(1, 32), aspect=3)
    plt.title("Encoded vector {}".format(i + 1))
    plt.axis("off")

    plt.subplot(3, n, i + 1 + 2 * n)
    plt.imshow(x_train_recovered[i, :].reshape(28, 28))
    plt.title("Reconstructed image {}".format(i + 1))
    plt.gray(); plt.axis("off")

plt.tight_layout()
plt.show()

Multi-layer autoencoder

In [0]:
autoencoder2 = Sequential()
autoencoder2.add(Dense(128, input_dim=784, activation='relu'))
autoencoder2.add(Dense(64, activation='relu'))
autoencoder2.add(Dense(32, activation='relu'))
autoencoder2.add(Dense(64, activation='relu'))
autoencoder2.add(Dense(128, activation='relu'))
autoencoder2.add(Dense(784, activation='sigmoid'))
autoencoder2.compile(optimizer="adam", loss="binary_crossentropy")
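The summary for `autoencoder2` is not printed above, but its parameter count can be computed the same way as before from the symmetric 784-128-64-32-64-128-784 layer widths:

```python
# Layer widths of autoencoder2, input dimension first
dims = [784, 128, 64, 32, 64, 128, 784]

# Each Dense layer contributes fan_in * fan_out weights plus fan_out biases
params = [d_in * d_out + d_out for d_in, d_out in zip(dims, dims[1:])]
print(params)       # [100480, 8256, 2080, 2112, 8320, 101136]
print(sum(params))  # 222384
```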
In [13]:
%%time
history = autoencoder2.fit(x_train, x_train, epochs=100, batch_size=256, verbose=0)
CPU times: user 2min 21s, sys: 18.6 s, total: 2min 40s
Wall time: 2min 4s
In [14]:
n = 4
x_train_recovered = autoencoder2.predict(x_train[:n, :])
plt.figure(figsize=(10, 4))
for i in range(n):
    plt.subplot(2, n, i + 1)
    plt.imshow(x_train[i, :].reshape(28, 28))
    plt.title("Original image {}".format(i + 1))
    plt.gray(); plt.axis("off")

    plt.subplot(2, n, i + 1 + n)
    plt.imshow(x_train_recovered[i, :].reshape(28, 28))
    plt.title("Reconstructed image {}".format(i + 1))
    plt.gray(); plt.axis("off")

plt.tight_layout()
plt.show()

CNN autoencoder

2D Deconvolution

In [15]:
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, MaxPooling2D

autoencoder3 = Sequential()

# Encoder
autoencoder3.add(Conv2D(16, 3, input_shape=(28, 28, 1), activation='relu', padding='same'))
autoencoder3.add(MaxPooling2D(2, padding='same'))
autoencoder3.add(Conv2D(32, 3, activation='relu', padding='same'))
autoencoder3.add(MaxPooling2D(2, padding='same'))

# Decoder
autoencoder3.add(Conv2DTranspose(32, 3, strides=2, padding='same'))
autoencoder3.add(Conv2D(16, 3, activation='relu', padding='same'))
autoencoder3.add(Conv2DTranspose(32, 3, strides=2, padding='same'))
# sigmoid keeps the output in [0, 1], matching the binary cross-entropy loss
autoencoder3.add(Conv2D(1, 3, activation='sigmoid', padding='same'))

autoencoder3.compile(optimizer="adam", loss="binary_crossentropy")

autoencoder3.summary()
Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 28, 28, 16)        160       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 16)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 14, 14, 32)        4640      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 7, 7, 32)          0         
_________________________________________________________________
conv2d_transpose (Conv2DTran (None, 14, 14, 32)        9248      
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 14, 14, 16)        4624      
_________________________________________________________________
conv2d_transpose_1 (Conv2DTr (None, 28, 28, 32)        4640      
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 28, 28, 1)         289       
=================================================================
Total params: 23,601
Trainable params: 23,601
Non-trainable params: 0
_________________________________________________________________
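The spatial sizes in the summary follow two simple rules under `padding='same'`: pooling (or any strided layer) divides the size by the stride, rounding up, while `Conv2DTranspose` multiplies it by the stride; stride-1 convolutions leave it unchanged. A sketch (the helper names are mine, not Keras API):

```python
def down(size, stride):
    # 'same' padding with stride: output = ceil(size / stride)
    return -(-size // stride)

def up(size, stride):
    # Conv2DTranspose with 'same' padding: output = size * stride
    return size * stride

shapes = [28]
shapes.append(down(shapes[-1], 2))  # MaxPooling2D(2): 28 -> 14
shapes.append(down(shapes[-1], 2))  # MaxPooling2D(2): 14 -> 7
shapes.append(up(shapes[-1], 2))    # Conv2DTranspose stride 2: 7 -> 14
shapes.append(up(shapes[-1], 2))    # Conv2DTranspose stride 2: 14 -> 28
print(shapes)  # [28, 14, 7, 14, 28]
```

The decoder thus exactly undoes the two halvings, recovering the original 28x28 resolution.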
In [16]:
%%time
x_train_2d_c = np.expand_dims(x_train_2d, -1)
history = autoencoder3.fit(x_train_2d_c, x_train_2d_c, epochs=20, batch_size=256, verbose=0)
CPU times: user 1min 3s, sys: 21.5 s, total: 1min 25s
Wall time: 1min 46s
In [17]:
n = 4
x_train_recovered = autoencoder3.predict(x_train_2d_c[:n])
plt.figure(figsize=(10, 4))
for i in range(n):
    plt.subplot(2, n, i + 1)
    plt.imshow(x_train_2d[i])
    plt.title("Original image {}".format(i + 1))
    plt.gray(); plt.axis("off")

    plt.subplot(2, n, i + 1 + n)
    plt.imshow(x_train_recovered[i, :].reshape(28, 28))
    plt.title("Reconstructed image {}".format(i + 1))
    plt.gray(); plt.axis("off")

plt.tight_layout()
plt.show()
