
Building a model from scratch

02 Jan 2019

Reading time ~4 minutes

Cat vs. dog classification using Keras

Image Generator

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   channel_shift_range=10,
                                   horizontal_flip=True,
                                   fill_mode='nearest')

**At first I only put in simple image preprocessing. I used a Generator simply because I wanted to use flow_from_directory.**

validation_datagen = ImageDataGenerator(rescale=1./255,
                                        rotation_range=40,
                                        width_shift_range=0.2,
                                        height_shift_range=0.2,
                                        shear_range=0.2,
                                        zoom_range=0.2,
                                        channel_shift_range=10,
                                        horizontal_flip=True,
                                        fill_mode='nearest')

**Actually, there doesn't seem to be much need for augmentation in the validation Generator, since it is used purely for validation.**

BATCH_SIZE = 64
training_set = train_datagen.flow_from_directory('train',
                                                 target_size = (128, 128),
                                                 batch_size = BATCH_SIZE,
                                                 class_mode = 'binary')
Found 15724 images belonging to 2 classes.
validation_set = validation_datagen.flow_from_directory('validation',
                                            target_size = (128, 128),
                                            batch_size = BATCH_SIZE,
                                            class_mode = 'binary')
Found 1964 images belonging to 2 classes.
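Given the counts reported by flow_from_directory, the number of batches per epoch follows directly from the batch size. A minimal sketch of that arithmetic (the fit call later in the post uses 245 and 30, i.e. the floor rather than the ceiling, so the last partial batch is skipped each epoch):

```python
import math

# Counts reported by flow_from_directory above
n_train, n_val = 15724, 1964
batch_size = 64

# To cover every image once per epoch: ceil(images / batch_size) batches.
steps_per_epoch = math.ceil(n_train / batch_size)   # 246
validation_steps = math.ceil(n_val / batch_size)    # 31

print(steps_per_epoch, validation_steps)
```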

Layers

from keras.models import Model
from keras.layers import Input, Conv2D, MaxPool2D, Flatten, Dense, Dropout, BatchNormalization
from keras.callbacks import TensorBoard

input_shape = (128, 128, 3)
epochs = 150

input_img = Input(input_shape)
x = Conv2D(filters=16, kernel_size=(3, 3), activation='relu', padding='same')(input_img)
x = BatchNormalization()(x)
x = Conv2D(filters=64, kernel_size=(3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPool2D(pool_size=(2, 2))(x)
x = Conv2D(filters=128, kernel_size=(3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPool2D(pool_size=(2, 2))(x)
x = Conv2D(filters=64, kernel_size=(3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPool2D(pool_size=(2, 2))(x)
x = Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPool2D(pool_size=(2, 2))(x)
x = Flatten()(x)
x = Dense(50, activation='relu')(x)
x = Dropout(0.3)(x)
x = BatchNormalization()(x)
x = Dense(50, activation='relu')(x)
x = Dropout(0.3)(x)
tensor = Dense(1, activation='sigmoid')(x)
model = Model([input_img], [tensor])
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_7 (InputLayer)         (None, 128, 128, 3)       0         
_________________________________________________________________
conv2d_31 (Conv2D)           (None, 128, 128, 16)      448       
_________________________________________________________________
batch_normalization_12 (Batc (None, 128, 128, 16)      64        
_________________________________________________________________
conv2d_32 (Conv2D)           (None, 128, 128, 64)      9280      
_________________________________________________________________
batch_normalization_13 (Batc (None, 128, 128, 64)      256       
_________________________________________________________________
max_pooling2d_25 (MaxPooling (None, 64, 64, 64)        0         
_________________________________________________________________
conv2d_33 (Conv2D)           (None, 64, 64, 128)       73856     
_________________________________________________________________
batch_normalization_14 (Batc (None, 64, 64, 128)       512       
_________________________________________________________________
max_pooling2d_26 (MaxPooling (None, 32, 32, 128)       0         
_________________________________________________________________
conv2d_34 (Conv2D)           (None, 32, 32, 64)        73792     
_________________________________________________________________
batch_normalization_15 (Batc (None, 32, 32, 64)        256       
_________________________________________________________________
max_pooling2d_27 (MaxPooling (None, 16, 16, 64)        0         
_________________________________________________________________
conv2d_35 (Conv2D)           (None, 16, 16, 32)        18464     
_________________________________________________________________
batch_normalization_16 (Batc (None, 16, 16, 32)        128       
_________________________________________________________________
max_pooling2d_28 (MaxPooling (None, 8, 8, 32)          0         
_________________________________________________________________
flatten_7 (Flatten)          (None, 2048)              0         
_________________________________________________________________
dense_19 (Dense)             (None, 50)                102450    
_________________________________________________________________
dropout_13 (Dropout)         (None, 50)                0         
_________________________________________________________________
batch_normalization_17 (Batc (None, 50)                200       
_________________________________________________________________
dense_20 (Dense)             (None, 50)                2550      
_________________________________________________________________
dropout_14 (Dropout)         (None, 50)                0         
_________________________________________________________________
dense_21 (Dense)             (None, 1)                 51        
=================================================================
Total params: 282,307
Trainable params: 281,599
Non-trainable params: 708
_________________________________________________________________

**With transfer learning, accuracy comes out at 99%. Still, I tried modeling from scratch, in preparation for analyzing images from virtual worlds such as games.**

**Even a simple model like this gains some performance from ensembling, and tuning the hyperparameters should improve it considerably as well.**
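The ensembling idea mentioned above can be sketched as soft voting: average the sigmoid outputs of several independently trained models, then threshold. The `preds_*` arrays here are hypothetical stand-ins for `model.predict` outputs, not results from this post:

```python
import numpy as np

# Hypothetical sigmoid outputs from three independently trained models
# (in practice these would come from model_i.predict on the same images).
preds_a = np.array([0.9, 0.4, 0.2])
preds_b = np.array([0.8, 0.6, 0.1])
preds_c = np.array([0.7, 0.3, 0.3])

# Soft-voting ensemble: average the probabilities, then threshold at 0.5.
ensemble_prob = np.mean([preds_a, preds_b, preds_c], axis=0)
ensemble_label = (ensemble_prob > 0.5).astype(int)

print(ensemble_prob)   # averages: 0.8, ~0.433, 0.2
print(ensemble_label)
```

Averaging probabilities tends to cancel out the uncorrelated errors of the individual models, which is why even weak models gain from it.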

**The most important thing in a CNN is the filtering, but my problem was that I had always agonized over the number of hidden layers instead.**

**Filter design is of course mostly trial and error, but the key is to keep the Dense layers fixed while repeatedly adjusting the number of filters.**

**The idea is that if the model does feature extraction well, even a simple Dense layer should handle binary classification easily.**

**There were times when performance kept hovering around 80%; applying normalization then boosted it considerably, and accordingly I made the image preprocessing somewhat more aggressive.**
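For reference, the BatchNormalization layers used above standardize each feature over the batch and then apply a learned scale and shift. A minimal numpy sketch of the batch statistics path, with the learned gamma/beta left at their initial values of 1 and 0:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-3):
    # Standardize each feature (column) over the batch, then scale and shift.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.array([[1.0, 100.0],
              [3.0, 300.0]])
y = batch_norm(x)
# Each column now has (approximately) zero mean and unit variance,
# regardless of the very different input scales.
print(y.mean(axis=0), y.std(axis=0))
```

Keeping the activations at a stable scale like this is what makes the later layers easier to train, which matches the accuracy jump observed here.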

from keras.optimizers import RMSprop

learning_rate = 0.01
decay_rate = learning_rate / epochs
# An RMSprop optimizer with learning-rate decay is prepared here, but it is
# never passed to compile: the model actually trains with the default "adam".
rmsprop = RMSprop(lr=learning_rate, decay=decay_rate)

model.compile(optimizer="adam", loss='binary_crossentropy', metrics=['accuracy'])
model.fit_generator(training_set,
                    steps_per_epoch=245,
                    epochs=epochs,
                    validation_data=validation_set,
                    validation_steps=30,
                    shuffle=True,
                    callbacks=[TensorBoard(log_dir='/home/ki/git/cat_dog_retrain/tb',
                                           histogram_freq=0, write_graph=False)])
Epoch 1/150


/home/ki/miniconda3/envs/cuda/lib/python3.5/site-packages/PIL/TiffImagePlugin.py:763: UserWarning: Possibly corrupt EXIF data.  Expecting to read 8589934590 bytes but only got 37573. Skipping tag 34855
  " Skipping tag %s" % (size, len(data), tag))


  9/245 [>.............................] - ETA: 1:56 - loss: 0.7927 - acc: 0.5104

  Skip ...

Epoch 141/150
245/245 [==============================] - 107s 438ms/step - loss: 0.1268 - acc: 0.9494 - val_loss: 0.2688 - val_acc: 0.8884
Epoch 142/150
245/245 [==============================] - 107s 437ms/step - loss: 0.1248 - acc: 0.9514 - val_loss: 0.2191 - val_acc: 0.9100
Epoch 143/150
245/245 [==============================] - 107s 438ms/step - loss: 0.1310 - acc: 0.9485 - val_loss: 0.2229 - val_acc: 0.9147
Epoch 144/150
245/245 [==============================] - 107s 437ms/step - loss: 0.1264 - acc: 0.9514 - val_loss: 0.2121 - val_acc: 0.9095
Epoch 145/150
245/245 [==============================] - 107s 438ms/step - loss: 0.1223 - acc: 0.9526 - val_loss: 0.3029 - val_acc: 0.8968
Epoch 146/150
245/245 [==============================] - 107s 438ms/step - loss: 0.1219 - acc: 0.9528 - val_loss: 0.2086 - val_acc: 0.9153
Epoch 147/150
245/245 [==============================] - 107s 438ms/step - loss: 0.1212 - acc: 0.9510 - val_loss: 0.2349 - val_acc: 0.9116
Epoch 148/150
245/245 [==============================] - 107s 437ms/step - loss: 0.1234 - acc: 0.9490 - val_loss: 0.2373 - val_acc: 0.9074
Epoch 149/150
245/245 [==============================] - 108s 439ms/step - loss: 0.1189 - acc: 0.9522 - val_loss: 0.1908 - val_acc: 0.9268
Epoch 150/150
245/245 [==============================] - 107s 438ms/step - loss: 0.1250 - acc: 0.9517 - val_loss: 0.2822 - val_acc: 0.8974





<keras.callbacks.History at 0x7fe4c57c52e8>
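fit_generator returns the History object printed above; its .history attribute is a dict mapping metric names to one value per epoch. A sketch with made-up numbers (a plain dict standing in for history.history, whose real keys in this Keras version would be 'acc', 'val_acc', etc.) showing how to pick the best validation epoch instead of just keeping the last one:

```python
# A plain dict standing in for history.history, with made-up values:
history = {'acc':     [0.950, 0.952, 0.952],
           'val_acc': [0.897, 0.927, 0.897]}

# Pick the (1-based) epoch with the highest validation accuracy.
best_epoch = max(range(len(history['val_acc'])),
                 key=lambda i: history['val_acc'][i]) + 1
print(best_epoch, history['val_acc'][best_epoch - 1])
```

In the training log above, val_acc peaks at 0.9268 on epoch 149 rather than at the final epoch, so selecting by validation accuracy (or using a ModelCheckpoint callback) would keep the better weights.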

