[DL Project: Fashion Style Classification App 'FASI'] 2. CNN Classification Modeling

Data Preprocessing


  • Last time, we finished crawling Musinsa street snap photos: 600 images for each of 7 styles, 4,200 images in total.
  • The data was already curated at the source, so there are no outliers to clean up.
  • We will load it with image_dataset_from_directory and keep a held-out validation set while developing the model.
  • Split the directory so that 80% of the images go to training and 20% to validation (a sketch of the split follows).
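The split code itself isn't shown in the post; below is a minimal sketch of how it could be done, assuming the crawled images start out in style/<class>/ folders and carry a .jpg extension (the tr/val folder names and class list come from the post; the rest is my assumption).

import random
import shutil
from pathlib import Path

random.seed(6)
root = Path('/content/drive/MyDrive/style')
classes = ['street', 'simple', 'classic', 'work', 'unique', 'sexy', 'girlish']

for cls in classes:
    images = sorted((root / cls).glob('*.jpg'))
    random.shuffle(images)
    n_val = int(len(images) * 0.2)  # 20% of each class goes to validation
    for split, files in [('val', images[:n_val]), ('tr', images[n_val:])]:
        dest = root / split / cls
        dest.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.move(str(f), str(dest / f.name))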

Directory structure

style
├── tr
│   ├── classic
│   ├── girlish
│   ├── sexy
│   ├── simple
│   ├── street
│   ├── unique
│   └── work
└── val
    ├── classic
    ├── girlish
    ├── sexy
    ├── simple
    ├── street
    ├── unique
    └── work


CNN Classification Modeling

Importing libraries

import tensorflow as tf
import numpy as np

from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.applications import vgg16  # consistent with the tensorflow.keras imports above; unused below

from tensorflow.keras.preprocessing import image
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Conv2D, MaxPooling2D, Flatten, BatchNormalization, Dropout
from tensorflow.keras.models import Model, Sequential

Loading the data

  • Import the data with image_dataset_from_directory.
  • Keep the defaults for batch size and image shape (batch_size=32, image_size=(256, 256)).
tr = '/content/drive/MyDrive/style/tr'
val = '/content/drive/MyDrive/style/val'

# Declare the list of classes
class_list = ['street', 'simple', 'classic', 'work', 'unique', 'sexy', 'girlish']

tr = tf.keras.preprocessing.image_dataset_from_directory(
    tr,
    labels="inferred",
    label_mode="categorical",
    class_names=class_list,
    seed=6,
)

val = tf.keras.preprocessing.image_dataset_from_directory(
    val,
    labels="inferred",
    label_mode="categorical",
    class_names=class_list,
    seed=6,
)

tr, val
'''
Found 3212 files belonging to 7 classes.
Found 830 files belonging to 7 classes.
(<BatchDataset shapes: ((None, 256, 256, 3), (None, 7)), types: (tf.float32, tf.float32)>,
 <BatchDataset shapes: ((None, 256, 256, 3), (None, 7)), types: (tf.float32, tf.float32)>)
'''
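One tweak not in the post: the first baseline epoch below takes almost ten minutes, which looks dominated by reading images from Drive. Caching and prefetching the datasets (standard tf.data calls) would likely help:

AUTOTUNE = tf.data.AUTOTUNE  # tf.data.experimental.AUTOTUNE on TF <= 2.3
tr = tr.cache().prefetch(buffer_size=AUTOTUNE)    # cache decoded images, overlap input with training
val = val.cache().prefetch(buffer_size=AUTOTUNE)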

Baseline model

  • With 7 classes, the chance level is about 14% (1/7): the model at least has to beat random guessing.
  • Rather than taking the chance level itself as the baseline, we build a simple CNN model so later improvements have a concrete reference point.
  • The final Dense layer has 7 units, one per class, with a softmax activation.
# Build a simple CNN
model1 = Sequential()
model1.add(Conv2D(32, (3, 3), activation='relu', input_shape=(256, 256, 3)))
model1.add(MaxPooling2D((2, 2)))
model1.add(Conv2D(64, (3, 3), activation='relu'))
model1.add(MaxPooling2D((2, 2)))
model1.add(Conv2D(64, (3, 3), activation='relu'))
model1.add(Flatten())
model1.add(Dense(64, activation='relu'))
model1.add(Dense(7, activation='softmax')) # classify into 7 classes

model1.summary()
'''
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 254, 254, 32)      896       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 127, 127, 32)      0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 125, 125, 64)      18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 62, 62, 64)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 60, 60, 64)        36928     
_________________________________________________________________
flatten (Flatten)            (None, 230400)            0         
_________________________________________________________________
dense (Dense)                (None, 64)                14745664  
_________________________________________________________________
dense_1 (Dense)              (None, 7)                 455       
=================================================================
Total params: 14,802,439
Trainable params: 14,802,439
Non-trainable params: 0
_________________________________________________________________
'''
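Almost all of the 14.8M parameters sit at the Flatten→Dense transition: Flatten emits 60 × 60 × 64 = 230,400 values, so Dense(64) alone holds 230,400 × 64 + 64 = 14,745,664 weights, matching the summary above.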

Compile & fit

model1.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

model1.fit(tr, validation_data=val, epochs=5)
'''
Epoch 1/5
105/105 [==============================] - 586s 5s/step - loss: 99.1621 - accuracy: 0.1586 - val_loss: 1.9354 - val_accuracy: 0.2005
Epoch 2/5
105/105 [==============================] - 31s 281ms/step - loss: 1.7764 - accuracy: 0.2984 - val_loss: 1.9811 - val_accuracy: 0.1874
Epoch 3/5
105/105 [==============================] - 31s 283ms/step - loss: 1.3210 - accuracy: 0.5030 - val_loss: 2.3484 - val_accuracy: 0.1933
Epoch 4/5
105/105 [==============================] - 31s 280ms/step - loss: 0.9013 - accuracy: 0.6694 - val_loss: 3.5334 - val_accuracy: 0.1802
Epoch 5/5
105/105 [==============================] - 31s 281ms/step - loss: 0.6528 - accuracy: 0.7645 - val_loss: 3.8732 - val_accuracy: 0.1706
'''
  • Well... at least it's slightly better than guessing blindly, though the exploding validation loss shows the model is overfitting badly (a guard for that is sketched below).
  • Now let's improve the performance.
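The runs in this post use no callbacks; a standard guard against the overfitting above is Keras's EarlyStopping (real Keras API; the patience value is my choice):

from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
model1.fit(tr, validation_data=val, epochs=20, callbacks=[early_stop])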

Improving performance

  • Before moving to transfer learning, we try stacking more layers of the network to push performance up.
model2 = Sequential()
model2.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same',
                  activation='relu', input_shape=(256, 256, 3)))
model2.add(Conv2D(filters=32, kernel_size=(3, 3), padding='same',
                  activation='relu'))
model2.add(MaxPooling2D(pool_size=(2, 2)))
model2.add(BatchNormalization())
model2.add(Dropout(0.25))
model2.add(Conv2D(filters=64, kernel_size=(3, 3), padding='same',
                  activation='relu'))
model2.add(Conv2D(filters=64, kernel_size=(3, 3), padding='same',
                  activation='relu'))
model2.add(MaxPooling2D(pool_size=(2, 2)))
model2.add(BatchNormalization())
model2.add(Dropout(0.25))
model2.add(Conv2D(filters=86, kernel_size=(3, 3), padding='same',
                  activation='relu'))
model2.add(Conv2D(filters=86, kernel_size=(3, 3), padding='same',
                  activation='relu'))
model2.add(MaxPooling2D(pool_size=(2, 2)))
model2.add(BatchNormalization())
model2.add(Dropout(0.25))
model2.add(Flatten())
model2.add(Dense(1024, activation='relu'))
model2.add(Dropout(0.5))
model2.add(Dense(512, activation='relu'))
model2.add(Dropout(0.5))
model2.add(Dense(7, activation='softmax'))
model2.summary()
'''
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_3 (Conv2D)            (None, 256, 256, 32)      896       
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 256, 256, 32)      9248      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 128, 128, 32)      0         
_________________________________________________________________
batch_normalization (BatchNo (None, 128, 128, 32)      128       
_________________________________________________________________
dropout (Dropout)            (None, 128, 128, 32)      0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 128, 128, 64)      18496     
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 128, 128, 64)      36928     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 64, 64, 64)        0         
_________________________________________________________________
batch_normalization_1 (Batch (None, 64, 64, 64)        256       
_________________________________________________________________
dropout_1 (Dropout)          (None, 64, 64, 64)        0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 64, 64, 86)        49622     
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 64, 64, 86)        66650     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 32, 32, 86)        0         
_________________________________________________________________
batch_normalization_2 (Batch (None, 32, 32, 86)        344       
_________________________________________________________________
dropout_2 (Dropout)          (None, 32, 32, 86)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 88064)             0         
_________________________________________________________________
dense_2 (Dense)              (None, 1024)              90178560  
_________________________________________________________________
dropout_3 (Dropout)          (None, 1024)              0         
_________________________________________________________________
dense_3 (Dense)              (None, 512)               524800    
_________________________________________________________________
dropout_4 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_4 (Dense)              (None, 7)                 3591      
=================================================================
Total params: 90,889,519
Trainable params: 90,889,155
Non-trainable params: 364
_________________________________________________________________
'''
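Again the Flatten→Dense transition dominates: 32 × 32 × 86 = 88,064 values into Dense(1024) gives 88,064 × 1,024 + 1,024 = 90,178,560 of the 90.9M total parameters. Global average pooling, used in the transfer-learning models below, avoids exactly this blow-up.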

Compile & fit

model2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

model2.fit(tr, validation_data=val, epochs=10)
'''
Epoch 1/10
105/105 [==============================] - 47s 423ms/step - loss: 1.7905 - accuracy: 0.3196 - val_loss: 2.5823 - val_accuracy: 0.2196
Epoch 2/10
105/105 [==============================] - 46s 419ms/step - loss: 1.6277 - accuracy: 0.3778 - val_loss: 1.9698 - val_accuracy: 0.2446
Epoch 3/10
105/105 [==============================] - 46s 421ms/step - loss: 1.4714 - accuracy: 0.4499 - val_loss: 1.9101 - val_accuracy: 0.2506
Epoch 4/10
105/105 [==============================] - 46s 420ms/step - loss: 1.3013 - accuracy: 0.5176 - val_loss: 2.0795 - val_accuracy: 0.2780
Epoch 5/10
105/105 [==============================] - 45s 415ms/step - loss: 1.0184 - accuracy: 0.6237 - val_loss: 2.4519 - val_accuracy: 0.2840
Epoch 6/10
105/105 [==============================] - 45s 418ms/step - loss: 0.8468 - accuracy: 0.6968 - val_loss: 2.4766 - val_accuracy: 0.2768
Epoch 7/10
105/105 [==============================] - 46s 422ms/step - loss: 0.7311 - accuracy: 0.7394 - val_loss: 2.7377 - val_accuracy: 0.2900
Epoch 8/10
105/105 [==============================] - 46s 422ms/step - loss: 0.5902 - accuracy: 0.7928 - val_loss: 2.7308 - val_accuracy: 0.2721
Epoch 9/10
105/105 [==============================] - 45s 417ms/step - loss: 0.4384 - accuracy: 0.8456 - val_loss: 3.4322 - val_accuracy: 0.2637
Epoch 10/10
105/105 [==============================] - 46s 418ms/step - loss: 0.3729 - accuracy: 0.8739 - val_loss: 2.8680 - val_accuracy: 0.3007
<keras.callbacks.History at 0x7f0d8196b350>
'''
  • There is some improvement, but it's still disappointing: validation accuracy tops out around 30% (one untried lever, augmentation, is sketched below).
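On-the-fly data augmentation is not tried in this post. A sketch using Keras preprocessing layers (real Keras API, found under tf.keras.layers.experimental.preprocessing on TF <= 2.5; the specific transforms and the model2_aug name are my picks):

from tensorflow.keras.layers.experimental.preprocessing import RandomFlip, RandomRotation

augmentation = Sequential([
    RandomFlip('horizontal'),   # street snaps are roughly left/right symmetric
    RandomRotation(0.05),       # mild rotation jitter
])

# Wrap the existing model; the augmentation layers are active only during training.
model2_aug = Sequential([augmentation, model2])
model2_aug.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])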

Transfer learning

  • To raise performance substantially, we move to transfer learning.
  • We use ResNet: since it is already trained on ImageNet, it extracts useful features out of the box.
  • We attach a fully connected head configured for our task.

Loading ResNet

# include_top=False drops ResNet50's fully connected classifier head
resnet = ResNet50(weights='imagenet', include_top=False)

# Freeze the ResNet50 layers so their parameters are not trained:
# even when error gradients propagate back, these weights are not updated.
for layer in resnet.layers:
    layer.trainable = False

# Attach a new fully connected head
x = resnet.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(7, activation='softmax')(x)  # softmax for single-label multiclass output
model3 = Model(resnet.input, predictions)
model3.summary()
'''
Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            [(None, None, None,  0
__________________________________________________________________________________________________
conv1_pad (ZeroPadding2D)       (None, None, None, 3 0           input_1[0][0]
__________________________________________________________________________________________________
conv1_conv (Conv2D)             (None, None, None, 6 9472        conv1_pad[0][0]
__________________________________________________________________________________________________
...
(the standard ResNet50 conv2–conv5 residual blocks, omitted here for length)
...
__________________________________________________________________________________________________
conv5_block3_out (Activation)   (None, None, None, 2 0           conv5_block3_add[0][0]
__________________________________________________________________________________________________
global_average_pooling2d (Globa (None, 2048)         0           conv5_block3_out[0][0]
__________________________________________________________________________________________________
dense_5 (Dense)                 (None, 1024)         2098176     global_average_pooling2d[0][0]
__________________________________________________________________________________________________
dense_6 (Dense)                 (None, 7)            7175        dense_5[0][0]
==================================================================================================
Total params: 25,693,063
Trainable params: 2,105,351
Non-trainable params: 23,587,712
__________________________________________________________________________________________________
'''
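With the backbone frozen, only the new head trains: GlobalAveragePooling's 2,048 features into Dense(1024) is 2,048 × 1,024 + 1,024 = 2,098,176 weights, plus 1,024 × 7 + 7 = 7,175 for the output layer, which together give the 2,105,351 trainable parameters reported above.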

Compile & fit

model3.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

model3.fit(tr, validation_data=val, epochs=20)
'''
Epoch 1/20
105/105 [==============================] - 46s 375ms/step - loss: 2.2210 - accuracy: 0.3408 - val_loss: 1.6776 - val_accuracy: 0.3198
Epoch 2/20
105/105 [==============================] - 38s 344ms/step - loss: 1.2494 - accuracy: 0.5277 - val_loss: 1.6757 - val_accuracy: 0.3544
Epoch 3/20
105/105 [==============================] - 37s 341ms/step - loss: 1.1146 - accuracy: 0.5817 - val_loss: 1.7384 - val_accuracy: 0.3735
Epoch 4/20
105/105 [==============================] - 38s 342ms/step - loss: 1.0167 - accuracy: 0.6187 - val_loss: 1.6563 - val_accuracy: 0.3735
Epoch 5/20
105/105 [==============================] - 37s 340ms/step - loss: 0.9401 - accuracy: 0.6452 - val_loss: 1.6776 - val_accuracy: 0.3819
Epoch 6/20
105/105 [==============================] - 38s 343ms/step - loss: 0.8510 - accuracy: 0.6878 - val_loss: 1.7892 - val_accuracy: 0.3580
Epoch 7/20
105/105 [==============================] - 38s 346ms/step - loss: 0.7629 - accuracy: 0.7287 - val_loss: 1.7751 - val_accuracy: 0.3926
Epoch 8/20
105/105 [==============================] - 38s 343ms/step - loss: 0.7210 - accuracy: 0.7385 - val_loss: 1.9225 - val_accuracy: 0.3663
Epoch 9/20
105/105 [==============================] - 38s 342ms/step - loss: 0.6344 - accuracy: 0.7713 - val_loss: 2.2290 - val_accuracy: 0.2995
Epoch 10/20
105/105 [==============================] - 38s 345ms/step - loss: 0.6084 - accuracy: 0.7767 - val_loss: 1.9812 - val_accuracy: 0.3890
Epoch 11/20
105/105 [==============================] - 38s 344ms/step - loss: 0.5343 - accuracy: 0.8151 - val_loss: 2.0343 - val_accuracy: 0.3962
Epoch 12/20
105/105 [==============================] - 38s 343ms/step - loss: 0.4851 - accuracy: 0.8277 - val_loss: 2.4881 - val_accuracy: 0.3425
Epoch 13/20
105/105 [==============================] - 38s 346ms/step - loss: 0.3974 - accuracy: 0.8715 - val_loss: 2.3355 - val_accuracy: 0.3687
Epoch 14/20
105/105 [==============================] - 38s 344ms/step - loss: 0.3552 - accuracy: 0.8882 - val_loss: 2.3268 - val_accuracy: 0.3699
Epoch 15/20
105/105 [==============================] - 38s 347ms/step - loss: 0.2798 - accuracy: 0.9198 - val_loss: 2.2989 - val_accuracy: 0.3783
Epoch 16/20
105/105 [==============================] - 38s 347ms/step - loss: 0.2640 - accuracy: 0.9234 - val_loss: 2.6399 - val_accuracy: 0.3162
Epoch 17/20
105/105 [==============================] - 38s 347ms/step - loss: 0.2391 - accuracy: 0.9338 - val_loss: 2.4771 - val_accuracy: 0.3926
Epoch 18/20
105/105 [==============================] - 38s 348ms/step - loss: 0.2017 - accuracy: 0.9466 - val_loss: 2.7124 - val_accuracy: 0.3508
Epoch 19/20
105/105 [==============================] - 38s 348ms/step - loss: 0.1735 - accuracy: 0.9550 - val_loss: 3.0435 - val_accuracy: 0.3258
Epoch 20/20
105/105 [==============================] - 38s 349ms/step - loss: 0.1267 - accuracy: 0.9744 - val_loss: 2.9719 - val_accuracy: 0.3508
<keras.callbacks.History at 0x7f0d08ed2b50>
'''
  • Even with ResNet, the performance is quite disappointing.
  • Fashion styles are inherently fuzzy; many of these images would be ambiguous even to a person, which likely explains part of it.
  • We'll improve things with hyperparameter tuning and by crawling more data. One concrete suspect, addressed below, is that the images were never run through ResNet's preprocess_input.
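The final model below is fed tr_resnet / val_resnet, which the post never defines. Presumably they are the same directories reloaded with ResNet50's preprocess_input applied, at a smaller batch size (201 steps per epoch suggests batch_size=16). A sketch under those assumptions; the load_for_resnet name is mine:

def load_for_resnet(ds_dir):
    ds = tf.keras.preprocessing.image_dataset_from_directory(
        ds_dir,
        labels='inferred',
        label_mode='categorical',
        class_names=class_list,
        batch_size=16,  # assumption: 201 steps/epoch below implies a batch smaller than the default 32
        seed=6,
    )
    # ResNet50's ImageNet preprocessing: channel reordering and per-channel mean subtraction
    return ds.map(lambda x, y: (preprocess_input(x), y))

tr_resnet = load_for_resnet('/content/drive/MyDrive/style/tr')
val_resnet = load_for_resnet('/content/drive/MyDrive/style/val')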

Final model

# include_top=False drops ResNet50's fully connected classifier head
resnet = ResNet50(weights='imagenet', include_top=False)

# Freeze the ResNet50 layers so their parameters are not trained:
# even when error gradients propagate back, these weights are not updated.
for layer in resnet.layers:
    layer.trainable = False

# Attach a new fully connected head
x = resnet.output
x = GlobalAveragePooling2D()(x)
x = Dense(512, activation='relu')(x)
predictions = Dense(7, activation='softmax')(x)  # softmax for single-label multiclass output
model4 = Model(resnet.input, predictions)


model4.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

model4.fit(tr_resnet, validation_data=val_resnet, epochs=50)
'''
Epoch 1/50
201/201 [==============================] - 52s 237ms/step - loss: 1.3936 - accuracy: 0.6659 - val_loss: 2.4469 - val_accuracy: 0.3980
Epoch 2/50
201/201 [==============================] - 48s 233ms/step - loss: 0.9588 - accuracy: 0.7011 - val_loss: 2.3411 - val_accuracy: 0.4261
Epoch 3/50
201/201 [==============================] - 48s 232ms/step - loss: 0.7219 - accuracy: 0.7590 - val_loss: 2.0455 - val_accuracy: 0.4963
Epoch 4/50
201/201 [==============================] - 47s 231ms/step - loss: 0.5783 - accuracy: 0.7930 - val_loss: 2.5070 - val_accuracy: 0.4774
Epoch 5/50
201/201 [==============================] - 47s 231ms/step - loss: 0.4938 - accuracy: 0.8141 - val_loss: 2.2769 - val_accuracy: 0.5226
Epoch 6/50
201/201 [==============================] - 48s 233ms/step - loss: 0.3883 - accuracy: 0.8512 - val_loss: 2.2302 - val_accuracy: 0.5269
Epoch 7/50
201/201 [==============================] - 48s 233ms/step - loss: 0.3071 - accuracy: 0.8851 - val_loss: 2.4522 - val_accuracy: 0.5372
Epoch 8/50
201/201 [==============================] - 47s 232ms/step - loss: 0.2603 - accuracy: 0.9072 - val_loss: 2.4566 - val_accuracy: 0.5641
Epoch 9/50
201/201 [==============================] - 48s 232ms/step - loss: 0.2177 - accuracy: 0.9181 - val_loss: 2.3829 - val_accuracy: 0.5800
Epoch 10/50
201/201 [==============================] - 48s 233ms/step - loss: 0.1818 - accuracy: 0.9390 - val_loss: 2.5546 - val_accuracy: 0.5824
Epoch 11/50
201/201 [==============================] - 47s 232ms/step - loss: 0.1644 - accuracy: 0.9405 - val_loss: 2.4072 - val_accuracy: 0.6068
Epoch 12/50
201/201 [==============================] - 47s 231ms/step - loss: 0.1276 - accuracy: 0.9586 - val_loss: 2.7116 - val_accuracy: 0.5916
Epoch 13/50
201/201 [==============================] - 47s 232ms/step - loss: 0.1266 - accuracy: 0.9536 - val_loss: 2.5981 - val_accuracy: 0.5867
Epoch 14/50
201/201 [==============================] - 48s 233ms/step - loss: 0.1004 - accuracy: 0.9686 - val_loss: 2.7737 - val_accuracy: 0.6001
Epoch 15/50
201/201 [==============================] - 47s 232ms/step - loss: 0.0950 - accuracy: 0.9698 - val_loss: 2.8071 - val_accuracy: 0.6111
Epoch 16/50
201/201 [==============================] - 47s 232ms/step - loss: 0.0684 - accuracy: 0.9819 - val_loss: 2.8287 - val_accuracy: 0.6074
Epoch 17/50
201/201 [==============================] - 48s 233ms/step - loss: 0.0605 - accuracy: 0.9860 - val_loss: 2.9294 - val_accuracy: 0.6105
Epoch 18/50
201/201 [==============================] - 48s 233ms/step - loss: 0.0933 - accuracy: 0.9670 - val_loss: 2.8661 - val_accuracy: 0.5806
Epoch 19/50
201/201 [==============================] - 48s 233ms/step - loss: 0.1882 - accuracy: 0.9352 - val_loss: 3.0379 - val_accuracy: 0.6062
Epoch 20/50
201/201 [==============================] - 48s 234ms/step - loss: 0.0676 - accuracy: 0.9767 - val_loss: 3.0459 - val_accuracy: 0.6013
Epoch 21/50
201/201 [==============================] - 47s 232ms/step - loss: 0.0452 - accuracy: 0.9885 - val_loss: 2.9608 - val_accuracy: 0.6245
Epoch 22/50
201/201 [==============================] - 47s 232ms/step - loss: 0.0355 - accuracy: 0.9925 - val_loss: 3.1396 - val_accuracy: 0.6160
Epoch 23/50
201/201 [==============================] - 47s 232ms/step - loss: 0.0224 - accuracy: 0.9966 - val_loss: 3.2252 - val_accuracy: 0.6221
Epoch 24/50
201/201 [==============================] - 47s 232ms/step - loss: 0.0190 - accuracy: 0.9975 - val_loss: 3.1913 - val_accuracy: 0.6190
Epoch 25/50
201/201 [==============================] - 48s 232ms/step - loss: 0.0127 - accuracy: 0.9988 - val_loss: 3.1632 - val_accuracy: 0.6221
Epoch 26/50
201/201 [==============================] - 48s 233ms/step - loss: 0.0115 - accuracy: 0.9994 - val_loss: 3.2275 - val_accuracy: 0.6252
Epoch 27/50
201/201 [==============================] - 48s 233ms/step - loss: 0.0103 - accuracy: 0.9991 - val_loss: 3.2298 - val_accuracy: 0.6270
Epoch 28/50
201/201 [==============================] - 48s 233ms/step - loss: 0.0071 - accuracy: 1.0000 - val_loss: 3.3208 - val_accuracy: 0.6245
Epoch 29/50
201/201 [==============================] - 48s 232ms/step - loss: 0.0064 - accuracy: 1.0000 - val_loss: 3.2902 - val_accuracy: 0.6239
Epoch 30/50
201/201 [==============================] - 48s 233ms/step - loss: 0.0049 - accuracy: 1.0000 - val_loss: 3.3328 - val_accuracy: 0.6276
Epoch 31/50
201/201 [==============================] - 48s 233ms/step - loss: 0.0042 - accuracy: 1.0000 - val_loss: 3.3634 - val_accuracy: 0.6264
Epoch 32/50
201/201 [==============================] - 48s 234ms/step - loss: 0.0038 - accuracy: 1.0000 - val_loss: 3.3842 - val_accuracy: 0.6258
Epoch 33/50
201/201 [==============================] - 48s 233ms/step - loss: 0.0033 - accuracy: 1.0000 - val_loss: 3.4189 - val_accuracy: 0.6264
Epoch 34/50
201/201 [==============================] - 48s 233ms/step - loss: 0.0030 - accuracy: 1.0000 - val_loss: 3.4772 - val_accuracy: 0.6245
Epoch 35/50
201/201 [==============================] - 48s 233ms/step - loss: 0.0025 - accuracy: 1.0000 - val_loss: 3.5002 - val_accuracy: 0.6276
Epoch 36/50
201/201 [==============================] - 48s 234ms/step - loss: 0.0024 - accuracy: 1.0000 - val_loss: 3.4747 - val_accuracy: 0.6252
Epoch 37/50
201/201 [==============================] - 48s 233ms/step - loss: 0.0022 - accuracy: 1.0000 - val_loss: 3.5602 - val_accuracy: 0.6227
Epoch 38/50
201/201 [==============================] - 48s 234ms/step - loss: 0.0018 - accuracy: 1.0000 - val_loss: 3.5909 - val_accuracy: 0.6258
Epoch 39/50
201/201 [==============================] - 48s 234ms/step - loss: 0.0016 - accuracy: 1.0000 - val_loss: 3.5889 - val_accuracy: 0.6276
Epoch 40/50
201/201 [==============================] - 48s 233ms/step - loss: 0.0015 - accuracy: 1.0000 - val_loss: 3.6606 - val_accuracy: 0.6258
Epoch 41/50
201/201 [==============================] - 48s 233ms/step - loss: 0.0013 - accuracy: 1.0000 - val_loss: 3.6457 - val_accuracy: 0.6270
Epoch 42/50
201/201 [==============================] - 48s 233ms/step - loss: 0.0012 - accuracy: 1.0000 - val_loss: 3.6941 - val_accuracy: 0.6233
Epoch 43/50
201/201 [==============================] - 48s 234ms/step - loss: 0.0011 - accuracy: 1.0000 - val_loss: 3.6990 - val_accuracy: 0.6252
Epoch 44/50
201/201 [==============================] - 48s 233ms/step - loss: 9.3891e-04 - accuracy: 1.0000 - val_loss: 3.7309 - val_accuracy: 0.6245
Epoch 45/50
201/201 [==============================] - 48s 234ms/step - loss: 8.6045e-04 - accuracy: 1.0000 - val_loss: 3.7734 - val_accuracy: 0.6258
Epoch 46/50
201/201 [==============================] - 48s 234ms/step - loss: 7.6829e-04 - accuracy: 1.0000 - val_loss: 3.7696 - val_accuracy: 0.6270
Epoch 47/50
201/201 [==============================] - 48s 234ms/step - loss: 6.8480e-04 - accuracy: 1.0000 - val_loss: 3.8050 - val_accuracy: 0.6264
Epoch 48/50
201/201 [==============================] - 48s 234ms/step - loss: 6.2723e-04 - accuracy: 1.0000 - val_loss: 3.8614 - val_accuracy: 0.6282
Epoch 49/50
201/201 [==============================] - 48s 234ms/step - loss: 5.6970e-04 - accuracy: 1.0000 - val_loss: 3.8768 - val_accuracy: 0.6270
Epoch 50/50
201/201 [==============================] - 48s 233ms/step - loss: 4.8837e-04 - accuracy: 1.0000 - val_loss: 3.9097 - val_accuracy: 0.6282
'''
  • After collecting twice as many images and some tuning, the final model reaches a validation accuracy of 0.6282. ("Tuning" here amounts to changing the batch size to 16 and widening the last hidden Dense layer to 512 units.)
  • For a 7-class problem, around 63% accuracy is acceptable for now, so I move on to building the Flask app.
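
As a quick sanity check on the frozen backbone (a hypothetical snippet, not part of the original notebook), counting trainable vs. non-trainable weights should show that only the new head is being trained:

```py
from tensorflow.keras import backend as K

# With the ResNet50 base frozen, only the new head's ~1.05M parameters
# (2048*512 + 512 for Dense(512), then 512*7 + 7 for the output) are trainable.
trainable = sum(K.count_params(w) for w in model4.trainable_weights)
frozen = sum(K.count_params(w) for w in model4.non_trainable_weights)
print(f'trainable: {trainable:,} / frozen: {frozen:,}')
```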

Men's Model

  • Some of the style classes do not apply to men, so I build one more model that uses only 5 classes.
  • The model above will serve as the women's model; this one will serve as the men's model.
  • Only the class list and the final Dense layer need to change.
tr = '/content/drive/MyDrive/style-for-man/tr'
val = '/content/drive/MyDrive/style-for-man/val'

# Declare the class list (men's styles only)
class_list = ['street', 'simple', 'classic', 'work', 'unique']

tr = tf.keras.preprocessing.image_dataset_from_directory(
    tr,
    labels="inferred",
    label_mode="categorical",
    class_names=class_list,
    batch_size=16,
    seed=6,
)

val = tf.keras.preprocessing.image_dataset_from_directory(
    val,
    labels="inferred",
    label_mode="categorical",
    class_names=class_list,
    batch_size=16,
    seed=6,
)


# include_top=False drops ResNet50's fully connected classifier head
resnet = ResNet50(weights='imagenet', include_top=False)

# Freeze the ResNet50 layers so their parameters are not trained:
# even though error gradients propagate back through them, their weights are never updated.
for layer in resnet.layers:
    layer.trainable = False

# Add a new fully connected classifier head
x = resnet.output
x = GlobalAveragePooling2D()(x)
x = Dense(512, activation='relu')(x)
predictions = Dense(5, activation='softmax')(x)  # softmax over the 5 men's classes
model5 = Model(resnet.input, predictions)


model5.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# tr_resnet / val_resnet here are assumed to be the men's tr / val datasets after preprocess_input
model5.fit(tr_resnet, validation_data=val_resnet, epochs=50)
'''
Epoch 1/50
147/147 [==============================] - 573s 4s/step - loss: 1.6177 - accuracy: 0.4276 - val_loss: 1.2229 - val_accuracy: 0.4979
Epoch 2/50
147/147 [==============================] - 35s 234ms/step - loss: 1.0625 - accuracy: 0.5788 - val_loss: 1.2018 - val_accuracy: 0.5138
Epoch 3/50
147/147 [==============================] - 35s 233ms/step - loss: 0.9303 - accuracy: 0.6269 - val_loss: 1.2605 - val_accuracy: 0.4954
Epoch 4/50
147/147 [==============================] - 35s 234ms/step - loss: 0.8242 - accuracy: 0.6895 - val_loss: 1.3284 - val_accuracy: 0.4636
Epoch 5/50
147/147 [==============================] - 35s 235ms/step - loss: 0.7462 - accuracy: 0.7070 - val_loss: 1.2411 - val_accuracy: 0.5540
Epoch 6/50
147/147 [==============================] - 35s 234ms/step - loss: 0.7164 - accuracy: 0.7279 - val_loss: 1.3001 - val_accuracy: 0.5079
Epoch 7/50
147/147 [==============================] - 35s 236ms/step - loss: 0.5964 - accuracy: 0.7802 - val_loss: 1.3164 - val_accuracy: 0.5322
Epoch 8/50
147/147 [==============================] - 36s 238ms/step - loss: 0.5529 - accuracy: 0.7879 - val_loss: 1.1718 - val_accuracy: 0.5833
Epoch 9/50
147/147 [==============================] - 35s 236ms/step - loss: 0.4930 - accuracy: 0.8194 - val_loss: 1.1980 - val_accuracy: 0.5707
Epoch 10/50
147/147 [==============================] - 36s 237ms/step - loss: 0.4095 - accuracy: 0.8607 - val_loss: 1.4238 - val_accuracy: 0.5573
Epoch 11/50
147/147 [==============================] - 35s 235ms/step - loss: 0.3650 - accuracy: 0.8675 - val_loss: 1.4846 - val_accuracy: 0.5766
Epoch 12/50
147/147 [==============================] - 36s 236ms/step - loss: 0.3184 - accuracy: 0.8901 - val_loss: 1.2839 - val_accuracy: 0.6008
Epoch 13/50
147/147 [==============================] - 35s 236ms/step - loss: 0.2548 - accuracy: 0.9182 - val_loss: 1.3996 - val_accuracy: 0.6008
Epoch 14/50
147/147 [==============================] - 36s 237ms/step - loss: 0.2158 - accuracy: 0.9361 - val_loss: 1.3655 - val_accuracy: 0.6109
Epoch 15/50
147/147 [==============================] - 36s 237ms/step - loss: 0.1739 - accuracy: 0.9532 - val_loss: 1.4418 - val_accuracy: 0.5992
Epoch 16/50
147/147 [==============================] - 36s 237ms/step - loss: 0.1543 - accuracy: 0.9591 - val_loss: 1.5574 - val_accuracy: 0.5958
Epoch 17/50
147/147 [==============================] - 35s 235ms/step - loss: 0.1305 - accuracy: 0.9681 - val_loss: 1.5601 - val_accuracy: 0.6167
Epoch 18/50
147/147 [==============================] - 35s 236ms/step - loss: 0.1492 - accuracy: 0.9519 - val_loss: 1.5431 - val_accuracy: 0.6184
Epoch 19/50
147/147 [==============================] - 35s 236ms/step - loss: 0.0932 - accuracy: 0.9791 - val_loss: 1.6528 - val_accuracy: 0.6234
Epoch 20/50
147/147 [==============================] - 36s 237ms/step - loss: 0.0657 - accuracy: 0.9872 - val_loss: 1.6701 - val_accuracy: 0.6343
Epoch 21/50
147/147 [==============================] - 35s 236ms/step - loss: 0.0567 - accuracy: 0.9923 - val_loss: 1.7974 - val_accuracy: 0.6134
Epoch 22/50
147/147 [==============================] - 36s 238ms/step - loss: 0.0438 - accuracy: 0.9923 - val_loss: 1.9007 - val_accuracy: 0.6167
Epoch 23/50
147/147 [==============================] - 36s 237ms/step - loss: 0.0254 - accuracy: 0.9996 - val_loss: 1.9115 - val_accuracy: 0.6259
Epoch 24/50
147/147 [==============================] - 36s 237ms/step - loss: 0.0195 - accuracy: 0.9996 - val_loss: 1.9319 - val_accuracy: 0.6226
Epoch 25/50
147/147 [==============================] - 35s 236ms/step - loss: 0.0147 - accuracy: 1.0000 - val_loss: 2.0391 - val_accuracy: 0.6117
Epoch 26/50
147/147 [==============================] - 35s 236ms/step - loss: 0.0129 - accuracy: 1.0000 - val_loss: 1.9929 - val_accuracy: 0.6176
Epoch 27/50
147/147 [==============================] - 36s 238ms/step - loss: 0.0090 - accuracy: 1.0000 - val_loss: 2.0481 - val_accuracy: 0.6243
Epoch 28/50
147/147 [==============================] - 35s 236ms/step - loss: 0.0078 - accuracy: 1.0000 - val_loss: 2.0887 - val_accuracy: 0.6192
Epoch 29/50
147/147 [==============================] - 35s 236ms/step - loss: 0.0068 - accuracy: 1.0000 - val_loss: 2.1211 - val_accuracy: 0.6192
Epoch 30/50
147/147 [==============================] - 36s 237ms/step - loss: 0.0063 - accuracy: 1.0000 - val_loss: 2.1128 - val_accuracy: 0.6251
Epoch 31/50
147/147 [==============================] - 35s 236ms/step - loss: 0.0052 - accuracy: 1.0000 - val_loss: 2.1878 - val_accuracy: 0.6109
Epoch 32/50
147/147 [==============================] - 36s 237ms/step - loss: 0.0045 - accuracy: 1.0000 - val_loss: 2.1911 - val_accuracy: 0.6209
Epoch 33/50
147/147 [==============================] - 35s 236ms/step - loss: 0.0040 - accuracy: 1.0000 - val_loss: 2.2364 - val_accuracy: 0.6184
Epoch 34/50
147/147 [==============================] - 35s 235ms/step - loss: 0.0035 - accuracy: 1.0000 - val_loss: 2.2366 - val_accuracy: 0.6151
Epoch 35/50
147/147 [==============================] - 35s 235ms/step - loss: 0.0031 - accuracy: 1.0000 - val_loss: 2.2635 - val_accuracy: 0.6176
Epoch 36/50
147/147 [==============================] - 35s 235ms/step - loss: 0.0029 - accuracy: 1.0000 - val_loss: 2.3078 - val_accuracy: 0.6184
Epoch 37/50
147/147 [==============================] - 35s 236ms/step - loss: 0.0027 - accuracy: 1.0000 - val_loss: 2.2749 - val_accuracy: 0.6201
Epoch 38/50
147/147 [==============================] - 35s 236ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 2.3161 - val_accuracy: 0.6268
Epoch 39/50
147/147 [==============================] - 35s 236ms/step - loss: 0.0021 - accuracy: 1.0000 - val_loss: 2.3644 - val_accuracy: 0.6184
Epoch 40/50
147/147 [==============================] - 35s 235ms/step - loss: 0.0019 - accuracy: 1.0000 - val_loss: 2.3657 - val_accuracy: 0.6209
Epoch 41/50
147/147 [==============================] - 36s 237ms/step - loss: 0.0018 - accuracy: 1.0000 - val_loss: 2.3881 - val_accuracy: 0.6209
Epoch 42/50
147/147 [==============================] - 36s 237ms/step - loss: 0.0015 - accuracy: 1.0000 - val_loss: 2.4116 - val_accuracy: 0.6151
Epoch 43/50
147/147 [==============================] - 36s 237ms/step - loss: 0.0015 - accuracy: 1.0000 - val_loss: 2.4386 - val_accuracy: 0.6201
Epoch 44/50
147/147 [==============================] - 35s 236ms/step - loss: 0.0013 - accuracy: 1.0000 - val_loss: 2.4513 - val_accuracy: 0.6192
Epoch 45/50
147/147 [==============================] - 36s 237ms/step - loss: 0.0012 - accuracy: 1.0000 - val_loss: 2.4841 - val_accuracy: 0.6184
Epoch 46/50
147/147 [==============================] - 36s 237ms/step - loss: 0.0011 - accuracy: 1.0000 - val_loss: 2.4559 - val_accuracy: 0.6209
Epoch 47/50
147/147 [==============================] - 35s 236ms/step - loss: 9.4228e-04 - accuracy: 1.0000 - val_loss: 2.4905 - val_accuracy: 0.6243
Epoch 48/50
147/147 [==============================] - 35s 236ms/step - loss: 8.5505e-04 - accuracy: 1.0000 - val_loss: 2.5463 - val_accuracy: 0.6201
Epoch 49/50
147/147 [==============================] - 35s 236ms/step - loss: 8.0738e-04 - accuracy: 1.0000 - val_loss: 2.5374 - val_accuracy: 0.6167
Epoch 50/50
147/147 [==============================] - 36s 237ms/step - loss: 7.0229e-04 - accuracy: 1.0000 - val_loss: 2.5565 - val_accuracy: 0.6226
'''
  • The men's model also lands around 0.62 validation accuracy (0.6226 at the final epoch), comparable to the women's model.

Saving the Models

  • To use them from Flask, save the final models in h5 format (a sketch of the load side follows the code below).
from keras.models import load_model  # load_model is for reading these files back on the Flask side

model4.save('model-for-female.h5')
model5.save('model-for-male.h5')
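
For reference, a minimal sketch of the load side in the Flask app (hypothetical: 'sample.jpg' stands in for an uploaded file, and inputs are assumed to go through ResNet50's preprocess_input exactly as during training):

```py
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input

# Load the saved women's model once, e.g. at Flask app startup
model = load_model('model-for-female.h5')
class_list = ['street', 'simple', 'classic', 'work', 'unique', 'sexy', 'girlish']

# Classify a single image; 'sample.jpg' is a placeholder for an uploaded file
img = image.load_img('sample.jpg', target_size=(256, 256))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
pred = model.predict(x)
print(class_list[int(np.argmax(pred[0]))])
```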