@louismullie
Created September 15, 2020 15:54
Import libraries (TensorFlow backend)
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
import numpy as np
Using TensorFlow backend.
Build the CNN
classifier = Sequential()
Convolution
classifier.add(Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)))
Use 32 feature detectors (3x3 filters) with an input shape for 64x64-pixel images with 3 color channels
Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))
Max pooling is applied with a 2x2 window, halving each spatial dimension
Add a 2nd convolutional layer with the same structure as the 1st to improve predictions (shapes are traced in the sketch below)
classifier.add(Conv2D(32, (3, 3), activation="relu"))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
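As a quick sanity check (a sketch added here, not part of the original notebook), the spatial dimensions can be traced by hand: 'valid' convolution with a 3x3 kernel shrinks each side by 2, and 2x2 pooling halves it.
def conv_out(n, kernel = 3):
    return n - kernel + 1  # 'valid' padding, stride 1
def pool_out(n, pool = 2):
    return n // pool  # non-overlapping pooling
n = pool_out(conv_out(64))  # 1st block: 64 -> 62 -> 31
n = pool_out(conv_out(n))   # 2nd block: 31 -> 29 -> 14
print(n)  # 14: the feature maps entering Flatten are 14x14x32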
Flattening
classifier.add(Flatten())
Full Connection
classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
The fully connected part of the network has 128 nodes in its hidden layer, using the rectifier (ReLU) activation function. The output layer has a single node with a sigmoid activation, since the outcome is binary.
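To make the layer sizes concrete (an illustrative calculation, verifiable against classifier.summary() below), the Flatten layer produces 14 * 14 * 32 = 6272 features, so the hidden Dense layer holds most of the model's weights.
print(14 * 14 * 32)      # 6272 features out of Flatten
print(6272 * 128 + 128)  # 802944 weights + biases in the hidden Dense layer
print(128 * 1 + 1)       # 129 parameters in the output layer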
Compile the CNN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
Adam is a variant of stochastic gradient descent, and binary cross-entropy is the logarithmic loss for binary outcomes
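For intuition, the loss can be written out directly in NumPy (a minimal sketch of binary cross-entropy, not the exact Keras implementation):
def binary_crossentropy(y_true, y_pred, eps = 1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred))
print(binary_crossentropy(np.array([1., 0.]), np.array([0.9, 0.2])))  # ~0.164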
Image Augmentation
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)
Apply several random transformations so the model sees more varied training data and generalizes better; the Keras documentation lists all available augmentation options
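To see what the generator actually produces, a few augmented variants of a single image can be plotted (an illustrative sketch; 'sample.jpeg' stands in for any local X-ray image):
from keras.preprocessing.image import load_img, img_to_array
img = img_to_array(load_img('sample.jpeg', target_size = (64, 64)))
img = img.reshape((1,) + img.shape)  # the generator expects a batch axis
for i, batch in enumerate(train_datagen.flow(img, batch_size = 1)):
    plt.subplot(1, 4, i + 1)
    plt.imshow(batch[0])  # rescale = 1./255 keeps values in [0, 1]
    if i == 3:
        break
plt.show()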
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('../input/chest_xray/chest_xray/train',
                                                 target_size = (64, 64),
                                                 batch_size = 32,
                                                 class_mode = 'binary')
test_set = test_datagen.flow_from_directory('../input/chest_xray/chest_xray/test',
                                            target_size = (64, 64),
                                            batch_size = 32,
                                            class_mode = 'binary')
Found 5216 images belonging to 2 classes.
Found 624 images belonging to 2 classes.
Target size is 64x64, matching the input shape the model was built with. Random samples of our images are drawn in batches of 32, and class mode is binary because the dependent variable is binary.
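The mapping from folder names to labels can be checked on the generator (the exact keys depend on the directory names in the dataset):
print(training_set.class_indices)  # e.g. {'NORMAL': 0, 'PNEUMONIA': 1}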
classifier.summary()
history = classifier.fit_generator(training_set,
                                   steps_per_epoch = 163,
                                   epochs = 10,
                                   validation_data = test_set,
                                   validation_steps = 624)
Epoch 1/10
163/163 [==============================] - 364s 2s/step - loss: 0.3763 - acc: 0.8342 - val_loss: 0.3915 - val_acc: 0.8189
Epoch 2/10
163/163 [==============================] - 337s 2s/step - loss: 0.2427 - acc: 0.8990 - val_loss: 0.5066 - val_acc: 0.7886
Epoch 3/10
163/163 [==============================] - 335s 2s/step - loss: 0.2004 - acc: 0.9187 - val_loss: 0.4748 - val_acc: 0.8223
Epoch 4/10
163/163 [==============================] - 334s 2s/step - loss: 0.2059 - acc: 0.9220 - val_loss: 0.6846 - val_acc: 0.7498
Epoch 5/10
163/163 [==============================] - 323s 2s/step - loss: 0.1673 - acc: 0.9362 - val_loss: 0.4290 - val_acc: 0.8382
Epoch 6/10
163/163 [==============================] - 319s 2s/step - loss: 0.1746 - acc: 0.9300 - val_loss: 0.5238 - val_acc: 0.7965
Epoch 7/10
163/163 [==============================] - 319s 2s/step - loss: 0.1616 - acc: 0.9390 - val_loss: 0.2872 - val_acc: 0.8911
Epoch 8/10
163/163 [==============================] - 318s 2s/step - loss: 0.1589 - acc: 0.9383 - val_loss: 0.3793 - val_acc: 0.8732
Epoch 9/10
163/163 [==============================] - 288s 2s/step - loss: 0.1478 - acc: 0.9431 - val_loss: 0.3417 - val_acc: 0.8671
Epoch 10/10
163/163 [==============================] - 314s 2s/step - loss: 0.1559 - acc: 0.9410 - val_loss: 0.3505 - val_acc: 0.8734
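A caveat on the arguments above (an observation added here, not from the original notebook): steps_per_epoch = 163 matches the 5216 / 32 training batches, but validation_steps also counts batches rather than images, so passing 624 makes the looping generator revisit the test images many times per epoch. One full pass would be:
import math
validation_steps = math.ceil(624 / 32)  # 20 batches covers all 624 test images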
#Accuracy
print(history.history.keys())
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Training set', 'Test set'], loc='upper left')
plt.show()
#Loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Training set', 'Test set'], loc='upper left')
plt.show()
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
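Finally, the overall test performance can be confirmed in a single call (a hedged sketch using the Keras 2 generator API; steps = 20 covers the 624 test images in batches of 32):
test_set.reset()  # rewind the generator to the first batch
scores = classifier.evaluate_generator(test_set, steps = 20)
print('Test loss: %.3f - test accuracy: %.3f' % (scores[0], scores[1]))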