@RITIK-12
Created September 20, 2020 15:13
# imports assumed by this snippet (the original script likely defines them earlier)
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import AveragePooling2D, Dense, Dropout, Flatten, Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

# loading the MobileNetV2 network, leaving the head FC layer set off
baseModel = MobileNetV2(weights="imagenet", include_top=False,
                        input_tensor=Input(shape=(224, 224, 3)))

# constructing the head of the model that will be placed on top of the base model
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(7, 7))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(128, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(2, activation="softmax")(headModel)

# placing the head FC model on top of the base model
# (this will become the actual model we will train)
model = Model(inputs=baseModel.input, outputs=headModel)

# freezing all layers in the base model so they are not updated
# during the first training phase
for layer in baseModel.layers:
    layer.trainable = False

# compiling our model; INIT_LR and EPOCHS are hyperparameters
# defined elsewhere in the original script
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="binary_crossentropy", optimizer=opt,
              metrics=["accuracy"])
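A quick way to see why `pool_size=(7, 7)` is chosen: with `include_top=False` and a 224x224 input, MobileNetV2's final feature map is 7x7x1280, so a 7x7 average-pooling window with Keras's defaults (stride equal to the pool size, "valid" padding) collapses it to 1x1 before flattening. A minimal sketch of that arithmetic in pure Python (no TensorFlow required; the `pooled_size` helper is illustrative, not part of the gist):

```python
# sketch of the Keras pooling output-size formula under "valid" padding,
# where the default stride equals the pool size
def pooled_size(spatial, pool, stride=None):
    # output spatial dimension = floor((input - pool) / stride) + 1
    stride = stride or pool
    return (spatial - pool) // stride + 1

# the 7x7x1280 MobileNetV2 feature map collapses to 1x1x1280
assert pooled_size(7, 7) == 1

# Flatten therefore feeds Dense(128) a vector of length 1280
flattened = 1 * 1 * 1280
print(flattened)  # 1280
```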
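The `decay=INIT_LR / EPOCHS` argument relies on the legacy Keras time-based decay, which divides the initial learning rate by `1 + decay * iterations` at each batch update, so the rate falls smoothly over training. A hedged sketch of that schedule (the `INIT_LR` and `EPOCHS` values below are illustrative placeholders, not taken from the gist):

```python
# illustrative hyperparameter values; the gist defines these elsewhere
INIT_LR = 1e-4
EPOCHS = 20

def decayed_lr(initial_lr, decay, iterations):
    # legacy Keras time-based decay: lr_t = lr_0 / (1 + decay * t),
    # where t counts batch updates, not epochs
    return initial_lr / (1.0 + decay * iterations)

decay = INIT_LR / EPOCHS
print(decayed_lr(INIT_LR, decay, 0))  # starts at INIT_LR, then shrinks each step
```

With `decay = INIT_LR / EPOCHS` the schedule decays gently, since `decay` is tiny relative to the number of batch updates in a typical run.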