@malcolmgreaves
Created April 30, 2019 18:46
Key transfer learning insight: when adapting a pre-trained DL model, freeze the inner network and swap in new input & output layers, then train on the new data. The fine-tuning happens entirely w/in the _new_ (unfrozen!) I/O layers.
# https://github.com/intel-analytics/analytics-zoo#transfer-learning
from zoo.pipeline.api.net import Net
from zoo.pipeline.api.keras.layers import Dense, Flatten, Input
from zoo.pipeline.api.keras.models import Model


def transfer_learn_model(def_path: str, model_path: str) -> Model:
    # Load the pre-trained Caffe model (prototxt definition + weights).
    full_model = Net.load_caffe(def_path, model_path)
    # Create a new graph by removing the layers after pool5/drop_7x7_s1.
    model = full_model.new_graph(["pool5/drop_7x7_s1"])
    # Freeze the layers from the input up to pool4/3x3_s2, inclusive.
    model.freeze_up_to(["pool4/3x3_s2"])
    # Wrap the partially frozen model w/ new input & output layers.
    inputs = Input(name="input", shape=(3, 224, 224))
    inception = model.to_keras()(inputs)
    flatten = Flatten()(inception)
    logits = Dense(2)(flatten)  # new 2-class output head
    return Model(inputs, logits)
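
For completeness, here's how the returned model could be fine-tuned. This is a minimal sketch assuming Analytics Zoo's Keras-style compile/fit API; the init_nncontext call, the file paths, and the random placeholder data are illustrative assumptions, not part of the original gist.

import numpy as np
from zoo.common.nncontext import init_nncontext

# Analytics Zoo needs a Spark context before training (assumed setup).
sc = init_nncontext("transfer-learning-demo")

# Hypothetical GoogLeNet Caffe files; substitute your real paths.
model = transfer_learn_model("deploy.prototxt", "bvlc_googlenet.caffemodel")
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Placeholder data in NCHW layout. During fit, only the unfrozen layers
# (everything after pool4/3x3_s2, plus the new Flatten + Dense head) are updated.
x = np.random.rand(8, 3, 224, 224).astype("float32")
y = np.eye(2)[np.random.randint(0, 2, size=8)]  # one-hot labels for 2 classes
model.fit(x, y, batch_size=4, nb_epoch=1)

Note that freeze_up_to(["pool4/3x3_s2"]) leaves the Inception blocks between pool4 and pool5 trainable as well, so some of the pre-trained tail is fine-tuned alongside the new layers.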