Train for 78 steps
Epoch 1/5
78/78 [==============================] - 71s 916ms/step - loss: 0.3686 - accuracy: 0.7503
Epoch 2/5
78/78 [==============================] - 18s 227ms/step - loss: 0.2259 - accuracy: 0.7822
...
Epoch 5/5
78/78 [==============================] - 18s 230ms/step - loss: 0.1180 - accuracy: 0.7928
Train for 78 steps
Epoch 1/5
78/78 [==============================] - 80s 1s/step - loss: 0.3619 - accuracy: 0.7359
Epoch 2/5
78/78 [==============================] - 59s 756ms/step - loss: 0.2056 - accuracy: 0.7683
...
Epoch 5/5
78/78 [==============================] - 61s 788ms/step - loss: 0.1017 - accuracy: 0.7869
import time
import wandb
from wandb.keras import WandbCallback

# Time the full training run and log it to W&B
start = time.time()
model.fit(...,
          callbacks=[WandbCallback()])
training_time = time.time() - start
wandb.log({"training_time": training_time})
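Here WandbCallback streams the per-epoch metrics to Weights & Biases automatically, while the explicit wandb.log call records the end-to-end wall-clock training time as an extra logged value. This assumes wandb.init() has been called earlier in the script.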
import tensorflow as tf
from tensorflow.keras.optimizers import Adam

# Wrap the optimizer with dynamic loss scaling for mixed-precision training
opt = Adam(learning_rate=1e-4)
opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(opt, "dynamic")

model.compile(loss="categorical_crossentropy",
              optimizer=opt,
              metrics=["accuracy"])
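Loss scaling pairs with a float16 compute policy; on its own the wrapper only guards against underflowing gradients. A minimal sketch of enabling that policy via the same experimental API (assuming the TF 2.1-2.3 era runtime this namespace implies):

import tensorflow as tf

# Compute in float16 while keeping variables in float32.
# Set the policy before building the model so all layers pick it up.
policy = tf.keras.mixed_precision.experimental.Policy("mixed_float16")
tf.keras.mixed_precision.experimental.set_policy(policy)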
vecs.tsv (excerpt, truncated; one embedding vector per row):
-5.253666639328002930e-02 1.281377375125885010e-01 -9.412130713462829590e-02 1.372232139110565186e-01 6.853442639112472534e-02 7.089108973741531372e-02 -1.823896020650863647e-01 7.667284458875656128e-02 -1.285943537950515747e-01 -7.423347979784011841e-02 -5.199320614337921143e-02 -1.773469150066375732e-01 -8.214486762881278992e-03 -1.190248429775238037e-01 1.122813522815704346e-01 -3.958520293235778809e-02 1.830780878663063049e-02 9.424114972352981567e-02 1.042369827628135681e-01 -1.216302141547203064e-01 2.974569052457809448e-02 -4.232205450534820557e-02 7.888983935117721558e-02 -5.923315417021512985e-03 1.116444468498229980e-01 -1.251639425754547119e-01 -1.113891974091529846e-02 4.876513499766588211e-03 9.422960877418518066e-02 -2.878891490399837494e-02 1.237540990114212036e-01 -4.374166578054428101e-02 -1.296977978199720383e-02 6.766474992036819458e-02 1.500170398503541946e-02 -2.846897346898913383e-03 1.339649222791194916e-02 -1.788336485624313354e-01 -1.079873889684677124e-01 -1.320842802524566650e-01 1.
meta.tsv (excerpt; one label per row):
['Ariel_Sharon']
['George_W_Bush']
['George_W_Bush']
['Donald_Rumsfeld']
['Gerhard_Schroeder']
['George_W_Bush']
['George_W_Bush']
['George_W_Bush']
['Colin_Powell']
['Colin_Powell']
{
  "embeddings": [
    {
      "tensorName": "My tensor",
      "tensorShape": [
        1000,
        50
      ],
      "tensorPath": "https://gist.githubusercontent.com/sayakpaul/43ccb203cc35bcf8e255e76850923246/raw/1aee7f460095adf7ffbdf08e1f3e7921bfb03199/vecs.tsv",
      "metadataPath": "https://gist.githubusercontent.com/sayakpaul/79c094950b7d8920a5509dafba0c0041/raw/b6013bf85bd9ce03520ed74bd8f27f43d99d0ba3/meta.tsv"
    }
  ]
}
from tensorflow.keras.applications import MobileNetV2

# Load the MobileNetV2 model but exclude the classification layers
EXTRACTOR = MobileNetV2(weights="imagenet", include_top=False,
                        input_shape=(224, 224, 3))
# We will set it to both True and False
EXTRACTOR.trainable = True

# Construct the head of the model that will be placed on top of
# the base model
class_head = EXTRACTOR.output
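The snippet ends right after grabbing the extractor's output. A plausible continuation is sketched below; the pooling choice, dense width, dropout rate, and NUM_CLASSES are assumptions, not the original head:

from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D
from tensorflow.keras.models import Model

NUM_CLASSES = 5  # hypothetical class count

# Pool the 7x7x1280 feature maps and attach a small softmax classifier
class_head = GlobalAveragePooling2D()(class_head)
class_head = Dense(128, activation="relu")(class_head)
class_head = Dropout(0.5)(class_head)
class_head = Dense(NUM_CLASSES, activation="softmax")(class_head)

model = Model(inputs=EXTRACTOR.input, outputs=class_head)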
import tensorflow as tf

# Post-training dynamic-range quantization of the (non-QAT) Keras model
converter = tf.lite.TFLiteConverter.from_keras_model(non_qat_flower_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
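A minimal sketch of saving the resulting flatbuffer and running one inference with the TFLite interpreter; the file name and the (1, 224, 224, 3) input shape are assumptions:

import numpy as np
import tensorflow as tf

# Serialize the quantized flatbuffer to disk
with open("flower_model_quantized.tflite", "wb") as f:
    f.write(quantized_tflite_model)

# Run a single inference directly from the in-memory model
interpreter = tf.lite.Interpreter(model_content=quantized_tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

dummy_input = np.random.rand(1, 224, 224, 3).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()
predictions = interpreter.get_tensor(output_details[0]["index"])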