{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "Quantization-Aware Training MobileNet v3.ipynb",
"provenance": [],
"authorship_tag": "ABX9TyMTfPyQyNUi9uGvWYNCmwoQ",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/NobuoTsukamoto/6b7557b4495eeb5e9d94fda5e98dc0a7/quantization-aware-training-mobilenet-v3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jpMclYshittY",
"colab_type": "text"
},
"source": [
"# Quantization-Aware Training MobileNet v3\n",
"\n",
"This notebook ...\n",
"\n",
"\n",
"* Quantizetion-Aware training of MobileNet v3 (Image classification model).\n",
"* Freeze graph and convert TF-Lite full integer quantizetion model.\n",
"\n"
]
},
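{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Note:* The freeze/convert step runs after training. A minimal sketch of the TF-Lite full-integer conversion is shown in the next cell, assuming a frozen graph at `/content/data/frozen_graph.pb` and typical MobileNet input/output tensor names; the paths, tensor names, and input stats are assumptions and must be adjusted to match the exported graph.\n"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# Hedged sketch only (TF 1.x TFLiteConverter API); paths and tensor names are assumptions.\n",
"import tensorflow as tf\n",
"\n",
"# Build a converter from the frozen, quantization-aware training graph.\n",
"converter = tf.lite.TFLiteConverter.from_frozen_graph(\n",
"    graph_def_file='/content/data/frozen_graph.pb',     # assumed path to the frozen graph\n",
"    input_arrays=['input'],                             # assumed input tensor name\n",
"    output_arrays=['MobilenetV3/Predictions/Softmax'],  # assumed output tensor name\n",
"    input_shapes={'input': [1, 224, 224, 3]})\n",
"\n",
"# Full-integer (uint8) inference using the ranges learned from the fake-quant ops.\n",
"converter.inference_type = tf.lite.constants.QUANTIZED_UINT8\n",
"converter.quantized_input_stats = {'input': (128.0, 128.0)}  # (mean, std_dev) of the uint8 input\n",
"\n",
"tflite_model = converter.convert()\n",
"with open('/content/data/mobilenet_v3_large_quant.tflite', 'wb') as f:\n",
"    f.write(tflite_model)"
],
"execution_count": 0,
"outputs": []
},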
{
"cell_type": "markdown",
"metadata": {
"id": "twyIUkwh921Z",
"colab_type": "text"
},
"source": [
"## Setup\n",
"\n",
"\n",
"* Works with Google Colab.\n",
"* Clone [tensorflow / models](https://github.com/tensorflow/models.git) repository.\n",
"* TensorFlow version: 1.x.\n",
"\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "EbkKh_jF-lDG",
"colab_type": "code",
"colab": {}
},
"source": [
"%tensorflow_version 1.x"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "PeZ1HmKQVuCa",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 306
},
"outputId": "eb4d61ad-6ef2-43f1-edf6-f00e9ada82eb"
},
"source": [
"!nvidia-smi"
],
"execution_count": 2,
"outputs": [
{
"output_type": "stream",
"text": [
"Sun Mar 22 12:16:25 2020 \n",
"+-----------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 440.64.00 Driver Version: 418.67 CUDA Version: 10.1 |\n",
"|-------------------------------+----------------------+----------------------+\n",
"| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n",
"| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n",
"|===============================+======================+======================|\n",
"| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |\n",
"| N/A 67C P8 12W / 70W | 0MiB / 15079MiB | 0% Default |\n",
"+-------------------------------+----------------------+----------------------+\n",
" \n",
"+-----------------------------------------------------------------------------+\n",
"| Processes: GPU Memory |\n",
"| GPU PID Type Process name Usage |\n",
"|=============================================================================|\n",
"| No running processes found |\n",
"+-----------------------------------------------------------------------------+\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "PQgR6OtECq0G",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 119
},
"outputId": "19e7fee0-52fc-4923-fa20-08d405f8a614"
},
"source": [
"!git clone https://github.com/tensorflow/models.git"
],
"execution_count": 3,
"outputs": [
{
"output_type": "stream",
"text": [
"Cloning into 'models'...\n",
"remote: Enumerating objects: 32893, done.\u001b[K\n",
"remote: Total 32893 (delta 0), reused 0 (delta 0), pack-reused 32893\u001b[K\n",
"Receiving objects: 100% (32893/32893), 511.79 MiB | 15.56 MiB/s, done.\n",
"Resolving deltas: 100% (21085/21085), done.\n",
"Checking out files: 100% (2437/2437), done.\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "Yn6O4zaIDOKU",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "8dbbd16d-46a7-4d00-a5c3-956bc858bd90"
},
"source": [
"%cd ./models/research/slim/"
],
"execution_count": 4,
"outputs": [
{
"output_type": "stream",
"text": [
"/content/models/research/slim\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WBISe_8e-oju",
"colab_type": "text"
},
"source": [
"## Downloading dataset and converting to TFRecord format\n",
"Download the flowers dataset and covnert to TFRecord.See [details](https://github.com/tensorflow/models/tree/master/research/slim#downloading-and-converting-to-tfrecord-format)."
]
},
{
"cell_type": "code",
"metadata": {
"id": "2Di_MYjaDjMG",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"outputId": "f0ba4ce3-e66c-4b6f-e370-697c9a3c8f77"
},
"source": [
"!python download_and_convert_data.py \\\n",
" --dataset_name=flowers \\\n",
" --dataset_dir=/content/data/flowers"
],
"execution_count": 5,
"outputs": [
{
"output_type": "stream",
"text": [
"WARNING:tensorflow:From /content/models/research/slim/datasets/download_and_convert_flowers.py:183: The name tf.gfile.Exists is deprecated. Please use tf.io.gfile.exists instead.\n",
"\n",
"W0322 12:17:17.998520 140354576320384 module_wrapper.py:139] From /content/models/research/slim/datasets/download_and_convert_flowers.py:183: The name tf.gfile.Exists is deprecated. Please use tf.io.gfile.exists instead.\n",
"\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/download_and_convert_flowers.py:184: The name tf.gfile.MakeDirs is deprecated. Please use tf.io.gfile.makedirs instead.\n",
"\n",
"W0322 12:17:17.998827 140354576320384 module_wrapper.py:139] From /content/models/research/slim/datasets/download_and_convert_flowers.py:184: The name tf.gfile.MakeDirs is deprecated. Please use tf.io.gfile.makedirs instead.\n",
"\n",
">> Downloading flower_photos.tgz 100.0%\n",
"Successfully downloaded flower_photos.tgz 228813984 bytes.\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/download_and_convert_flowers.py:57: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n",
"\n",
"W0322 12:17:23.751250 140354576320384 module_wrapper.py:139] From /content/models/research/slim/datasets/download_and_convert_flowers.py:57: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n",
"\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/download_and_convert_flowers.py:124: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n",
"\n",
"W0322 12:17:23.755553 140354576320384 module_wrapper.py:139] From /content/models/research/slim/datasets/download_and_convert_flowers.py:124: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n",
"\n",
"2020-03-22 12:17:23.756907: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n",
"2020-03-22 12:17:23.821391: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:17:23.821950: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2020-03-22 12:17:23.822329: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:17:23.824141: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2020-03-22 12:17:23.825678: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2020-03-22 12:17:23.825978: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2020-03-22 12:17:23.836263: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2020-03-22 12:17:23.868117: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2020-03-22 12:17:23.871581: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2020-03-22 12:17:23.871683: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:17:23.872230: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:17:23.872727: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0\n",
"2020-03-22 12:17:23.954965: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz\n",
"2020-03-22 12:17:23.957330: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x258aa00 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\n",
"2020-03-22 12:17:23.957360: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n",
"2020-03-22 12:17:24.108238: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:17:24.108880: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x258abc0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n",
"2020-03-22 12:17:24.108916: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla T4, Compute Capability 7.5\n",
"2020-03-22 12:17:24.109975: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:17:24.110466: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2020-03-22 12:17:24.110519: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:17:24.110539: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2020-03-22 12:17:24.110558: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2020-03-22 12:17:24.110576: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2020-03-22 12:17:24.110593: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2020-03-22 12:17:24.110610: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2020-03-22 12:17:24.110628: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2020-03-22 12:17:24.110680: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:17:24.111205: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:17:24.111672: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0\n",
"2020-03-22 12:17:24.115201: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:17:24.116262: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:\n",
"2020-03-22 12:17:24.116290: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0 \n",
"2020-03-22 12:17:24.116301: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N \n",
"2020-03-22 12:17:24.117442: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:17:24.117997: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:17:24.118496: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.\n",
"2020-03-22 12:17:24.118536: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14221 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/download_and_convert_flowers.py:130: The name tf.python_io.TFRecordWriter is deprecated. Please use tf.io.TFRecordWriter instead.\n",
"\n",
"W0322 12:17:24.119186 140354576320384 module_wrapper.py:139] From /content/models/research/slim/datasets/download_and_convert_flowers.py:130: The name tf.python_io.TFRecordWriter is deprecated. Please use tf.io.TFRecordWriter instead.\n",
"\n",
">> Converting image 1/3320 shard 0WARNING:tensorflow:From /content/models/research/slim/datasets/download_and_convert_flowers.py:139: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.\n",
"\n",
"W0322 12:17:24.119598 140354576320384 module_wrapper.py:139] From /content/models/research/slim/datasets/download_and_convert_flowers.py:139: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.\n",
"\n",
">> Converting image 3320/3320 shard 4\n",
"2020-03-22 12:17:30.176771: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:17:30.177310: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2020-03-22 12:17:30.177360: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:17:30.177373: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2020-03-22 12:17:30.177385: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2020-03-22 12:17:30.177397: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2020-03-22 12:17:30.177408: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2020-03-22 12:17:30.177419: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2020-03-22 12:17:30.177430: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2020-03-22 12:17:30.177477: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:17:30.177979: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:17:30.178432: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0\n",
"2020-03-22 12:17:30.178463: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:\n",
"2020-03-22 12:17:30.178472: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0 \n",
"2020-03-22 12:17:30.178480: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N \n",
"2020-03-22 12:17:30.178549: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:17:30.179086: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:17:30.179566: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14221 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)\n",
">> Converting image 350/350 shard 4\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/dataset_utils.py:176: The name tf.gfile.Open is deprecated. Please use tf.io.gfile.GFile instead.\n",
"\n",
"W0322 12:17:30.817924 140354576320384 module_wrapper.py:139] From /content/models/research/slim/datasets/dataset_utils.py:176: The name tf.gfile.Open is deprecated. Please use tf.io.gfile.GFile instead.\n",
"\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/download_and_convert_flowers.py:161: The name tf.gfile.Remove is deprecated. Please use tf.io.gfile.remove instead.\n",
"\n",
"W0322 12:17:30.818370 140354576320384 module_wrapper.py:139] From /content/models/research/slim/datasets/download_and_convert_flowers.py:161: The name tf.gfile.Remove is deprecated. Please use tf.io.gfile.remove instead.\n",
"\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/download_and_convert_flowers.py:164: The name tf.gfile.DeleteRecursively is deprecated. Please use tf.io.gfile.rmtree instead.\n",
"\n",
"W0322 12:17:30.842796 140354576320384 module_wrapper.py:139] From /content/models/research/slim/datasets/download_and_convert_flowers.py:164: The name tf.gfile.DeleteRecursively is deprecated. Please use tf.io.gfile.rmtree instead.\n",
"\n",
"\n",
"Finished converting the Flowers dataset!\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SVOuydyt_Iqb",
"colab_type": "text"
},
"source": [
"## Downloading pre-trainded model\n",
"\n",
"\n",
"* [Mobilenet V3 Imagenet Checkpoints](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet#mobilenet-v3-imagenet-checkpoints).\n",
"* Use full integer quantizetion (8-bit) checkpoint.\n",
"\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "gxU8AoqSEAwU",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 204
},
"outputId": "bd7f7c6d-a4ed-4ca1-eef7-1945a787d09b"
},
"source": [
"!wget https://storage.googleapis.com/mobilenet_v3/checkpoints/v3-large_224_1.0_uint8.tgz -P /content/data\n",
"!tar xf /content/data/v3-large_224_1.0_uint8.tgz -C /content/data"
],
"execution_count": 6,
"outputs": [
{
"output_type": "stream",
"text": [
"--2020-03-22 12:17:33-- https://storage.googleapis.com/mobilenet_v3/checkpoints/v3-large_224_1.0_uint8.tgz\n",
"Resolving storage.googleapis.com (storage.googleapis.com)... 74.125.68.128, 2404:6800:4003:c03::80\n",
"Connecting to storage.googleapis.com (storage.googleapis.com)|74.125.68.128|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 189990560 (181M) [application/x-compressed-tar]\n",
"Saving to: ‘/content/data/v3-large_224_1.0_uint8.tgz’\n",
"\n",
"v3-large_224_1.0_ui 100%[===================>] 181.19M 49.1MB/s in 3.7s \n",
"\n",
"2020-03-22 12:17:38 (49.1 MB/s) - ‘/content/data/v3-large_224_1.0_uint8.tgz’ saved [189990560/189990560]\n",
"\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "H6pruuqcAU21",
"colab_type": "text"
},
"source": [
"## Training model\n",
"\n",
"\n",
"* Set fine-tuning checkpoint with the *--checkpoint_path* and train all layers.\n",
"* *quantize_delay* is required for quantization-aware training.\n",
"* Parameters such as *max_number_of_steps*, *quantize_delay*, *learning_rate* and etc.. need to be adjusted.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "fxEVDx0KErwi",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"outputId": "8209430c-91a0-432e-ccc4-13be5afa1dde"
},
"source": [
"!python train_image_classifier.py \\\n",
" --train_dir=/content/data/train \\\n",
" --dataset_dir=/content/data/flowers \\\n",
" --dataset_name=flowers \\\n",
" --dataset_split_name=train \\\n",
" --model_name=mobilenet_v3_large \\\n",
" --max_number_of_steps=2000 \\\n",
" --batch_size=32 \\\n",
" --learning_rate=0.01 \\\n",
" --learning_rate_decay_type=fixed \\\n",
" --save_interval_secs=60 \\\n",
" --save_summaries_secs=60 \\\n",
" --log_every_n_steps=20 \\\n",
" --optimizer=sgd \\\n",
" --weight_decay=0.00004 \\\n",
" --quantize_delay=1000 \\\n",
" --train_image_size=224 \\\n",
" --checkpoint_path=/content/data/v3-large_224_1.0_uint8/ema/model-2790693 \\\n",
" --checkpoint_exclude_scopes=MobilenetV3/Logits"
],
"execution_count": 7,
"outputs": [
{
"output_type": "stream",
"text": [
"WARNING:tensorflow:From train_image_classifier.py:608: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.\n",
"\n",
"WARNING:tensorflow:From train_image_classifier.py:417: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.\n",
"\n",
"W0322 12:17:48.486773 139723617343360 module_wrapper.py:139] From train_image_classifier.py:417: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.\n",
"\n",
"WARNING:tensorflow:From train_image_classifier.py:417: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead.\n",
"\n",
"W0322 12:17:48.486971 139723617343360 module_wrapper.py:139] From train_image_classifier.py:417: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead.\n",
"\n",
"WARNING:tensorflow:From train_image_classifier.py:431: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please switch to tf.train.create_global_step\n",
"W0322 12:17:48.487811 139723617343360 deprecation.py:323] From train_image_classifier.py:431: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please switch to tf.train.create_global_step\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/flowers.py:74: The name tf.FixedLenFeature is deprecated. Please use tf.io.FixedLenFeature instead.\n",
"\n",
"W0322 12:17:48.492416 139723617343360 module_wrapper.py:139] From /content/models/research/slim/datasets/flowers.py:74: The name tf.FixedLenFeature is deprecated. Please use tf.io.FixedLenFeature instead.\n",
"\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/dataset_utils.py:192: The name tf.gfile.Exists is deprecated. Please use tf.io.gfile.exists instead.\n",
"\n",
"W0322 12:17:48.493160 139723617343360 module_wrapper.py:139] From /content/models/research/slim/datasets/dataset_utils.py:192: The name tf.gfile.Exists is deprecated. Please use tf.io.gfile.exists instead.\n",
"\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/dataset_utils.py:206: The name tf.gfile.Open is deprecated. Please use tf.io.gfile.GFile instead.\n",
"\n",
"W0322 12:17:48.493376 139723617343360 module_wrapper.py:139] From /content/models/research/slim/datasets/dataset_utils.py:206: The name tf.gfile.Open is deprecated. Please use tf.io.gfile.GFile instead.\n",
"\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/slim/python/slim/data/parallel_reader.py:246: string_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`.\n",
"W0322 12:17:48.495000 139723617343360 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/slim/python/slim/data/parallel_reader.py:246: string_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:277: input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`.\n",
"W0322 12:17:48.499635 139723617343360 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:277: input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:189: limit_epochs (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensors(tensor).repeat(num_epochs)`.\n",
"W0322 12:17:48.500559 139723617343360 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:189: limit_epochs (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensors(tensor).repeat(num_epochs)`.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:198: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"To construct input pipelines, use the `tf.data` module.\n",
"W0322 12:17:48.502167 139723617343360 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:198: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"To construct input pipelines, use the `tf.data` module.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:198: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"To construct input pipelines, use the `tf.data` module.\n",
"W0322 12:17:48.503301 139723617343360 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:198: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"To construct input pipelines, use the `tf.data` module.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/slim/python/slim/data/parallel_reader.py:95: TFRecordReader.__init__ (from tensorflow.python.ops.io_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.TFRecordDataset`.\n",
"W0322 12:17:48.509727 139723617343360 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/slim/python/slim/data/parallel_reader.py:95: TFRecordReader.__init__ (from tensorflow.python.ops.io_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.TFRecordDataset`.\n",
"WARNING:tensorflow:From /content/models/research/slim/preprocessing/inception_preprocessing.py:206: The name tf.summary.image is deprecated. Please use tf.compat.v1.summary.image instead.\n",
"\n",
"W0322 12:17:48.592390 139723617343360 module_wrapper.py:139] From /content/models/research/slim/preprocessing/inception_preprocessing.py:206: The name tf.summary.image is deprecated. Please use tf.compat.v1.summary.image instead.\n",
"\n",
"WARNING:tensorflow:From /content/models/research/slim/preprocessing/inception_preprocessing.py:148: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.\n",
"W0322 12:17:48.593942 139723617343360 deprecation.py:323] From /content/models/research/slim/preprocessing/inception_preprocessing.py:148: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"`seed2` arg is deprecated.Use sample_distorted_bounding_box_v2 instead.\n",
"WARNING:tensorflow:From /content/models/research/slim/preprocessing/inception_preprocessing.py:38: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n",
"\n",
"W0322 12:17:48.597685 139723617343360 module_wrapper.py:139] From /content/models/research/slim/preprocessing/inception_preprocessing.py:38: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n",
"\n",
"WARNING:tensorflow:From /content/models/research/slim/preprocessing/inception_preprocessing.py:230: The name tf.image.resize_images is deprecated. Please use tf.image.resize instead.\n",
"\n",
"W0322 12:17:48.600600 139723617343360 module_wrapper.py:139] From /content/models/research/slim/preprocessing/inception_preprocessing.py:230: The name tf.image.resize_images is deprecated. Please use tf.image.resize instead.\n",
"\n",
"WARNING:tensorflow:From train_image_classifier.py:477: batch (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).\n",
"W0322 12:17:48.631817 139723617343360 deprecation.py:323] From train_image_classifier.py:477: batch (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).\n",
"WARNING:tensorflow:From train_image_classifier.py:504: The name tf.get_collection is deprecated. Please use tf.compat.v1.get_collection instead.\n",
"\n",
"W0322 12:17:48.644028 139723617343360 module_wrapper.py:139] From train_image_classifier.py:504: The name tf.get_collection is deprecated. Please use tf.compat.v1.get_collection instead.\n",
"\n",
"WARNING:tensorflow:From train_image_classifier.py:504: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.\n",
"\n",
"W0322 12:17:48.644193 139723617343360 module_wrapper.py:139] From train_image_classifier.py:504: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.\n",
"\n",
"WARNING:tensorflow:From /content/models/research/slim/deployment/model_deploy.py:192: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.\n",
"\n",
"W0322 12:17:48.644462 139723617343360 module_wrapper.py:139] From /content/models/research/slim/deployment/model_deploy.py:192: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.\n",
"\n",
"WARNING:tensorflow:From /content/models/research/slim/deployment/model_deploy.py:192: The name tf.get_variable_scope is deprecated. Please use tf.compat.v1.get_variable_scope instead.\n",
"\n",
"W0322 12:17:48.644601 139723617343360 module_wrapper.py:139] From /content/models/research/slim/deployment/model_deploy.py:192: The name tf.get_variable_scope is deprecated. Please use tf.compat.v1.get_variable_scope instead.\n",
"\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/layers/python/layers/layers.py:1057: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please use `layer.__call__` method instead.\n",
"W0322 12:17:48.647220 139723617343360 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/layers/python/layers/layers.py:1057: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please use `layer.__call__` method instead.\n",
"WARNING:tensorflow:From train_image_classifier.py:500: softmax_cross_entropy (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.\n",
"Instructions for updating:\n",
"Use tf.losses.softmax_cross_entropy instead. Note that the order of the logits and labels arguments has been changed.\n",
"W0322 12:17:50.373321 139723617343360 deprecation.py:323] From train_image_classifier.py:500: softmax_cross_entropy (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.\n",
"Instructions for updating:\n",
"Use tf.losses.softmax_cross_entropy instead. Note that the order of the logits and labels arguments has been changed.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/losses/python/losses/loss_ops.py:373: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"\n",
"Future major versions of TensorFlow will allow gradients to flow\n",
"into the labels input on backprop by default.\n",
"\n",
"See `tf.nn.softmax_cross_entropy_with_logits_v2`.\n",
"\n",
"W0322 12:17:50.373662 139723617343360 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/losses/python/losses/loss_ops.py:373: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"\n",
"Future major versions of TensorFlow will allow gradients to flow\n",
"into the labels input on backprop by default.\n",
"\n",
"See `tf.nn.softmax_cross_entropy_with_logits_v2`.\n",
"\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/losses/python/losses/loss_ops.py:374: compute_weighted_loss (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.\n",
"Instructions for updating:\n",
"Use tf.losses.compute_weighted_loss instead.\n",
"W0322 12:17:50.416026 139723617343360 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/losses/python/losses/loss_ops.py:374: compute_weighted_loss (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.\n",
"Instructions for updating:\n",
"Use tf.losses.compute_weighted_loss instead.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/losses/python/losses/loss_ops.py:152: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Deprecated in favor of operator or tf.math.divide.\n",
"W0322 12:17:50.422946 139723617343360 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/losses/python/losses/loss_ops.py:152: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Deprecated in favor of operator or tf.math.divide.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/losses/python/losses/loss_ops.py:154: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Use tf.where in 2.0, which has the same broadcast rule as np.where\n",
"W0322 12:17:50.424487 139723617343360 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/losses/python/losses/loss_ops.py:154: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Use tf.where in 2.0, which has the same broadcast rule as np.where\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/losses/python/losses/loss_ops.py:121: add_loss (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.\n",
"Instructions for updating:\n",
"Use tf.losses.add_loss instead.\n",
"W0322 12:17:50.430288 139723617343360 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/losses/python/losses/loss_ops.py:121: add_loss (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.\n",
"Instructions for updating:\n",
"Use tf.losses.add_loss instead.\n",
"WARNING:tensorflow:From train_image_classifier.py:516: The name tf.summary.histogram is deprecated. Please use tf.compat.v1.summary.histogram instead.\n",
"\n",
"W0322 12:17:50.430702 139723617343360 module_wrapper.py:139] From train_image_classifier.py:516: The name tf.summary.histogram is deprecated. Please use tf.compat.v1.summary.histogram instead.\n",
"\n",
"WARNING:tensorflow:From train_image_classifier.py:517: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.\n",
"\n",
"W0322 12:17:50.431751 139723617343360 module_wrapper.py:139] From train_image_classifier.py:517: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.\n",
"\n",
"INFO:tensorflow:Skipping MobilenetV3/Conv/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:54.016806 139723617343360 quantize.py:166] Skipping MobilenetV3/Conv/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:17:54.596524 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:17:55.007453 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:17:55.256063 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_6/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:55.349729 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_6/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_6/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:55.438431 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_6/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_7/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:55.591721 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_7/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_7/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:55.700806 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_7/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_8/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:55.799128 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_8/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_8/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:55.902909 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_8/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_9/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:55.996863 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_9/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_9/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:56.098905 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_9/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_10/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:56.191020 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_10/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_10/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:56.281291 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_10/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:17:56.430202 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_11/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:56.576476 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_11/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_11/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:56.668930 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_11/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:17:56.813037 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_12/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:56.910184 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_12/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_12/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:57.004487 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_12/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:17:57.146528 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_13/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:57.299288 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_13/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_13/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:57.391772 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_13/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:17:57.534725 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_14/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:57.630144 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_14/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_14/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:57.721001 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_14/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:17:57.866468 139723617343360 quantize.py:166] Skipping MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/Conv_1/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:57.960651 139723617343360 quantize.py:166] Skipping MobilenetV3/Conv_1/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/Conv_2/hard_swish/add, because its followed by an activation.\n",
"I0322 12:17:58.055490 139723617343360 quantize.py:166] Skipping MobilenetV3/Conv_2/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/Conv/hard_swish/add\n",
"I0322 12:17:58.169962 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/Conv/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv/hard_swish/mul\n",
"I0322 12:17:58.170114 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv/hard_swish/mul_1\n",
"I0322 12:17:58.211442 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/add\n",
"I0322 12:17:58.252216 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/mul\n",
"I0322 12:17:58.252376 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_3/squeeze_excite/mul\n",
"I0322 12:17:58.294549 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_3/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/add\n",
"I0322 12:17:58.336792 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/mul\n",
"I0322 12:17:58.336942 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_4/squeeze_excite/mul\n",
"I0322 12:17:58.378393 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_4/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/add\n",
"I0322 12:17:58.420523 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/mul\n",
"I0322 12:17:58.420656 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_5/squeeze_excite/mul\n",
"I0322 12:17:58.460569 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_5/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_6/expand/hard_swish/add\n",
"I0322 12:17:58.499639 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_6/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/expand/hard_swish/mul\n",
"I0322 12:17:58.499790 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/expand/hard_swish/mul_1\n",
"I0322 12:17:58.539097 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/add\n",
"I0322 12:17:58.579032 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/mul\n",
"I0322 12:17:58.579171 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/mul_1\n",
"I0322 12:17:58.624828 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_7/expand/hard_swish/add\n",
"I0322 12:17:58.664321 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_7/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/expand/hard_swish/mul\n",
"I0322 12:17:58.664453 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/expand/hard_swish/mul_1\n",
"I0322 12:17:58.703580 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/add\n",
"I0322 12:17:58.743153 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/mul\n",
"I0322 12:17:58.743359 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/mul_1\n",
"I0322 12:17:58.782932 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_8/expand/hard_swish/add\n",
"I0322 12:17:58.827910 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_8/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/expand/hard_swish/mul\n",
"I0322 12:17:58.828042 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/expand/hard_swish/mul_1\n",
"I0322 12:17:58.870515 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/add\n",
"I0322 12:17:58.910633 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/mul\n",
"I0322 12:17:58.910788 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/mul_1\n",
"I0322 12:17:59.072129 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_9/expand/hard_swish/add\n",
"I0322 12:17:59.111970 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_9/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/expand/hard_swish/mul\n",
"I0322 12:17:59.112120 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/expand/hard_swish/mul_1\n",
"I0322 12:17:59.152556 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/add\n",
"I0322 12:17:59.194487 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/mul\n",
"I0322 12:17:59.194629 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/mul_1\n",
"I0322 12:17:59.238197 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_10/expand/hard_swish/add\n",
"I0322 12:17:59.277858 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_10/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/expand/hard_swish/mul\n",
"I0322 12:17:59.278023 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/expand/hard_swish/mul_1\n",
"I0322 12:17:59.317922 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/add\n",
"I0322 12:17:59.361415 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/mul\n",
"I0322 12:17:59.361557 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/mul_1\n",
"I0322 12:17:59.403212 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/add\n",
"I0322 12:17:59.445052 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/mul\n",
"I0322 12:17:59.445179 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/squeeze_excite/mul\n",
"I0322 12:17:59.484547 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_11/expand/hard_swish/add\n",
"I0322 12:17:59.524688 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_11/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/expand/hard_swish/mul\n",
"I0322 12:17:59.524836 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/expand/hard_swish/mul_1\n",
"I0322 12:17:59.564624 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/add\n",
"I0322 12:17:59.604512 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/mul\n",
"I0322 12:17:59.604640 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/mul_1\n",
"I0322 12:17:59.650107 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/add\n",
"I0322 12:17:59.689811 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/mul\n",
"I0322 12:17:59.689941 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/squeeze_excite/mul\n",
"I0322 12:17:59.730245 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_12/expand/hard_swish/add\n",
"I0322 12:17:59.770352 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_12/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/expand/hard_swish/mul\n",
"I0322 12:17:59.770507 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/expand/hard_swish/mul_1\n",
"I0322 12:17:59.810219 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/add\n",
"I0322 12:17:59.852531 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/mul\n",
"I0322 12:17:59.852662 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/mul_1\n",
"I0322 12:17:59.892094 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/add\n",
"I0322 12:17:59.932332 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/mul\n",
"I0322 12:17:59.932463 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/squeeze_excite/mul\n",
"I0322 12:17:59.972178 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_13/expand/hard_swish/add\n",
"I0322 12:18:00.012238 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_13/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/expand/hard_swish/mul\n",
"I0322 12:18:00.012368 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/expand/hard_swish/mul_1\n",
"I0322 12:18:00.054423 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/add\n",
"I0322 12:18:00.098598 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/mul\n",
"I0322 12:18:00.098745 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/mul_1\n",
"I0322 12:18:00.138899 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/add\n",
"I0322 12:18:00.178676 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/mul\n",
"I0322 12:18:00.178861 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/squeeze_excite/mul\n",
"I0322 12:18:00.218775 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_14/expand/hard_swish/add\n",
"I0322 12:18:00.260779 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_14/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/expand/hard_swish/mul\n",
"I0322 12:18:00.260907 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/expand/hard_swish/mul_1\n",
"I0322 12:18:00.301689 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/add\n",
"I0322 12:18:00.343857 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/mul\n",
"I0322 12:18:00.344004 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/mul_1\n",
"I0322 12:18:00.384099 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/add\n",
"I0322 12:18:00.432477 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/mul\n",
"I0322 12:18:00.432628 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/squeeze_excite/mul\n",
"I0322 12:18:00.475474 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/Conv_1/hard_swish/add\n",
"I0322 12:18:00.517334 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/Conv_1/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_1/hard_swish/mul\n",
"I0322 12:18:00.517466 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_1/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_1/hard_swish/mul_1\n",
"I0322 12:18:00.557596 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_1/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/Conv_2/hard_swish/add\n",
"I0322 12:18:00.597780 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/Conv_2/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_2/hard_swish/mul\n",
"I0322 12:18:00.597909 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_2/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_2/hard_swish/mul_1\n",
"I0322 12:18:00.642687 139723617343360 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_2/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv/depthwise/add_fold\n",
"I0322 12:18:00.690447 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_1/expand/add_fold\n",
"I0322 12:18:00.690999 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_1/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_1/depthwise/add_fold\n",
"I0322 12:18:00.691284 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_1/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_2/expand/add_fold\n",
"I0322 12:18:00.691764 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_2/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_2/depthwise/add_fold\n",
"I0322 12:18:00.692039 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_2/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_3/expand/add_fold\n",
"I0322 12:18:00.692515 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_3/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_3/depthwise/add_fold\n",
"I0322 12:18:00.692794 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_3/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_4/expand/add_fold\n",
"I0322 12:18:00.693263 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_4/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_4/depthwise/add_fold\n",
"I0322 12:18:00.693528 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_4/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_5/expand/add_fold\n",
"I0322 12:18:00.693995 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_5/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_5/depthwise/add_fold\n",
"I0322 12:18:00.694264 139723617343360 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_5/depthwise/add_fold\n",
"WARNING:tensorflow:From train_image_classifier.py:342: The name tf.train.GradientDescentOptimizer is deprecated. Please use tf.compat.v1.train.GradientDescentOptimizer instead.\n",
"\n",
"W0322 12:18:00.717312 139723617343360 module_wrapper.py:139] From train_image_classifier.py:342: The name tf.train.GradientDescentOptimizer is deprecated. Please use tf.compat.v1.train.GradientDescentOptimizer instead.\n",
"\n",
"WARNING:tensorflow:From train_image_classifier.py:402: The name tf.trainable_variables is deprecated. Please use tf.compat.v1.trainable_variables instead.\n",
"\n",
"W0322 12:18:00.718329 139723617343360 module_wrapper.py:139] From train_image_classifier.py:402: The name tf.trainable_variables is deprecated. Please use tf.compat.v1.trainable_variables instead.\n",
"\n",
"WARNING:tensorflow:From train_image_classifier.py:588: The name tf.summary.merge is deprecated. Please use tf.compat.v1.summary.merge instead.\n",
"\n",
"W0322 12:18:07.714879 139723617343360 module_wrapper.py:139] From train_image_classifier.py:588: The name tf.summary.merge is deprecated. Please use tf.compat.v1.summary.merge instead.\n",
"\n",
"WARNING:tensorflow:From train_image_classifier.py:382: The name tf.gfile.IsDirectory is deprecated. Please use tf.io.gfile.isdir instead.\n",
"\n",
"W0322 12:18:07.723523 139723617343360 module_wrapper.py:139] From train_image_classifier.py:382: The name tf.gfile.IsDirectory is deprecated. Please use tf.io.gfile.isdir instead.\n",
"\n",
"WARNING:tensorflow:From train_image_classifier.py:387: The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead.\n",
"\n",
"W0322 12:18:07.723768 139723617343360 module_wrapper.py:139] From train_image_classifier.py:387: The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead.\n",
"\n",
"INFO:tensorflow:Fine-tuning from /content/data/v3-large_224_1.0_uint8/ema/model-2790693\n",
"I0322 12:18:07.723870 139723617343360 train_image_classifier.py:387] Fine-tuning from /content/data/v3-large_224_1.0_uint8/ema/model-2790693\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/slim/python/slim/learning.py:742: Supervisor.__init__ (from tensorflow.python.training.supervisor) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please switch to tf.train.MonitoredTrainingSession\n",
"W0322 12:18:09.646677 139723617343360 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/slim/python/slim/learning.py:742: Supervisor.__init__ (from tensorflow.python.training.supervisor) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please switch to tf.train.MonitoredTrainingSession\n",
"2020-03-22 12:18:10.871417: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n",
"2020-03-22 12:18:10.904431: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:18:10.904994: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2020-03-22 12:18:10.905278: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:18:10.906634: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2020-03-22 12:18:10.908175: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2020-03-22 12:18:10.908479: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2020-03-22 12:18:10.909894: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2020-03-22 12:18:10.910577: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2020-03-22 12:18:10.913485: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2020-03-22 12:18:10.913589: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:18:10.914125: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:18:10.914597: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0\n",
"2020-03-22 12:18:10.919229: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz\n",
"2020-03-22 12:18:10.919396: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1bf09500 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\n",
"2020-03-22 12:18:10.919421: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n",
"2020-03-22 12:18:11.023292: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:18:11.023954: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1bf096c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n",
"2020-03-22 12:18:11.023987: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla T4, Compute Capability 7.5\n",
"2020-03-22 12:18:11.024147: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:18:11.024639: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2020-03-22 12:18:11.024690: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:18:11.024725: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2020-03-22 12:18:11.024749: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2020-03-22 12:18:11.024764: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2020-03-22 12:18:11.024777: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2020-03-22 12:18:11.024789: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2020-03-22 12:18:11.024802: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2020-03-22 12:18:11.024860: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:18:11.025369: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:18:11.025865: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0\n",
"2020-03-22 12:18:11.025927: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:18:11.027056: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:\n",
"2020-03-22 12:18:11.027082: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0 \n",
"2020-03-22 12:18:11.027092: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N \n",
"2020-03-22 12:18:11.027186: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:18:11.027691: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:18:11.028185: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.\n",
"2020-03-22 12:18:11.028223: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14221 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)\n",
"INFO:tensorflow:Restoring parameters from /content/data/v3-large_224_1.0_uint8/ema/model-2790693\n",
"I0322 12:18:20.054452 139723617343360 saver.py:1284] Restoring parameters from /content/data/v3-large_224_1.0_uint8/ema/model-2790693\n",
"INFO:tensorflow:Running local_init_op.\n",
"I0322 12:18:20.736298 139723617343360 session_manager.py:500] Running local_init_op.\n",
"INFO:tensorflow:Done running local_init_op.\n",
"I0322 12:18:21.156793 139723617343360 session_manager.py:502] Done running local_init_op.\n",
"INFO:tensorflow:Starting Session.\n",
"I0322 12:18:31.247903 139723617343360 learning.py:754] Starting Session.\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:18:31.680281 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:Starting Queues.\n",
"I0322 12:18:31.684038 139723617343360 learning.py:768] Starting Queues.\n",
"INFO:tensorflow:global_step/sec: 0\n",
"I0322 12:19:00.091489 139719975364352 supervisor.py:1099] global_step/sec: 0\n",
"2020-03-22 12:19:05.935694: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2020-03-22 12:19:11.241472: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"INFO:tensorflow:Recording summary at step 0.\n",
"I0322 12:19:18.144370 139719983757056 supervisor.py:1050] Recording summary at step 0.\n",
"INFO:tensorflow:global step 20: loss = 2.0146 (0.521 sec/step)\n",
"I0322 12:19:29.934173 139723617343360 learning.py:507] global step 20: loss = 2.0146 (0.521 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:19:31.680860 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:global_step/sec: 0.632465\n",
"I0322 12:19:38.038236 139719975364352 supervisor.py:1099] global_step/sec: 0.632465\n",
"INFO:tensorflow:Recording summary at step 24.\n",
"I0322 12:19:38.736067 139719983757056 supervisor.py:1050] Recording summary at step 24.\n",
"INFO:tensorflow:global step 40: loss = 1.5281 (0.515 sec/step)\n",
"I0322 12:19:46.834877 139723617343360 learning.py:507] global step 40: loss = 1.5281 (0.515 sec/step)\n",
"INFO:tensorflow:global step 60: loss = 1.6194 (0.527 sec/step)\n",
"I0322 12:19:57.582980 139723617343360 learning.py:507] global step 60: loss = 1.6194 (0.527 sec/step)\n",
"INFO:tensorflow:global step 80: loss = 1.4935 (0.518 sec/step)\n",
"I0322 12:20:08.472991 139723617343360 learning.py:507] global step 80: loss = 1.4935 (0.518 sec/step)\n",
"INFO:tensorflow:global step 100: loss = 1.2510 (0.539 sec/step)\n",
"I0322 12:20:19.190577 139723617343360 learning.py:507] global step 100: loss = 1.2510 (0.539 sec/step)\n",
"INFO:tensorflow:global step 120: loss = 1.1951 (0.522 sec/step)\n",
"I0322 12:20:30.055155 139723617343360 learning.py:507] global step 120: loss = 1.1951 (0.522 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:20:31.680510 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:global_step/sec: 1.67695\n",
"I0322 12:20:37.670377 139719975364352 supervisor.py:1099] global_step/sec: 1.67695\n",
"INFO:tensorflow:Recording summary at step 125.\n",
"I0322 12:20:38.361042 139719983757056 supervisor.py:1050] Recording summary at step 125.\n",
"INFO:tensorflow:global step 140: loss = 1.5936 (0.517 sec/step)\n",
"I0322 12:20:46.431108 139723617343360 learning.py:507] global step 140: loss = 1.5936 (0.517 sec/step)\n",
"INFO:tensorflow:global step 160: loss = 1.3357 (0.537 sec/step)\n",
"I0322 12:20:57.245831 139723617343360 learning.py:507] global step 160: loss = 1.3357 (0.537 sec/step)\n",
"INFO:tensorflow:global step 180: loss = 1.2001 (0.537 sec/step)\n",
"I0322 12:21:08.164265 139723617343360 learning.py:507] global step 180: loss = 1.2001 (0.537 sec/step)\n",
"INFO:tensorflow:global step 200: loss = 1.0904 (0.526 sec/step)\n",
"I0322 12:21:18.962381 139723617343360 learning.py:507] global step 200: loss = 1.0904 (0.526 sec/step)\n",
"INFO:tensorflow:global step 220: loss = 1.5167 (0.535 sec/step)\n",
"I0322 12:21:29.940928 139723617343360 learning.py:507] global step 220: loss = 1.5167 (0.535 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:21:31.680914 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:global_step/sec: 1.66647\n",
"I0322 12:21:37.677531 139719975364352 supervisor.py:1099] global_step/sec: 1.66647\n",
"INFO:tensorflow:Recording summary at step 224.\n",
"I0322 12:21:37.936845 139719983757056 supervisor.py:1050] Recording summary at step 224.\n",
"INFO:tensorflow:global step 240: loss = 1.0536 (0.529 sec/step)\n",
"I0322 12:21:46.398021 139723617343360 learning.py:507] global step 240: loss = 1.0536 (0.529 sec/step)\n",
"INFO:tensorflow:global step 260: loss = 1.1238 (0.566 sec/step)\n",
"I0322 12:21:57.279079 139723617343360 learning.py:507] global step 260: loss = 1.1238 (0.566 sec/step)\n",
"INFO:tensorflow:global step 280: loss = 1.1787 (0.497 sec/step)\n",
"I0322 12:22:07.963910 139723617343360 learning.py:507] global step 280: loss = 1.1787 (0.497 sec/step)\n",
"INFO:tensorflow:global step 300: loss = 0.9212 (0.561 sec/step)\n",
"I0322 12:22:18.710106 139723617343360 learning.py:507] global step 300: loss = 0.9212 (0.561 sec/step)\n",
"INFO:tensorflow:global step 320: loss = 0.8986 (0.536 sec/step)\n",
"I0322 12:22:29.656363 139723617343360 learning.py:507] global step 320: loss = 0.8986 (0.536 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:22:31.680727 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:Recording summary at step 325.\n",
"I0322 12:22:36.833202 139719983757056 supervisor.py:1050] Recording summary at step 325.\n",
"INFO:tensorflow:global_step/sec: 1.70086\n",
"I0322 12:22:37.646985 139719975364352 supervisor.py:1099] global_step/sec: 1.70086\n",
"INFO:tensorflow:global step 340: loss = 1.1903 (0.559 sec/step)\n",
"I0322 12:22:45.776008 139723617343360 learning.py:507] global step 340: loss = 1.1903 (0.559 sec/step)\n",
"INFO:tensorflow:global step 360: loss = 1.3301 (0.539 sec/step)\n",
"I0322 12:22:56.736656 139723617343360 learning.py:507] global step 360: loss = 1.3301 (0.539 sec/step)\n",
"INFO:tensorflow:global step 380: loss = 1.3426 (0.553 sec/step)\n",
"I0322 12:23:07.607213 139723617343360 learning.py:507] global step 380: loss = 1.3426 (0.553 sec/step)\n",
"INFO:tensorflow:global step 400: loss = 1.1245 (0.530 sec/step)\n",
"I0322 12:23:18.321112 139723617343360 learning.py:507] global step 400: loss = 1.1245 (0.530 sec/step)\n",
"INFO:tensorflow:global step 420: loss = 0.9156 (0.550 sec/step)\n",
"I0322 12:23:29.152065 139723617343360 learning.py:507] global step 420: loss = 0.9156 (0.550 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:23:31.680506 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/saver.py:963: remove_checkpoint (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Use standard file APIs to delete files with this prefix.\n",
"W0322 12:23:31.913877 139719966971648 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/saver.py:963: remove_checkpoint (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Use standard file APIs to delete files with this prefix.\n",
"INFO:tensorflow:global_step/sec: 1.64983\n",
"I0322 12:23:37.653227 139719975364352 supervisor.py:1099] global_step/sec: 1.64983\n",
"INFO:tensorflow:Recording summary at step 426.\n",
"I0322 12:23:38.253122 139719983757056 supervisor.py:1050] Recording summary at step 426.\n",
"INFO:tensorflow:global step 440: loss = 1.1069 (0.581 sec/step)\n",
"I0322 12:23:45.671354 139723617343360 learning.py:507] global step 440: loss = 1.1069 (0.581 sec/step)\n",
"INFO:tensorflow:global step 460: loss = 1.1336 (0.507 sec/step)\n",
"I0322 12:23:56.481825 139723617343360 learning.py:507] global step 460: loss = 1.1336 (0.507 sec/step)\n",
"INFO:tensorflow:global step 480: loss = 1.1235 (0.552 sec/step)\n",
"I0322 12:24:07.271728 139723617343360 learning.py:507] global step 480: loss = 1.1235 (0.552 sec/step)\n",
"INFO:tensorflow:global step 500: loss = 1.3603 (0.518 sec/step)\n",
"I0322 12:24:18.141536 139723617343360 learning.py:507] global step 500: loss = 1.3603 (0.518 sec/step)\n",
"INFO:tensorflow:global step 520: loss = 0.9997 (0.536 sec/step)\n",
"I0322 12:24:29.051048 139723617343360 learning.py:507] global step 520: loss = 0.9997 (0.536 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:24:31.680534 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:global_step/sec: 1.67005\n",
"I0322 12:24:38.130312 139719975364352 supervisor.py:1099] global_step/sec: 1.67005\n",
"INFO:tensorflow:Recording summary at step 526.\n",
"I0322 12:24:38.183154 139719983757056 supervisor.py:1050] Recording summary at step 526.\n",
"INFO:tensorflow:global step 540: loss = 1.0026 (0.535 sec/step)\n",
"I0322 12:24:45.703052 139723617343360 learning.py:507] global step 540: loss = 1.0026 (0.535 sec/step)\n",
"INFO:tensorflow:global step 560: loss = 1.0139 (0.546 sec/step)\n",
"I0322 12:24:56.613597 139723617343360 learning.py:507] global step 560: loss = 1.0139 (0.546 sec/step)\n",
"INFO:tensorflow:global step 580: loss = 1.0253 (0.542 sec/step)\n",
"I0322 12:25:07.433388 139723617343360 learning.py:507] global step 580: loss = 1.0253 (0.542 sec/step)\n",
"INFO:tensorflow:global step 600: loss = 1.1694 (0.532 sec/step)\n",
"I0322 12:25:18.289988 139723617343360 learning.py:507] global step 600: loss = 1.1694 (0.532 sec/step)\n",
"INFO:tensorflow:global step 620: loss = 0.9205 (0.548 sec/step)\n",
"I0322 12:25:29.095242 139723617343360 learning.py:507] global step 620: loss = 0.9205 (0.548 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:25:31.685158 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:global_step/sec: 1.66067\n",
"I0322 12:25:37.744801 139719975364352 supervisor.py:1099] global_step/sec: 1.66067\n",
"INFO:tensorflow:Recording summary at step 626.\n",
"I0322 12:25:38.123047 139719983757056 supervisor.py:1050] Recording summary at step 626.\n",
"INFO:tensorflow:global step 640: loss = 0.8643 (0.559 sec/step)\n",
"I0322 12:25:45.549693 139723617343360 learning.py:507] global step 640: loss = 0.8643 (0.559 sec/step)\n",
"INFO:tensorflow:global step 660: loss = 0.8429 (0.570 sec/step)\n",
"I0322 12:25:56.418452 139723617343360 learning.py:507] global step 660: loss = 0.8429 (0.570 sec/step)\n",
"INFO:tensorflow:global step 680: loss = 0.9655 (0.549 sec/step)\n",
"I0322 12:26:07.225487 139723617343360 learning.py:507] global step 680: loss = 0.9655 (0.549 sec/step)\n",
"INFO:tensorflow:global step 700: loss = 1.1405 (0.544 sec/step)\n",
"I0322 12:26:18.239173 139723617343360 learning.py:507] global step 700: loss = 1.1405 (0.544 sec/step)\n",
"INFO:tensorflow:global step 720: loss = 0.8867 (0.550 sec/step)\n",
"I0322 12:26:29.122539 139723617343360 learning.py:507] global step 720: loss = 0.8867 (0.550 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:26:31.681004 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:global_step/sec: 1.66896\n",
"I0322 12:26:37.662283 139719975364352 supervisor.py:1099] global_step/sec: 1.66896\n",
"INFO:tensorflow:Recording summary at step 725.\n",
"I0322 12:26:37.931366 139719983757056 supervisor.py:1050] Recording summary at step 725.\n",
"INFO:tensorflow:global step 740: loss = 0.8436 (0.541 sec/step)\n",
"I0322 12:26:46.083220 139723617343360 learning.py:507] global step 740: loss = 0.8436 (0.541 sec/step)\n",
"INFO:tensorflow:global step 760: loss = 0.9787 (0.541 sec/step)\n",
"I0322 12:26:56.843157 139723617343360 learning.py:507] global step 760: loss = 0.9787 (0.541 sec/step)\n",
"INFO:tensorflow:global step 780: loss = 0.8764 (0.536 sec/step)\n",
"I0322 12:27:07.661414 139723617343360 learning.py:507] global step 780: loss = 0.8764 (0.536 sec/step)\n",
"INFO:tensorflow:global step 800: loss = 0.8323 (0.541 sec/step)\n",
"I0322 12:27:18.403365 139723617343360 learning.py:507] global step 800: loss = 0.8323 (0.541 sec/step)\n",
"INFO:tensorflow:global step 820: loss = 0.8966 (0.545 sec/step)\n",
"I0322 12:27:29.301268 139723617343360 learning.py:507] global step 820: loss = 0.8966 (0.545 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:27:31.680549 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:global_step/sec: 1.66185\n",
"I0322 12:27:37.836333 139719975364352 supervisor.py:1099] global_step/sec: 1.66185\n",
"INFO:tensorflow:Recording summary at step 825.\n",
"I0322 12:27:38.088272 139719983757056 supervisor.py:1050] Recording summary at step 825.\n",
"INFO:tensorflow:global step 840: loss = 0.9505 (0.547 sec/step)\n",
"I0322 12:27:46.061163 139723617343360 learning.py:507] global step 840: loss = 0.9505 (0.547 sec/step)\n",
"INFO:tensorflow:global step 860: loss = 1.1259 (0.530 sec/step)\n",
"I0322 12:27:56.853638 139723617343360 learning.py:507] global step 860: loss = 1.1259 (0.530 sec/step)\n",
"INFO:tensorflow:global step 880: loss = 1.0371 (0.531 sec/step)\n",
"I0322 12:28:07.533261 139723617343360 learning.py:507] global step 880: loss = 1.0371 (0.531 sec/step)\n",
"INFO:tensorflow:global step 900: loss = 0.8417 (0.556 sec/step)\n",
"I0322 12:28:18.412436 139723617343360 learning.py:507] global step 900: loss = 0.8417 (0.556 sec/step)\n",
"INFO:tensorflow:global step 920: loss = 0.9766 (0.529 sec/step)\n",
"I0322 12:28:29.250743 139723617343360 learning.py:507] global step 920: loss = 0.9766 (0.529 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:28:31.680599 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:Recording summary at step 926.\n",
"I0322 12:28:38.263813 139719983757056 supervisor.py:1050] Recording summary at step 926.\n",
"INFO:tensorflow:global step 940: loss = 0.9576 (0.533 sec/step)\n",
"I0322 12:28:45.808130 139723617343360 learning.py:507] global step 940: loss = 0.9576 (0.533 sec/step)\n",
"INFO:tensorflow:global step 960: loss = 1.1103 (0.549 sec/step)\n",
"I0322 12:28:56.535969 139723617343360 learning.py:507] global step 960: loss = 1.1103 (0.549 sec/step)\n",
"INFO:tensorflow:global step 980: loss = 0.8657 (0.535 sec/step)\n",
"I0322 12:29:07.279445 139723617343360 learning.py:507] global step 980: loss = 0.8657 (0.535 sec/step)\n",
"INFO:tensorflow:global step 1000: loss = 0.9610 (0.617 sec/step)\n",
"I0322 12:29:19.124910 139723617343360 learning.py:507] global step 1000: loss = 0.9610 (0.617 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:29:31.680527 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:global step 1020: loss = 0.8508 (1.292 sec/step)\n",
"I0322 12:29:32.916930 139723617343360 learning.py:507] global step 1020: loss = 0.8508 (1.292 sec/step)\n",
"INFO:tensorflow:Recording summary at step 1020.\n",
"I0322 12:29:37.275141 139719983757056 supervisor.py:1050] Recording summary at step 1020.\n",
"INFO:tensorflow:global step 1040: loss = 1.2772 (0.622 sec/step)\n",
"I0322 12:29:49.545866 139723617343360 learning.py:507] global step 1040: loss = 1.2772 (0.622 sec/step)\n",
"INFO:tensorflow:global step 1060: loss = 1.0678 (0.628 sec/step)\n",
"I0322 12:30:02.119632 139723617343360 learning.py:507] global step 1060: loss = 1.0678 (0.628 sec/step)\n",
"INFO:tensorflow:global step 1080: loss = 0.9345 (0.631 sec/step)\n",
"I0322 12:30:14.731694 139723617343360 learning.py:507] global step 1080: loss = 0.9345 (0.631 sec/step)\n",
"INFO:tensorflow:global step 1100: loss = 1.0020 (0.636 sec/step)\n",
"I0322 12:30:27.387973 139723617343360 learning.py:507] global step 1100: loss = 1.0020 (0.636 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:30:31.680906 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:Recording summary at step 1107.\n",
"I0322 12:30:37.339639 139719983757056 supervisor.py:1050] Recording summary at step 1107.\n",
"INFO:tensorflow:global step 1120: loss = 1.1202 (0.624 sec/step)\n",
"I0322 12:30:45.220977 139723617343360 learning.py:507] global step 1120: loss = 1.1202 (0.624 sec/step)\n",
"INFO:tensorflow:global step 1140: loss = 1.1196 (0.628 sec/step)\n",
"I0322 12:30:57.750225 139723617343360 learning.py:507] global step 1140: loss = 1.1196 (0.628 sec/step)\n",
"INFO:tensorflow:global step 1160: loss = 0.8832 (0.622 sec/step)\n",
"I0322 12:31:10.215375 139723617343360 learning.py:507] global step 1160: loss = 0.8832 (0.622 sec/step)\n",
"INFO:tensorflow:global step 1180: loss = 0.9589 (0.624 sec/step)\n",
"I0322 12:31:22.748818 139723617343360 learning.py:507] global step 1180: loss = 0.9589 (0.624 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:31:31.681032 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:Recording summary at step 1194.\n",
"I0322 12:31:37.230088 139719983757056 supervisor.py:1050] Recording summary at step 1194.\n",
"INFO:tensorflow:global step 1200: loss = 1.1764 (0.620 sec/step)\n",
"I0322 12:31:40.731322 139723617343360 learning.py:507] global step 1200: loss = 1.1764 (0.620 sec/step)\n",
"INFO:tensorflow:global step 1220: loss = 0.9836 (0.630 sec/step)\n",
"I0322 12:31:53.298968 139723617343360 learning.py:507] global step 1220: loss = 0.9836 (0.630 sec/step)\n",
"INFO:tensorflow:global step 1240: loss = 0.7981 (0.636 sec/step)\n",
"I0322 12:32:05.912496 139723617343360 learning.py:507] global step 1240: loss = 0.7981 (0.636 sec/step)\n",
"INFO:tensorflow:global step 1260: loss = 0.9950 (0.620 sec/step)\n",
"I0322 12:32:18.475735 139723617343360 learning.py:507] global step 1260: loss = 0.9950 (0.620 sec/step)\n",
"INFO:tensorflow:global step 1280: loss = 0.9425 (0.627 sec/step)\n",
"I0322 12:32:31.046159 139723617343360 learning.py:507] global step 1280: loss = 0.9425 (0.627 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:32:31.681102 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:Recording summary at step 1281.\n",
"I0322 12:32:37.273113 139719983757056 supervisor.py:1050] Recording summary at step 1281.\n",
"INFO:tensorflow:global step 1300: loss = 0.9759 (0.639 sec/step)\n",
"I0322 12:32:49.062376 139723617343360 learning.py:507] global step 1300: loss = 0.9759 (0.639 sec/step)\n",
"INFO:tensorflow:global step 1320: loss = 0.8133 (0.612 sec/step)\n",
"I0322 12:33:01.674573 139723617343360 learning.py:507] global step 1320: loss = 0.8133 (0.612 sec/step)\n",
"INFO:tensorflow:global step 1340: loss = 0.8565 (0.627 sec/step)\n",
"I0322 12:33:14.200476 139723617343360 learning.py:507] global step 1340: loss = 0.8565 (0.627 sec/step)\n",
"INFO:tensorflow:global step 1360: loss = 0.9233 (0.615 sec/step)\n",
"I0322 12:33:26.804645 139723617343360 learning.py:507] global step 1360: loss = 0.9233 (0.615 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:33:31.680623 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:Recording summary at step 1368.\n",
"I0322 12:33:37.340802 139719983757056 supervisor.py:1050] Recording summary at step 1368.\n",
"INFO:tensorflow:global step 1380: loss = 0.9443 (0.611 sec/step)\n",
"I0322 12:33:44.595821 139723617343360 learning.py:507] global step 1380: loss = 0.9443 (0.611 sec/step)\n",
"INFO:tensorflow:global step 1400: loss = 0.8167 (0.634 sec/step)\n",
"I0322 12:33:57.186215 139723617343360 learning.py:507] global step 1400: loss = 0.8167 (0.634 sec/step)\n",
"INFO:tensorflow:global step 1420: loss = 0.9823 (0.637 sec/step)\n",
"I0322 12:34:09.707264 139723617343360 learning.py:507] global step 1420: loss = 0.9823 (0.637 sec/step)\n",
"INFO:tensorflow:global step 1440: loss = 0.9266 (0.620 sec/step)\n",
"I0322 12:34:22.262101 139723617343360 learning.py:507] global step 1440: loss = 0.9266 (0.620 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:34:31.684837 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:Recording summary at step 1455.\n",
"I0322 12:34:37.275510 139719983757056 supervisor.py:1050] Recording summary at step 1455.\n",
"INFO:tensorflow:global step 1460: loss = 0.9702 (0.622 sec/step)\n",
"I0322 12:34:40.145890 139723617343360 learning.py:507] global step 1460: loss = 0.9702 (0.622 sec/step)\n",
"INFO:tensorflow:global step 1480: loss = 0.9596 (0.623 sec/step)\n",
"I0322 12:34:52.688301 139723617343360 learning.py:507] global step 1480: loss = 0.9596 (0.623 sec/step)\n",
"INFO:tensorflow:global step 1500: loss = 0.7838 (0.621 sec/step)\n",
"I0322 12:35:05.277768 139723617343360 learning.py:507] global step 1500: loss = 0.7838 (0.621 sec/step)\n",
"INFO:tensorflow:global step 1520: loss = 0.7959 (0.615 sec/step)\n",
"I0322 12:35:17.811342 139723617343360 learning.py:507] global step 1520: loss = 0.7959 (0.615 sec/step)\n",
"INFO:tensorflow:global step 1540: loss = 0.9169 (0.631 sec/step)\n",
"I0322 12:35:30.386932 139723617343360 learning.py:507] global step 1540: loss = 0.9169 (0.631 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:35:31.681106 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:Recording summary at step 1542.\n",
"I0322 12:35:37.267453 139719983757056 supervisor.py:1050] Recording summary at step 1542.\n",
"INFO:tensorflow:global step 1560: loss = 0.7887 (0.630 sec/step)\n",
"I0322 12:35:48.265245 139723617343360 learning.py:507] global step 1560: loss = 0.7887 (0.630 sec/step)\n",
"INFO:tensorflow:global step 1580: loss = 0.7909 (0.634 sec/step)\n",
"I0322 12:36:00.797524 139723617343360 learning.py:507] global step 1580: loss = 0.7909 (0.634 sec/step)\n",
"INFO:tensorflow:global step 1600: loss = 1.0913 (0.612 sec/step)\n",
"I0322 12:36:13.361137 139723617343360 learning.py:507] global step 1600: loss = 1.0913 (0.612 sec/step)\n",
"INFO:tensorflow:global step 1620: loss = 0.7385 (0.655 sec/step)\n",
"I0322 12:36:25.931815 139723617343360 learning.py:507] global step 1620: loss = 0.7385 (0.655 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:36:31.680919 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:Recording summary at step 1629.\n",
"I0322 12:36:37.179951 139719983757056 supervisor.py:1050] Recording summary at step 1629.\n",
"INFO:tensorflow:global step 1640: loss = 0.8531 (0.623 sec/step)\n",
"I0322 12:36:43.867474 139723617343360 learning.py:507] global step 1640: loss = 0.8531 (0.623 sec/step)\n",
"INFO:tensorflow:global step 1660: loss = 1.0985 (0.618 sec/step)\n",
"I0322 12:36:56.346242 139723617343360 learning.py:507] global step 1660: loss = 1.0985 (0.618 sec/step)\n",
"INFO:tensorflow:global step 1680: loss = 0.9733 (0.628 sec/step)\n",
"I0322 12:37:08.858081 139723617343360 learning.py:507] global step 1680: loss = 0.9733 (0.628 sec/step)\n",
"INFO:tensorflow:global step 1700: loss = 0.9295 (0.633 sec/step)\n",
"I0322 12:37:21.492399 139723617343360 learning.py:507] global step 1700: loss = 0.9295 (0.633 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:37:31.680825 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:Recording summary at step 1716.\n",
"I0322 12:37:37.160613 139719983757056 supervisor.py:1050] Recording summary at step 1716.\n",
"INFO:tensorflow:global step 1720: loss = 0.8950 (0.625 sec/step)\n",
"I0322 12:37:39.419458 139723617343360 learning.py:507] global step 1720: loss = 0.8950 (0.625 sec/step)\n",
"INFO:tensorflow:global step 1740: loss = 0.8422 (0.651 sec/step)\n",
"I0322 12:37:51.940358 139723617343360 learning.py:507] global step 1740: loss = 0.8422 (0.651 sec/step)\n",
"INFO:tensorflow:global step 1760: loss = 0.9377 (0.630 sec/step)\n",
"I0322 12:38:04.699116 139723617343360 learning.py:507] global step 1760: loss = 0.9377 (0.630 sec/step)\n",
"INFO:tensorflow:global step 1780: loss = 0.8338 (0.640 sec/step)\n",
"I0322 12:38:17.276519 139723617343360 learning.py:507] global step 1780: loss = 0.8338 (0.640 sec/step)\n",
"INFO:tensorflow:global step 1800: loss = 0.9751 (0.645 sec/step)\n",
"I0322 12:38:29.902181 139723617343360 learning.py:507] global step 1800: loss = 0.9751 (0.645 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:38:31.680729 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:Recording summary at step 1803.\n",
"I0322 12:38:37.383373 139719983757056 supervisor.py:1050] Recording summary at step 1803.\n",
"INFO:tensorflow:global step 1820: loss = 0.8746 (0.615 sec/step)\n",
"I0322 12:38:47.785764 139723617343360 learning.py:507] global step 1820: loss = 0.8746 (0.615 sec/step)\n",
"INFO:tensorflow:global step 1840: loss = 0.8872 (0.636 sec/step)\n",
"I0322 12:39:00.321038 139723617343360 learning.py:507] global step 1840: loss = 0.8872 (0.636 sec/step)\n",
"INFO:tensorflow:global step 1860: loss = 0.9550 (0.613 sec/step)\n",
"I0322 12:39:12.878015 139723617343360 learning.py:507] global step 1860: loss = 0.9550 (0.613 sec/step)\n",
"INFO:tensorflow:global step 1880: loss = 0.9537 (0.640 sec/step)\n",
"I0322 12:39:25.446473 139723617343360 learning.py:507] global step 1880: loss = 0.9537 (0.640 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:39:31.680701 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:Recording summary at step 1890.\n",
"I0322 12:39:37.300180 139719983757056 supervisor.py:1050] Recording summary at step 1890.\n",
"INFO:tensorflow:global step 1900: loss = 0.7931 (0.633 sec/step)\n",
"I0322 12:39:43.340826 139723617343360 learning.py:507] global step 1900: loss = 0.7931 (0.633 sec/step)\n",
"INFO:tensorflow:global step 1920: loss = 0.9840 (0.610 sec/step)\n",
"I0322 12:39:55.863167 139723617343360 learning.py:507] global step 1920: loss = 0.9840 (0.610 sec/step)\n",
"INFO:tensorflow:global step 1940: loss = 1.0510 (0.616 sec/step)\n",
"I0322 12:40:08.392555 139723617343360 learning.py:507] global step 1940: loss = 1.0510 (0.616 sec/step)\n",
"INFO:tensorflow:global step 1960: loss = 0.9768 (0.628 sec/step)\n",
"I0322 12:40:20.882159 139723617343360 learning.py:507] global step 1960: loss = 0.9768 (0.628 sec/step)\n",
"INFO:tensorflow:Saving checkpoint to path /content/data/train/model.ckpt\n",
"I0322 12:40:31.681175 139719966971648 supervisor.py:1117] Saving checkpoint to path /content/data/train/model.ckpt\n",
"INFO:tensorflow:Recording summary at step 1977.\n",
"I0322 12:40:37.275098 139719983757056 supervisor.py:1050] Recording summary at step 1977.\n",
"INFO:tensorflow:global step 1980: loss = 0.8502 (0.632 sec/step)\n",
"I0322 12:40:38.870048 139723617343360 learning.py:507] global step 1980: loss = 0.8502 (0.632 sec/step)\n",
"INFO:tensorflow:global step 2000: loss = 0.9905 (0.626 sec/step)\n",
"I0322 12:40:51.350227 139723617343360 learning.py:507] global step 2000: loss = 0.9905 (0.626 sec/step)\n",
"INFO:tensorflow:Stopping Training.\n",
"I0322 12:40:51.351170 139723617343360 learning.py:777] Stopping Training.\n",
"INFO:tensorflow:Finished training! Saving model to disk.\n",
"I0322 12:40:51.351401 139723617343360 learning.py:785] Finished training! Saving model to disk.\n",
"/tensorflow-1.15.0/python3.6/tensorflow_core/python/summary/writer/writer.py:386: UserWarning: Attempting to use a closed FileWriter. The operation will be a noop unless the FileWriter is explicitly reopened.\n",
" warnings.warn(\"Attempting to use a closed FileWriter. \"\n"
],
"name": "stdout"
}
]
},
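    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qat_ckpt_check_note",
        "colab_type": "text"
      },
      "source": [
        "The training run above saves its checkpoints under `/content/data/train`. The next cell is an optional sanity check (a minimal sketch added for illustration, not part of the original run): it prints the newest checkpoint path so you can confirm the fine-tuned weights are in place before evaluating."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qat_ckpt_check_code",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Optional sanity check (illustrative sketch, not part of the original notebook):\n",
        "# print the newest checkpoint written by the quantization-aware fine-tuning run.\n",
        "# Assumes the same train_dir used above (/content/data/train).\n",
        "import tensorflow as tf\n",
        "\n",
        "print(tf.train.latest_checkpoint('/content/data/train'))"
      ],
      "execution_count": 0,
      "outputs": []
    },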
{
"cell_type": "markdown",
"metadata": {
"id": "nw2rTlHGFGsb",
"colab_type": "text"
},
"source": [
"## Evaluating performance of a model"
]
},
{
"cell_type": "code",
"metadata": {
"id": "wXyBQzbBcUhg",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"outputId": "1a2751bc-84b9-4f39-8615-8859fc16e74a"
},
"source": [
"!python eval_image_classifier.py \\\n",
" --checkpoint_path=/content/data/train \\\n",
" --eval_dir=/content/data/eval \\\n",
" --dataset_name=flowers \\\n",
" --dataset_split_name=validation \\\n",
" --dataset_dir=/content/data/flowers \\\n",
" --dataset_name=flowers \\\n",
" --model_name=mobilenet_v3_large \\\n",
" --eval_image_size=224 \\\n",
" --quantize"
],
"execution_count": 8,
"outputs": [
{
"output_type": "stream",
"text": [
"WARNING:tensorflow:From eval_image_classifier.py:203: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.\n",
"\n",
"WARNING:tensorflow:From eval_image_classifier.py:97: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.\n",
"\n",
"W0322 12:41:02.821039 140262624872320 module_wrapper.py:139] From eval_image_classifier.py:97: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.\n",
"\n",
"WARNING:tensorflow:From eval_image_classifier.py:97: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead.\n",
"\n",
"W0322 12:41:02.821225 140262624872320 module_wrapper.py:139] From eval_image_classifier.py:97: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead.\n",
"\n",
"WARNING:tensorflow:From eval_image_classifier.py:99: get_or_create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please switch to tf.train.get_or_create_global_step\n",
"W0322 12:41:02.821753 140262624872320 deprecation.py:323] From eval_image_classifier.py:99: get_or_create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please switch to tf.train.get_or_create_global_step\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/flowers.py:74: The name tf.FixedLenFeature is deprecated. Please use tf.io.FixedLenFeature instead.\n",
"\n",
"W0322 12:41:02.826384 140262624872320 module_wrapper.py:139] From /content/models/research/slim/datasets/flowers.py:74: The name tf.FixedLenFeature is deprecated. Please use tf.io.FixedLenFeature instead.\n",
"\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/dataset_utils.py:192: The name tf.gfile.Exists is deprecated. Please use tf.io.gfile.exists instead.\n",
"\n",
"W0322 12:41:02.827088 140262624872320 module_wrapper.py:139] From /content/models/research/slim/datasets/dataset_utils.py:192: The name tf.gfile.Exists is deprecated. Please use tf.io.gfile.exists instead.\n",
"\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/dataset_utils.py:206: The name tf.gfile.Open is deprecated. Please use tf.io.gfile.GFile instead.\n",
"\n",
"W0322 12:41:02.827285 140262624872320 module_wrapper.py:139] From /content/models/research/slim/datasets/dataset_utils.py:206: The name tf.gfile.Open is deprecated. Please use tf.io.gfile.GFile instead.\n",
"\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/slim/python/slim/data/parallel_reader.py:246: string_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`.\n",
"W0322 12:41:02.831531 140262624872320 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/slim/python/slim/data/parallel_reader.py:246: string_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:277: input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`.\n",
"W0322 12:41:02.835993 140262624872320 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:277: input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs)`. If `shuffle=False`, omit the `.shuffle(...)`.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:189: limit_epochs (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensors(tensor).repeat(num_epochs)`.\n",
"W0322 12:41:02.836213 140262624872320 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:189: limit_epochs (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.from_tensors(tensor).repeat(num_epochs)`.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:198: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"To construct input pipelines, use the `tf.data` module.\n",
"W0322 12:41:02.837444 140262624872320 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:198: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"To construct input pipelines, use the `tf.data` module.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:198: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"To construct input pipelines, use the `tf.data` module.\n",
"W0322 12:41:02.838301 140262624872320 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/input.py:198: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"To construct input pipelines, use the `tf.data` module.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/slim/python/slim/data/parallel_reader.py:95: TFRecordReader.__init__ (from tensorflow.python.ops.io_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.TFRecordDataset`.\n",
"W0322 12:41:02.844290 140262624872320 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/slim/python/slim/data/parallel_reader.py:95: TFRecordReader.__init__ (from tensorflow.python.ops.io_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.TFRecordDataset`.\n",
"WARNING:tensorflow:From /content/models/research/slim/preprocessing/inception_preprocessing.py:301: The name tf.image.resize_bilinear is deprecated. Please use tf.compat.v1.image.resize_bilinear instead.\n",
"\n",
"W0322 12:41:02.940559 140262624872320 module_wrapper.py:139] From /content/models/research/slim/preprocessing/inception_preprocessing.py:301: The name tf.image.resize_bilinear is deprecated. Please use tf.compat.v1.image.resize_bilinear instead.\n",
"\n",
"WARNING:tensorflow:From eval_image_classifier.py:143: batch (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).\n",
"W0322 12:41:02.943687 140262624872320 deprecation.py:323] From eval_image_classifier.py:143: batch (from tensorflow.python.training.input) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Queue-based input pipelines have been replaced by `tf.data`. Use `tf.data.Dataset.batch(batch_size)` (or `padded_batch(...)` if `dynamic_pad=True`).\n",
"INFO:tensorflow:Scale of 0 disables regularizer.\n",
"I0322 12:41:02.949968 140262624872320 regularizers.py:98] Scale of 0 disables regularizer.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/layers/python/layers/layers.py:1057: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please use `layer.__call__` method instead.\n",
"W0322 12:41:02.951961 140262624872320 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/layers/python/layers/layers.py:1057: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please use `layer.__call__` method instead.\n",
"INFO:tensorflow:Skipping MobilenetV3/Conv/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:04.758237 140262624872320 quantize.py:166] Skipping MobilenetV3/Conv/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:04.876559 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:04.942054 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:04.995185 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_6/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.012313 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_6/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_6/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.028946 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_6/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_7/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.057012 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_7/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_7/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.074282 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_7/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_8/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.091029 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_8/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_8/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.108630 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_8/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_9/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.125984 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_9/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_9/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.142882 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_9/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_10/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.159915 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_10/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_10/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.178629 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_10/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:05.207094 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_11/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.234778 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_11/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_11/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.251564 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_11/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:05.280092 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_12/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.296803 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_12/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_12/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.313646 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_12/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:05.341972 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_13/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.369795 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_13/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_13/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.388445 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_13/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:05.416997 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_14/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.434303 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_14/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_14/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.451233 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_14/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:05.479372 140262624872320 quantize.py:166] Skipping MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/Conv_1/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.496508 140262624872320 quantize.py:166] Skipping MobilenetV3/Conv_1/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/Conv_2/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:05.516201 140262624872320 quantize.py:166] Skipping MobilenetV3/Conv_2/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/Conv/hard_swish/add\n",
"I0322 12:41:05.544256 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/Conv/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv/hard_swish/mul\n",
"I0322 12:41:05.544425 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv/hard_swish/mul_1\n",
"I0322 12:41:05.550424 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/add\n",
"I0322 12:41:05.556917 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:05.557069 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_3/squeeze_excite/mul\n",
"I0322 12:41:05.562781 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_3/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/add\n",
"I0322 12:41:05.568799 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:05.568949 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_4/squeeze_excite/mul\n",
"I0322 12:41:05.575123 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_4/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/add\n",
"I0322 12:41:05.586392 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:05.586542 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_5/squeeze_excite/mul\n",
"I0322 12:41:05.592308 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_5/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_6/expand/hard_swish/add\n",
"I0322 12:41:05.598210 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_6/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/expand/hard_swish/mul\n",
"I0322 12:41:05.598367 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/expand/hard_swish/mul_1\n",
"I0322 12:41:05.604545 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/add\n",
"I0322 12:41:05.611803 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/mul\n",
"I0322 12:41:05.611978 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/mul_1\n",
"I0322 12:41:05.617933 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_7/expand/hard_swish/add\n",
"I0322 12:41:05.623836 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_7/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/expand/hard_swish/mul\n",
"I0322 12:41:05.623987 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/expand/hard_swish/mul_1\n",
"I0322 12:41:05.629654 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/add\n",
"I0322 12:41:05.636415 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/mul\n",
"I0322 12:41:05.636566 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/mul_1\n",
"I0322 12:41:05.642505 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_8/expand/hard_swish/add\n",
"I0322 12:41:05.649385 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_8/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/expand/hard_swish/mul\n",
"I0322 12:41:05.649575 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/expand/hard_swish/mul_1\n",
"I0322 12:41:05.656680 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/add\n",
"I0322 12:41:05.663244 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/mul\n",
"I0322 12:41:05.663433 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/mul_1\n",
"I0322 12:41:05.764501 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_9/expand/hard_swish/add\n",
"I0322 12:41:05.770634 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_9/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/expand/hard_swish/mul\n",
"I0322 12:41:05.770839 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/expand/hard_swish/mul_1\n",
"I0322 12:41:05.776671 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/add\n",
"I0322 12:41:05.784563 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/mul\n",
"I0322 12:41:05.784756 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/mul_1\n",
"I0322 12:41:05.791470 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_10/expand/hard_swish/add\n",
"I0322 12:41:05.797460 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_10/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/expand/hard_swish/mul\n",
"I0322 12:41:05.797621 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/expand/hard_swish/mul_1\n",
"I0322 12:41:05.803527 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/add\n",
"I0322 12:41:05.809509 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/mul\n",
"I0322 12:41:05.809664 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/mul_1\n",
"I0322 12:41:05.815487 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/add\n",
"I0322 12:41:05.821248 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:05.821406 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/squeeze_excite/mul\n",
"I0322 12:41:05.827160 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_11/expand/hard_swish/add\n",
"I0322 12:41:05.833335 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_11/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/expand/hard_swish/mul\n",
"I0322 12:41:05.833502 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/expand/hard_swish/mul_1\n",
"I0322 12:41:05.839228 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/add\n",
"I0322 12:41:05.845238 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/mul\n",
"I0322 12:41:05.845406 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/mul_1\n",
"I0322 12:41:05.851140 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/add\n",
"I0322 12:41:05.857118 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:05.857273 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/squeeze_excite/mul\n",
"I0322 12:41:05.862969 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_12/expand/hard_swish/add\n",
"I0322 12:41:05.868932 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_12/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/expand/hard_swish/mul\n",
"I0322 12:41:05.869133 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/expand/hard_swish/mul_1\n",
"I0322 12:41:05.874914 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/add\n",
"I0322 12:41:05.880767 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/mul\n",
"I0322 12:41:05.880938 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/mul_1\n",
"I0322 12:41:05.886850 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/add\n",
"I0322 12:41:05.892784 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:05.892934 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/squeeze_excite/mul\n",
"I0322 12:41:05.898849 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_13/expand/hard_swish/add\n",
"I0322 12:41:05.904875 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_13/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/expand/hard_swish/mul\n",
"I0322 12:41:05.905026 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/expand/hard_swish/mul_1\n",
"I0322 12:41:05.910909 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/add\n",
"I0322 12:41:05.916771 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/mul\n",
"I0322 12:41:05.916914 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/mul_1\n",
"I0322 12:41:05.922772 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/add\n",
"I0322 12:41:05.928575 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:05.928739 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/squeeze_excite/mul\n",
"I0322 12:41:05.934530 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_14/expand/hard_swish/add\n",
"I0322 12:41:05.941940 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_14/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/expand/hard_swish/mul\n",
"I0322 12:41:05.942129 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/expand/hard_swish/mul_1\n",
"I0322 12:41:05.949325 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/add\n",
"I0322 12:41:05.955785 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/mul\n",
"I0322 12:41:05.955953 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/mul_1\n",
"I0322 12:41:05.962155 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/add\n",
"I0322 12:41:05.968508 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:05.968672 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/squeeze_excite/mul\n",
"I0322 12:41:05.974924 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/Conv_1/hard_swish/add\n",
"I0322 12:41:05.981948 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/Conv_1/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_1/hard_swish/mul\n",
"I0322 12:41:05.982155 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_1/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_1/hard_swish/mul_1\n",
"I0322 12:41:05.990621 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_1/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/Conv_2/hard_swish/add\n",
"I0322 12:41:05.996969 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/Conv_2/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_2/hard_swish/mul\n",
"I0322 12:41:05.997169 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_2/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_2/hard_swish/mul_1\n",
"I0322 12:41:06.003360 140262624872320 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_2/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv/depthwise/add_fold\n",
"I0322 12:41:06.009312 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_1/expand/add_fold\n",
"I0322 12:41:06.009639 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_1/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_1/depthwise/add_fold\n",
"I0322 12:41:06.009843 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_1/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_2/expand/add_fold\n",
"I0322 12:41:06.010099 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_2/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_2/depthwise/add_fold\n",
"I0322 12:41:06.010307 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_2/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_3/expand/add_fold\n",
"I0322 12:41:06.010559 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_3/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_3/depthwise/add_fold\n",
"I0322 12:41:06.010751 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_3/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_4/expand/add_fold\n",
"I0322 12:41:06.010995 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_4/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_4/depthwise/add_fold\n",
"I0322 12:41:06.011168 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_4/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_5/expand/add_fold\n",
"I0322 12:41:06.011451 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_5/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_5/depthwise/add_fold\n",
"I0322 12:41:06.011631 140262624872320 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_5/depthwise/add_fold\n",
"WARNING:tensorflow:From eval_image_classifier.py:167: streaming_accuracy (from tensorflow.contrib.metrics.python.ops.metric_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please switch to tf.metrics.accuracy. Note that the order of the labels and predictions arguments has been switched.\n",
"W0322 12:41:06.017624 140262624872320 deprecation.py:323] From eval_image_classifier.py:167: streaming_accuracy (from tensorflow.contrib.metrics.python.ops.metric_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please switch to tf.metrics.accuracy. Note that the order of the labels and predictions arguments has been switched.\n",
"WARNING:tensorflow:From eval_image_classifier.py:169: streaming_recall_at_k (from tensorflow.contrib.metrics.python.ops.metric_ops) is deprecated and will be removed after 2016-11-08.\n",
"Instructions for updating:\n",
"Please use `streaming_sparse_recall_at_k`, and reshape labels from [batch_size] to [batch_size, 1].\n",
"W0322 12:41:06.028488 140262624872320 deprecation.py:323] From eval_image_classifier.py:169: streaming_recall_at_k (from tensorflow.contrib.metrics.python.ops.metric_ops) is deprecated and will be removed after 2016-11-08.\n",
"Instructions for updating:\n",
"Please use `streaming_sparse_recall_at_k`, and reshape labels from [batch_size] to [batch_size, 1].\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/metrics/python/ops/metric_ops.py:2167: streaming_mean (from tensorflow.contrib.metrics.python.ops.metric_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please switch to tf.metrics.mean\n",
"W0322 12:41:06.029940 140262624872320 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/metrics/python/ops/metric_ops.py:2167: streaming_mean (from tensorflow.contrib.metrics.python.ops.metric_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please switch to tf.metrics.mean\n",
"WARNING:tensorflow:From eval_image_classifier.py:175: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.\n",
"\n",
"W0322 12:41:06.041177 140262624872320 module_wrapper.py:139] From eval_image_classifier.py:175: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.\n",
"\n",
"WARNING:tensorflow:From eval_image_classifier.py:176: Print (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2018-08-20.\n",
"Instructions for updating:\n",
"Use tf.print instead of tf.Print. Note that tf.print returns a no-output operator that directly prints the output. Outside of defuns or eager mode, this operator will not be executed unless it is directly specified in session.run or used as a control dependency for other operators. This is only a concern in graph mode. Below is an example of how to ensure tf.print executes in graph mode:\n",
"\n",
"W0322 12:41:06.042168 140262624872320 deprecation.py:323] From eval_image_classifier.py:176: Print (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2018-08-20.\n",
"Instructions for updating:\n",
"Use tf.print instead of tf.Print. Note that tf.print returns a no-output operator that directly prints the output. Outside of defuns or eager mode, this operator will not be executed unless it is directly specified in session.run or used as a control dependency for other operators. This is only a concern in graph mode. Below is an example of how to ensure tf.print executes in graph mode:\n",
"\n",
"WARNING:tensorflow:From eval_image_classifier.py:177: The name tf.add_to_collection is deprecated. Please use tf.compat.v1.add_to_collection instead.\n",
"\n",
"W0322 12:41:06.042969 140262624872320 module_wrapper.py:139] From eval_image_classifier.py:177: The name tf.add_to_collection is deprecated. Please use tf.compat.v1.add_to_collection instead.\n",
"\n",
"WARNING:tensorflow:From eval_image_classifier.py:177: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.\n",
"\n",
"W0322 12:41:06.043135 140262624872320 module_wrapper.py:139] From eval_image_classifier.py:177: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.\n",
"\n",
"WARNING:tensorflow:From eval_image_classifier.py:186: The name tf.gfile.IsDirectory is deprecated. Please use tf.io.gfile.isdir instead.\n",
"\n",
"W0322 12:41:06.044659 140262624872320 module_wrapper.py:139] From eval_image_classifier.py:186: The name tf.gfile.IsDirectory is deprecated. Please use tf.io.gfile.isdir instead.\n",
"\n",
"WARNING:tensorflow:From eval_image_classifier.py:191: The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead.\n",
"\n",
"W0322 12:41:06.046604 140262624872320 module_wrapper.py:139] From eval_image_classifier.py:191: The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead.\n",
"\n",
"INFO:tensorflow:Evaluating /content/data/train/model.ckpt-2000\n",
"I0322 12:41:06.046752 140262624872320 eval_image_classifier.py:191] Evaluating /content/data/train/model.ckpt-2000\n",
"INFO:tensorflow:Starting evaluation at 2020-03-22T12:41:06Z\n",
"I0322 12:41:06.514400 140262624872320 evaluation.py:255] Starting evaluation at 2020-03-22T12:41:06Z\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/python/ops/array_ops.py:1475: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Use tf.where in 2.0, which has the same broadcast rule as np.where\n",
"W0322 12:41:06.909149 140262624872320 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/python/ops/array_ops.py:1475: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Use tf.where in 2.0, which has the same broadcast rule as np.where\n",
"INFO:tensorflow:Graph was finalized.\n",
"I0322 12:41:07.315060 140262624872320 monitored_session.py:240] Graph was finalized.\n",
"2020-03-22 12:41:07.328473: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n",
"2020-03-22 12:41:07.372072: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:07.372633: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2020-03-22 12:41:07.377331: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:41:07.388287: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2020-03-22 12:41:07.394781: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2020-03-22 12:41:07.402595: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2020-03-22 12:41:07.412567: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2020-03-22 12:41:07.418139: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2020-03-22 12:41:07.483994: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2020-03-22 12:41:07.484117: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:07.484669: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:07.485166: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0\n",
"2020-03-22 12:41:07.491007: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz\n",
"2020-03-22 12:41:07.491185: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1e4ed80 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\n",
"2020-03-22 12:41:07.491211: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n",
"2020-03-22 12:41:07.607548: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:07.608128: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1e4ef40 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n",
"2020-03-22 12:41:07.608153: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla T4, Compute Capability 7.5\n",
"2020-03-22 12:41:07.608315: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:07.608853: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2020-03-22 12:41:07.608934: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:41:07.608961: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2020-03-22 12:41:07.608982: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2020-03-22 12:41:07.609002: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2020-03-22 12:41:07.609027: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2020-03-22 12:41:07.609047: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2020-03-22 12:41:07.609067: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2020-03-22 12:41:07.609131: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:07.609663: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:07.610162: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0\n",
"2020-03-22 12:41:07.610228: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:41:07.611352: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:\n",
"2020-03-22 12:41:07.611385: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0 \n",
"2020-03-22 12:41:07.611395: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N \n",
"2020-03-22 12:41:07.611563: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:07.612102: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:07.612589: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.\n",
"2020-03-22 12:41:07.612627: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14221 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)\n",
"INFO:tensorflow:Restoring parameters from /content/data/train/model.ckpt-2000\n",
"I0322 12:41:07.614019 140262624872320 saver.py:1284] Restoring parameters from /content/data/train/model.ckpt-2000\n",
"INFO:tensorflow:Running local_init_op.\n",
"I0322 12:41:09.598004 140262624872320 session_manager.py:500] Running local_init_op.\n",
"INFO:tensorflow:Done running local_init_op.\n",
"I0322 12:41:09.679669 140262624872320 session_manager.py:502] Done running local_init_op.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/monitored_session.py:882: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"To construct input pipelines, use the `tf.data` module.\n",
"W0322 12:41:10.325245 140262624872320 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/python/training/monitored_session.py:882: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"To construct input pipelines, use the `tf.data` module.\n",
"2020-03-22 12:41:12.405945: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2020-03-22 12:41:14.253037: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"INFO:tensorflow:Evaluation [1/4]\n",
"I0322 12:41:15.946469 140262624872320 evaluation.py:167] Evaluation [1/4]\n",
"INFO:tensorflow:Evaluation [2/4]\n",
"I0322 12:41:16.157857 140262624872320 evaluation.py:167] Evaluation [2/4]\n",
"INFO:tensorflow:Evaluation [3/4]\n",
"I0322 12:41:16.452013 140262624872320 evaluation.py:167] Evaluation [3/4]\n",
"INFO:tensorflow:Evaluation [4/4]\n",
"I0322 12:41:16.831573 140262624872320 evaluation.py:167] Evaluation [4/4]\n",
"eval/Accuracy[0.9475]\n",
"eval/Recall_5[1]\n",
"INFO:tensorflow:Finished evaluation at 2020-03-22-12:41:17\n",
"I0322 12:41:17.405241 140262624872320 evaluation.py:275] Finished evaluation at 2020-03-22-12:41:17\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "JO_aJu27FWUr",
"colab_type": "text"
},
"source": [
"## Exporting the Inference Graph"
]
},
{
"cell_type": "code",
"metadata": {
"id": "jPHgi41ldKGR",
"colab_type": "code",
"colab": {}
},
"source": [
"!mkdir /content/data/output"
],
"execution_count": 0,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "COBu9u9YczCq",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"outputId": "6aad3547-4a02-47a8-c23a-404f4179bbb9"
},
"source": [
"!python export_inference_graph.py \\\n",
" --alsologtostderr \\\n",
" --model_name=mobilenet_v3_large \\\n",
" --image_size=224 \\\n",
" --output_file=/content/data/output/graph_template.pb \\\n",
" --dataset_name=flowers \\\n",
" --quantize"
],
"execution_count": 10,
"outputs": [
{
"output_type": "stream",
"text": [
"WARNING:tensorflow:From export_inference_graph.py:164: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.\n",
"\n",
"WARNING:tensorflow:From export_inference_graph.py:127: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.\n",
"\n",
"W0322 12:41:26.427026 140456666544000 module_wrapper.py:139] From export_inference_graph.py:127: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.\n",
"\n",
"WARNING:tensorflow:From export_inference_graph.py:127: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead.\n",
"\n",
"W0322 12:41:26.427184 140456666544000 module_wrapper.py:139] From export_inference_graph.py:127: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead.\n",
"\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/flowers.py:74: The name tf.FixedLenFeature is deprecated. Please use tf.io.FixedLenFeature instead.\n",
"\n",
"W0322 12:41:26.427738 140456666544000 module_wrapper.py:139] From /content/models/research/slim/datasets/flowers.py:74: The name tf.FixedLenFeature is deprecated. Please use tf.io.FixedLenFeature instead.\n",
"\n",
"WARNING:tensorflow:From /content/models/research/slim/datasets/dataset_utils.py:192: The name tf.gfile.Exists is deprecated. Please use tf.io.gfile.exists instead.\n",
"\n",
"W0322 12:41:26.429485 140456666544000 module_wrapper.py:139] From /content/models/research/slim/datasets/dataset_utils.py:192: The name tf.gfile.Exists is deprecated. Please use tf.io.gfile.exists instead.\n",
"\n",
"WARNING:tensorflow:From export_inference_graph.py:144: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n",
"\n",
"W0322 12:41:26.429801 140456666544000 module_wrapper.py:139] From export_inference_graph.py:144: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n",
"\n",
"INFO:tensorflow:Scale of 0 disables regularizer.\n",
"I0322 12:41:26.430730 140456666544000 regularizers.py:98] Scale of 0 disables regularizer.\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/layers/python/layers/layers.py:1057: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please use `layer.__call__` method instead.\n",
"W0322 12:41:26.433682 140456666544000 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/contrib/layers/python/layers/layers.py:1057: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Please use `layer.__call__` method instead.\n",
"INFO:tensorflow:Skipping MobilenetV3/Conv/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.268944 140456666544000 quantize.py:166] Skipping MobilenetV3/Conv/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:28.387311 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:28.449547 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:28.499917 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_6/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.517430 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_6/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_6/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.534293 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_6/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_7/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.565538 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_7/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_7/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.583052 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_7/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_8/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.600147 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_8/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_8/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.617190 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_8/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_9/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.634113 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_9/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_9/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.651849 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_9/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_10/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.668881 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_10/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_10/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.685606 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_10/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:28.713872 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_11/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.742263 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_11/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_11/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.759339 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_11/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:28.787643 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_12/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.804997 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_12/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_12/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.822724 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_12/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:28.853023 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_13/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.883369 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_13/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_13/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.902187 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_13/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:28.934814 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_14/expand/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.953597 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_14/expand/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_14/depthwise/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:28.970728 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_14/depthwise/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"I0322 12:41:28.999220 140456666544000 quantize.py:166] Skipping MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/Conv_1/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:29.016445 140456666544000 quantize.py:166] Skipping MobilenetV3/Conv_1/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping MobilenetV3/Conv_2/hard_swish/add, because its followed by an activation.\n",
"I0322 12:41:29.033942 140456666544000 quantize.py:166] Skipping MobilenetV3/Conv_2/hard_swish/add, because its followed by an activation.\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/Conv/hard_swish/add\n",
"I0322 12:41:29.062848 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/Conv/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv/hard_swish/mul\n",
"I0322 12:41:29.062985 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv/hard_swish/mul_1\n",
"I0322 12:41:29.068751 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/add\n",
"I0322 12:41:29.075206 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:29.075336 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_3/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_3/squeeze_excite/mul\n",
"I0322 12:41:29.081117 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_3/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/add\n",
"I0322 12:41:29.087319 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:29.087489 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_4/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_4/squeeze_excite/mul\n",
"I0322 12:41:29.093243 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_4/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/add\n",
"I0322 12:41:29.099253 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:29.099381 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_5/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_5/squeeze_excite/mul\n",
"I0322 12:41:29.105244 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_5/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_6/expand/hard_swish/add\n",
"I0322 12:41:29.111238 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_6/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/expand/hard_swish/mul\n",
"I0322 12:41:29.111355 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/expand/hard_swish/mul_1\n",
"I0322 12:41:29.117133 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/add\n",
"I0322 12:41:29.122980 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/mul\n",
"I0322 12:41:29.123105 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/mul_1\n",
"I0322 12:41:29.128825 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_6/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_7/expand/hard_swish/add\n",
"I0322 12:41:29.134654 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_7/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/expand/hard_swish/mul\n",
"I0322 12:41:29.134835 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/expand/hard_swish/mul_1\n",
"I0322 12:41:29.140606 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/add\n",
"I0322 12:41:29.146661 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/mul\n",
"I0322 12:41:29.146821 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/mul_1\n",
"I0322 12:41:29.152476 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_7/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_8/expand/hard_swish/add\n",
"I0322 12:41:29.158401 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_8/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/expand/hard_swish/mul\n",
"I0322 12:41:29.158554 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/expand/hard_swish/mul_1\n",
"I0322 12:41:29.164421 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/add\n",
"I0322 12:41:29.170382 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/mul\n",
"I0322 12:41:29.170506 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/mul_1\n",
"I0322 12:41:29.176252 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_8/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_9/expand/hard_swish/add\n",
"I0322 12:41:29.184230 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_9/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/expand/hard_swish/mul\n",
"I0322 12:41:29.184388 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/expand/hard_swish/mul_1\n",
"I0322 12:41:29.191516 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/add\n",
"I0322 12:41:29.198015 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/mul\n",
"I0322 12:41:29.198151 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/mul_1\n",
"I0322 12:41:29.204343 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_9/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_10/expand/hard_swish/add\n",
"I0322 12:41:29.210640 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_10/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/expand/hard_swish/mul\n",
"I0322 12:41:29.210793 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/expand/hard_swish/mul_1\n",
"I0322 12:41:29.217731 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/add\n",
"I0322 12:41:29.224292 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/mul\n",
"I0322 12:41:29.224427 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/mul_1\n",
"I0322 12:41:29.230766 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/add\n",
"I0322 12:41:29.236594 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:29.236747 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/squeeze_excite/mul\n",
"I0322 12:41:29.242491 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_10/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_11/expand/hard_swish/add\n",
"I0322 12:41:29.250259 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_11/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/expand/hard_swish/mul\n",
"I0322 12:41:29.250445 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/expand/hard_swish/mul_1\n",
"I0322 12:41:29.256935 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/add\n",
"I0322 12:41:29.262812 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/mul\n",
"I0322 12:41:29.262962 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/mul_1\n",
"I0322 12:41:29.268698 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/add\n",
"I0322 12:41:29.274562 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:29.274734 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/squeeze_excite/mul\n",
"I0322 12:41:29.280541 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_11/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_12/expand/hard_swish/add\n",
"I0322 12:41:29.286421 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_12/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/expand/hard_swish/mul\n",
"I0322 12:41:29.286574 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/expand/hard_swish/mul_1\n",
"I0322 12:41:29.292281 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/add\n",
"I0322 12:41:29.298114 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/mul\n",
"I0322 12:41:29.298270 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/mul_1\n",
"I0322 12:41:29.304485 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/add\n",
"I0322 12:41:29.311689 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:29.311875 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/squeeze_excite/mul\n",
"I0322 12:41:29.317672 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_12/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_13/expand/hard_swish/add\n",
"I0322 12:41:29.323544 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_13/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/expand/hard_swish/mul\n",
"I0322 12:41:29.323703 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/expand/hard_swish/mul_1\n",
"I0322 12:41:29.329457 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/add\n",
"I0322 12:41:29.335446 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/mul\n",
"I0322 12:41:29.335598 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/mul_1\n",
"I0322 12:41:29.341425 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/add\n",
"I0322 12:41:29.347586 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:29.347754 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/squeeze_excite/mul\n",
"I0322 12:41:29.353465 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_13/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_14/expand/hard_swish/add\n",
"I0322 12:41:29.359349 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_14/expand/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/expand/hard_swish/mul\n",
"I0322 12:41:29.359511 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/expand/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/expand/hard_swish/mul_1\n",
"I0322 12:41:29.366005 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/expand/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/add\n",
"I0322 12:41:29.371849 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/mul\n",
"I0322 12:41:29.372003 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/mul_1\n",
"I0322 12:41:29.377803 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/depthwise/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/add\n",
"I0322 12:41:29.383826 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/mul\n",
"I0322 12:41:29.383976 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/squeeze_excite/Conv_1/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/squeeze_excite/mul\n",
"I0322 12:41:29.487860 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/expanded_conv_14/squeeze_excite/mul\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/Conv_1/hard_swish/add\n",
"I0322 12:41:29.494193 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/Conv_1/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_1/hard_swish/mul\n",
"I0322 12:41:29.494354 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_1/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_1/hard_swish/mul_1\n",
"I0322 12:41:29.500041 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_1/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/Conv_2/hard_swish/add\n",
"I0322 12:41:29.505972 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/Conv_2/hard_swish/add\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_2/hard_swish/mul\n",
"I0322 12:41:29.506134 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_2/hard_swish/mul\n",
"INFO:tensorflow:Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_2/hard_swish/mul_1\n",
"I0322 12:41:29.512046 140456666544000 quantize.py:262] Inserting fake quant op activation_Mul_quant after MobilenetV3/Conv_2/hard_swish/mul_1\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv/depthwise/add_fold\n",
"I0322 12:41:29.518203 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_1/expand/add_fold\n",
"I0322 12:41:29.518500 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_1/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_1/depthwise/add_fold\n",
"I0322 12:41:29.518694 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_1/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_2/expand/add_fold\n",
"I0322 12:41:29.518958 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_2/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_2/depthwise/add_fold\n",
"I0322 12:41:29.519140 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_2/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_3/expand/add_fold\n",
"I0322 12:41:29.519384 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_3/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_3/depthwise/add_fold\n",
"I0322 12:41:29.519554 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_3/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_4/expand/add_fold\n",
"I0322 12:41:29.519839 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_4/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_4/depthwise/add_fold\n",
"I0322 12:41:29.520019 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_4/depthwise/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_5/expand/add_fold\n",
"I0322 12:41:29.520266 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_5/expand/add_fold\n",
"INFO:tensorflow:Skipping quant after MobilenetV3/expanded_conv_5/depthwise/add_fold\n",
"I0322 12:41:29.520424 140456666544000 quantize.py:299] Skipping quant after MobilenetV3/expanded_conv_5/depthwise/add_fold\n"
],
"name": "stdout"
}
]
},
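    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check (a sketch added for illustration, not part of the original export step), the cell below loads `graph_template.pb` and counts the `FakeQuant` nodes that the `--quantize` rewrite inserted. It assumes the output path used above."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Sanity-check sketch: the exported template graph should contain\n",
        "# FakeQuant* nodes inserted by the --quantize rewrite.\n",
        "import tensorflow as tf\n",
        "\n",
        "graph_def = tf.compat.v1.GraphDef()\n",
        "with tf.io.gfile.GFile('/content/data/output/graph_template.pb', 'rb') as f:\n",
        "    graph_def.ParseFromString(f.read())\n",
        "\n",
        "fake_quant_nodes = [n.name for n in graph_def.node if n.op.startswith('FakeQuant')]\n",
        "print('Total nodes     :', len(graph_def.node))\n",
        "print('FakeQuant nodes :', len(fake_quant_nodes))\n",
        "print('First few       :', fake_quant_nodes[:3])"
      ],
      "execution_count": 0,
      "outputs": []
    },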
{
"cell_type": "markdown",
"metadata": {
"id": "wIXoXZ54FoKK",
"colab_type": "text"
},
"source": [
"## Freezing the exported Graph"
]
},
{
"cell_type": "code",
"metadata": {
"id": "2pWPOvtYda0U",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"outputId": "a6bb549d-fdd9-4727-937e-29df0a9e2ba3"
},
"source": [
"!freeze_graph \\\n",
" --input_graph=/content/data/output/graph_template.pb \\\n",
" --input_checkpoint=/content/data/train/model.ckpt-2000 \\\n",
" --input_binary \\\n",
" --output_graph=/content/data/output/frozen_graph.pb \\\n",
" --output_node_names=MobilenetV3/Predictions/Softmax"
],
"execution_count": 11,
"outputs": [
{
"output_type": "stream",
"text": [
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/python/tools/freeze_graph.py:127: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Use standard file APIs to check for files with this prefix.\n",
"W0322 12:41:34.367049 140376120350592 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/python/tools/freeze_graph.py:127: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Use standard file APIs to check for files with this prefix.\n",
"2020-03-22 12:41:34.785942: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n",
"2020-03-22 12:41:34.825273: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:34.825832: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2020-03-22 12:41:34.826099: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:41:34.827619: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2020-03-22 12:41:34.835376: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2020-03-22 12:41:34.835682: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2020-03-22 12:41:34.837259: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2020-03-22 12:41:34.839446: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2020-03-22 12:41:34.843223: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2020-03-22 12:41:34.843325: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:34.843890: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:34.844371: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0\n",
"2020-03-22 12:41:34.849081: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz\n",
"2020-03-22 12:41:34.849256: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x13f6a00 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\n",
"2020-03-22 12:41:34.849281: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n",
"2020-03-22 12:41:34.949646: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:34.950307: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x13f6bc0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n",
"2020-03-22 12:41:34.950339: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla T4, Compute Capability 7.5\n",
"2020-03-22 12:41:34.950489: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:34.951042: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2020-03-22 12:41:34.951102: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:41:34.951122: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2020-03-22 12:41:34.951144: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2020-03-22 12:41:34.951162: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2020-03-22 12:41:34.951180: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2020-03-22 12:41:34.951199: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2020-03-22 12:41:34.951232: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2020-03-22 12:41:34.951300: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:34.951871: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:34.952348: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0\n",
"2020-03-22 12:41:34.952403: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:41:34.953342: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:\n",
"2020-03-22 12:41:34.953376: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0 \n",
"2020-03-22 12:41:34.953385: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N \n",
"2020-03-22 12:41:34.953488: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:34.954771: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:34.955262: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.\n",
"2020-03-22 12:41:34.955302: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14221 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)\n",
"INFO:tensorflow:Restoring parameters from /content/data/train/model.ckpt-2000\n",
"I0322 12:41:35.472354 140376120350592 saver.py:1284] Restoring parameters from /content/data/train/model.ckpt-2000\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/python/tools/freeze_graph.py:233: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Use `tf.compat.v1.graph_util.convert_variables_to_constants`\n",
"W0322 12:41:36.777435 140376120350592 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/python/tools/freeze_graph.py:233: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Use `tf.compat.v1.graph_util.convert_variables_to_constants`\n",
"WARNING:tensorflow:From /tensorflow-1.15.0/python3.6/tensorflow_core/python/framework/graph_util_impl.py:277: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Use `tf.compat.v1.graph_util.extract_sub_graph`\n",
"W0322 12:41:36.777666 140376120350592 deprecation.py:323] From /tensorflow-1.15.0/python3.6/tensorflow_core/python/framework/graph_util_impl.py:277: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.\n",
"Instructions for updating:\n",
"Use `tf.compat.v1.graph_util.extract_sub_graph`\n",
"INFO:tensorflow:Froze 716 variables.\n",
"I0322 12:41:37.218621 140376120350592 graph_util_impl.py:334] Froze 716 variables.\n",
"INFO:tensorflow:Converted 716 variables to const ops.\n",
"I0322 12:41:37.309269 140376120350592 graph_util_impl.py:394] Converted 716 variables to const ops.\n"
],
"name": "stdout"
}
]
},
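    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Optional check (a sketch, not part of the original workflow): after freezing, every variable should have been converted to a `Const` op and the requested output node should exist in the graph. The cell below assumes the paths used above."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Optional check sketch: the frozen graph should contain only Const ops\n",
        "# in place of variables, and the requested output node must be present.\n",
        "import tensorflow as tf\n",
        "\n",
        "graph_def = tf.compat.v1.GraphDef()\n",
        "with tf.io.gfile.GFile('/content/data/output/frozen_graph.pb', 'rb') as f:\n",
        "    graph_def.ParseFromString(f.read())\n",
        "\n",
        "ops = [n.op for n in graph_def.node]\n",
        "names = set(n.name for n in graph_def.node)\n",
        "print('Const ops          :', ops.count('Const'))\n",
        "print('Variable ops left  :', sum(op.startswith('Variable') for op in ops))\n",
        "print('Output node present:', 'MobilenetV3/Predictions/Softmax' in names)"
      ],
      "execution_count": 0,
      "outputs": []
    },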
{
"cell_type": "markdown",
"metadata": {
"id": "iqEJoS-BF0_4",
"colab_type": "text"
},
"source": [
"## Converting TF-Lite Full integer quantization model"
]
},
{
"cell_type": "code",
"metadata": {
"id": "9hPJs7ETeWc9",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 768
},
"outputId": "a6c35d0d-2c5e-42a6-a1f2-8cf33e685e8e"
},
"source": [
"!tflite_convert \\\n",
" --output_file=/content/data/output/output_tflite_graph.tflite \\\n",
" --graph_def_file=/content/data/output/frozen_graph.pb \\\n",
" --inference_type=QUANTIZED_UINT8 \\\n",
" --input_arrays=input \\\n",
" --output_arrays=MobilenetV3/Predictions/Softmax \\\n",
" --mean_values=128 \\\n",
" --std_dev_values=128 \\\n",
" --input_shapes=1,224,224,3"
],
"execution_count": 12,
"outputs": [
{
"output_type": "stream",
"text": [
"2020-03-22 12:41:42.526201: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\n",
"2020-03-22 12:41:42.560191: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:42.560921: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2020-03-22 12:41:42.561192: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:41:42.566026: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2020-03-22 12:41:42.567552: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2020-03-22 12:41:42.567870: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2020-03-22 12:41:42.572642: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2020-03-22 12:41:42.573868: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2020-03-22 12:41:42.580532: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2020-03-22 12:41:42.580673: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:42.581341: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:42.582069: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0\n",
"2020-03-22 12:41:42.588901: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz\n",
"2020-03-22 12:41:42.589113: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x12faa00 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\n",
"2020-03-22 12:41:42.589146: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n",
"2020-03-22 12:41:42.710810: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:42.711452: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x12fabc0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n",
"2020-03-22 12:41:42.711484: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla T4, Compute Capability 7.5\n",
"2020-03-22 12:41:42.711636: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:42.712174: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: \n",
"name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59\n",
"pciBusID: 0000:00:04.0\n",
"2020-03-22 12:41:42.712230: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:41:42.712253: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\n",
"2020-03-22 12:41:42.712274: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\n",
"2020-03-22 12:41:42.712295: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\n",
"2020-03-22 12:41:42.712312: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\n",
"2020-03-22 12:41:42.712329: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\n",
"2020-03-22 12:41:42.712346: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\n",
"2020-03-22 12:41:42.712407: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:42.712945: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:42.713413: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0\n",
"2020-03-22 12:41:42.713481: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\n",
"2020-03-22 12:41:42.714431: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:\n",
"2020-03-22 12:41:42.714456: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0 \n",
"2020-03-22 12:41:42.714466: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N \n",
"2020-03-22 12:41:42.714559: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:42.715113: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:983] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n",
"2020-03-22 12:41:42.715596: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.\n",
"2020-03-22 12:41:42.715633: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14221 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)\n"
],
"name": "stdout"
}
]
},
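    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To try the converted model, the sketch below (added for illustration) runs a single image through `tf.lite.Interpreter`. Because the converter was given `--mean_values=128 --std_dev_values=128` with `QUANTIZED_UINT8`, the interpreter expects raw `uint8` pixels in `[0, 255]`; the random input here is only a placeholder for a real 224x224 RGB image."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Inference sketch with the full-integer model.\n",
        "# The converter flags (mean=128, std_dev=128) mean the model takes raw uint8 pixels.\n",
        "import numpy as np\n",
        "import tensorflow as tf\n",
        "\n",
        "interpreter = tf.lite.Interpreter(\n",
        "    model_path='/content/data/output/output_tflite_graph.tflite')\n",
        "interpreter.allocate_tensors()\n",
        "input_details = interpreter.get_input_details()\n",
        "output_details = interpreter.get_output_details()\n",
        "\n",
        "# Placeholder input: replace with a real 224x224 RGB image as uint8.\n",
        "image = np.random.randint(0, 256, size=(1, 224, 224, 3), dtype=np.uint8)\n",
        "interpreter.set_tensor(input_details[0]['index'], image)\n",
        "interpreter.invoke()\n",
        "\n",
        "scores = interpreter.get_tensor(output_details[0]['index'])[0]\n",
        "print('Top class index:', int(np.argmax(scores)))"
      ],
      "execution_count": 0,
      "outputs": []
    },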
{
"cell_type": "markdown",
"metadata": {
"id": "qBcveNUdGGBx",
"colab_type": "text"
},
"source": [
"## **[Optional]** Compiling Edge TPU model\n",
"Note: Edge TPU Compiler (version 2.0.291256449) does not support Hard-swish ope. <br>For this reason, most opes are offloaded to the CPU."
]
},
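    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For reference only (a sketch that assumes a Coral device with the Edge TPU runtime and `tflite_runtime` package installed; it will not run in Colab): once the compiler run below has produced `output_tflite_graph_edgetpu.tflite`, it can be loaded on-device with the Edge TPU delegate roughly as follows."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# On-device inference sketch (Coral device, not runnable in Colab).\n",
        "# Assumes tflite_runtime and libedgetpu are installed on the device.\n",
        "import numpy as np\n",
        "from tflite_runtime.interpreter import Interpreter, load_delegate\n",
        "\n",
        "interpreter = Interpreter(\n",
        "    model_path='output_tflite_graph_edgetpu.tflite',\n",
        "    experimental_delegates=[load_delegate('libedgetpu.so.1')])\n",
        "interpreter.allocate_tensors()\n",
        "input_details = interpreter.get_input_details()\n",
        "output_details = interpreter.get_output_details()\n",
        "\n",
        "image = np.zeros((1, 224, 224, 3), dtype=np.uint8)  # placeholder input\n",
        "interpreter.set_tensor(input_details[0]['index'], image)\n",
        "interpreter.invoke()\n",
        "print(interpreter.get_tensor(output_details[0]['index'])[0].argmax())"
      ],
      "execution_count": 0,
      "outputs": []
    },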
{
"cell_type": "code",
"metadata": {
"id": "D3ADSER5exeP",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 802
},
"outputId": "2a0d22ec-b7cb-4c51-a1e8-1985faf498b0"
},
"source": [
"!echo \"deb https://packages.cloud.google.com/apt coral-edgetpu-stable main\" | tee /etc/apt/sources.list.d/coral-edgetpu.list\n",
"!curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -\n",
"!sudo apt-get update\n",
"!apt-get install edgetpu-compiler"
],
"execution_count": 13,
"outputs": [
{
"output_type": "stream",
"text": [
"deb https://packages.cloud.google.com/apt coral-edgetpu-stable main\n",
" % Total % Received % Xferd Average Speed Time Time Time Current\n",
" Dload Upload Total Spent Left Speed\n",
"100 653 100 653 0 0 11872 0 --:--:-- --:--:-- --:--:-- 11872\n",
"OK\n",
"Get:1 https://packages.cloud.google.com/apt coral-edgetpu-stable InRelease [6,332 B]\n",
"Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]\n",
"Ign:3 https://packages.cloud.google.com/apt coral-edgetpu-stable/main amd64 Packages\n",
"Get:4 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/ InRelease [3,626 B]\n",
"Ign:5 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease\n",
"Hit:6 http://archive.ubuntu.com/ubuntu bionic InRelease\n",
"Hit:7 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease\n",
"Get:3 https://packages.cloud.google.com/apt coral-edgetpu-stable/main amd64 Packages [1,390 B]\n",
"Get:8 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]\n",
"Get:9 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [832 kB]\n",
"Get:10 http://ppa.launchpad.net/marutter/c2d4u3.5/ubuntu bionic InRelease [15.4 kB]\n",
"Ign:11 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease\n",
"Hit:12 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release\n",
"Hit:13 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release\n",
"Get:16 http://ppa.launchpad.net/marutter/c2d4u3.5/ubuntu bionic/main Sources [1,784 kB]\n",
"Get:17 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]\n",
"Get:18 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [857 kB]\n",
"Get:19 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [1,151 kB]\n",
"Get:20 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1,361 kB]\n",
"Get:21 http://ppa.launchpad.net/marutter/c2d4u3.5/ubuntu bionic/main amd64 Packages [861 kB]\n",
"Fetched 7,125 kB in 7s (1,043 kB/s)\n",
"Reading package lists... Done\n",
"Reading package lists... Done\n",
"Building dependency tree \n",
"Reading state information... Done\n",
"The following NEW packages will be installed:\n",
" edgetpu-compiler\n",
"0 upgraded, 1 newly installed, 0 to remove and 25 not upgraded.\n",
"Need to get 4,500 kB of archives.\n",
"After this operation, 16.6 MB of additional disk space will be used.\n",
"Get:1 https://packages.cloud.google.com/apt coral-edgetpu-stable/main amd64 edgetpu-compiler amd64 13.0 [4,500 kB]\n",
"Fetched 4,500 kB in 1s (8,005 kB/s)\n",
"Selecting previously unselected package edgetpu-compiler.\n",
"(Reading database ... 144542 files and directories currently installed.)\n",
"Preparing to unpack .../edgetpu-compiler_13.0_amd64.deb ...\n",
"Unpacking edgetpu-compiler (13.0) ...\n",
"Setting up edgetpu-compiler (13.0) ...\n",
"Processing triggers for libc-bin (2.27-3ubuntu1) ...\n",
"/sbin/ldconfig.real: /usr/local/lib/python3.6/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link\n",
"\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "5Q9Ujgs6gTbZ",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
},
"outputId": "596116c6-42c8-4404-b710-9c2924e76905"
},
"source": [
"!edgetpu_compiler -v"
],
"execution_count": 14,
"outputs": [
{
"output_type": "stream",
"text": [
"Edge TPU Compiler version 2.0.291256449\n"
],
"name": "stdout"
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "MEn70WkCgeQg",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 564
},
"outputId": "26f8cda2-55c5-4e96-a822-2077b169ec8a"
},
"source": [
"!edgetpu_compiler -s -m 13 /content/data/output/output_tflite_graph.tflite"
],
"execution_count": 15,
"outputs": [
{
"output_type": "stream",
"text": [
"Edge TPU Compiler version 2.0.291256449\n",
"\n",
"Model compiled successfully in 28 ms.\n",
"\n",
"Input model: /content/data/output/output_tflite_graph.tflite\n",
"Input size: 4.11MiB\n",
"Output model: output_tflite_graph_edgetpu.tflite\n",
"Output size: 4.14MiB\n",
"On-chip memory available for caching model parameters: 8.05MiB\n",
"On-chip memory used for caching model parameters: 2.50KiB\n",
"Off-chip memory used for streaming uncached model parameters: 0.00B\n",
"Number of Edge TPU subgraphs: 1\n",
"Total number of operations: 131\n",
"Operation log: output_tflite_graph_edgetpu.log\n",
"\n",
"Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.\n",
"Number of operations that will run on Edge TPU: 1\n",
"Number of operations that will run on CPU: 130\n",
"\n",
"Operator Count Status\n",
"\n",
"AVERAGE_POOL_2D 2 More than one subgraph is not supported\n",
"MUL 16 More than one subgraph is not supported\n",
"CONV_2D 48 More than one subgraph is not supported\n",
"CONV_2D 1 Mapped to Edge TPU\n",
"DEPTHWISE_CONV_2D 15 More than one subgraph is not supported\n",
"HARD_SWISH 21 Operation not supported\n",
"RESHAPE 1 More than one subgraph is not supported\n",
"MEAN 8 More than one subgraph is not supported\n",
"SOFTMAX 1 More than one subgraph is not supported\n",
"ADD 18 More than one subgraph is not supported\n"
],
"name": "stdout"
}
]
},
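{
"cell_type": "markdown",
"metadata": {
"id": "addedEdgeTpuRunMd",
"colab_type": "text"
},
"source": [
"### **[Optional]** Running the compiled model on a Coral device\n",
"A minimal deployment sketch, not runnable in this Colab: it assumes a Coral device with `libedgetpu.so.1` and the `tflite_runtime` package installed, and that the `output_tflite_graph_edgetpu.tflite` file produced by the compiler above has been copied to the device. Because most ops fall back to the CPU here, do not expect an Edge TPU speed-up from this particular model."
]
},
{
"cell_type": "code",
"metadata": {
"id": "addedEdgeTpuRunCode",
"colab_type": "code",
"colab": {}
},
"source": [
"# Deployment sketch for a Coral device (not runnable in this Colab):\n",
"# assumes tflite_runtime and libedgetpu are installed on the device and\n",
"# that output_tflite_graph_edgetpu.tflite is in the working directory.\n",
"import numpy as np\n",
"import tflite_runtime.interpreter as tflite\n",
"\n",
"interpreter = tflite.Interpreter(\n",
"    model_path='output_tflite_graph_edgetpu.tflite',\n",
"    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])\n",
"interpreter.allocate_tensors()\n",
"\n",
"input_details = interpreter.get_input_details()\n",
"output_details = interpreter.get_output_details()\n",
"\n",
"# Feed a dummy uint8 image just to exercise the interpreter end to end.\n",
"dummy = np.zeros(input_details[0]['shape'], dtype=np.uint8)\n",
"interpreter.set_tensor(input_details[0]['index'], dummy)\n",
"interpreter.invoke()\n",
"print(interpreter.get_tensor(output_details[0]['index']).shape)"
],
"execution_count": 0,
"outputs": []
}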
]
}