Skip to content

Instantly share code, notes, and snippets.

Show Gist options
  • Save helena-intel/c167321835944b5fde8d583e7a3f8e10 to your computer and use it in GitHub Desktop.
Save helena-intel/c167321835944b5fde8d583e7a3f8e10 to your computer and use it in GitHub Desktop.
101-tensorflow-to-openvino
Display the source blob
Display the rendered blob
Raw
{"cells": [{"cell_type": "markdown", "metadata": {"id": "JwEAhQVzkAwA"}, "source": "# Tensorflow to OpenVINO\n\nThis short tutorial shows how to convert a TensorFlow MobilenetV3 model to OpenVINO IR format.\nModel source: https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/mobilenet-v3-small-1.0-224-tf/mobilenet-v3-small-1.0-224-tf.md", "id": "bb757eee"}, {"id": "b343dd6e", "cell_type": "markdown", "source": "## Preparation\n\nInstall the requirements and download the files that are necessary for running this notebook.\n\n**NOTE:** installation may take a while. It is recommended to restart the Jupyter kernel after installing the packages. Choose *Kernel->Restart Kernel* in Jupyter Notebook or Lab, or *Runtime->Restart runtime* in Google Colab.", "metadata": {}}, {"id": "30a5d34c", "cell_type": "code", "metadata": {}, "execution_count": null, "source": "# Install or upgrade required Python packages. Install specific versions of some packages to ensure compatibility.\n!pip install tensorflow networkx defusedxml numpy==1.18.* openvino-dev opencv-python-headless==4.2.0.32 ipython>7.0 ipywidgets>=7.4", "outputs": []}, {"id": "5a13d9df", "cell_type": "code", "metadata": {}, "execution_count": null, "source": "# Download image and model files\nimport os\nimport pip\nimport urllib.parse\nimport urllib.request\nfrom pathlib import Path\n\nurls = ['https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/main/notebooks/101-tensorflow-to-openvino/imagenet_class_index.json', 'https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/main/notebooks/101-tensorflow-to-openvino/coco.jpg', 'https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/main/notebooks/101-tensorflow-to-openvino/models/v3-small_224_1.0_float.pb']\n\nnotebook_url = \"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/main\"\n\nfor url in urls:\n save_path = Path(url).relative_to(fr\"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/main/notebooks/101-tensorflow-to-openvino\")\n os.makedirs(save_path.parent, exist_ok=True)\n safe_url = urllib.parse.quote(url, safe=\":/\")\n\n urllib.request.urlretrieve(safe_url, save_path.as_posix())", "outputs": []}, {"cell_type": "markdown", "metadata": {"id": "QB4Yo-rGGLmV"}, "source": "### Imports", "id": "b1efd181"}, {"cell_type": "code", "execution_count": null, "metadata": {"id": "2ynWRum4iiTz"}, "outputs": [], "source": "import json\nimport sys\nimport time\nfrom pathlib import Path\n\nimport cv2\nimport matplotlib.pyplot as plt\nimport mo_tf\nimport numpy as np\nfrom openvino.inference_engine import IECore", "id": "405ad82c"}, {"cell_type": "markdown", "metadata": {}, "source": "### Settings", "id": "c295ce78"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": "# The paths of the source and converted models\nmodel_path = Path(\"models/v3-small_224_1.0_float.pb\")\nir_path = Path(model_path).with_suffix(\".xml\")", "id": "12e0f34f"}, {"cell_type": "markdown", "metadata": {"id": "6JSoEIk60uxV"}, "source": "## Convert the Model to OpenVINO IR Format\n\n### Convert TensorFlow model to OpenVINO IR Format\n\nCall the OpenVINO Model Optimizer tool to convert the TensorFlow model to OpenVINO IR, with FP16 precision. The models are saved to the current directory. We add the mean values to the model and scale the output with the standard deviation with `--scale_values`. With these options, it is not necessary to normalize input data before propagating it through the network. The original model expects input images in RGB format. The converted model also expects images in RGB format. If you want the converted model to work with BGR images, you can use the `--reverse-input-channels` option. See the [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) for more information about Model Optimizer, including a description of the command line options. Check the [model documentation](https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/public/mobilenet-v3-small-1.0-224-tf/mobilenet-v3-small-1.0-224-tf.md) for information about the model, including input shape, expected color order and mean values. \n\nWe first construct the command for Model Optimizer, and then execute this command in the notebook by prepending the command with a `!`. There may be some errors or warnings in the output. Model Optimization was succesful if the last lines of the output include `[ SUCCESS ] Generated IR version 10 model.\n`", "id": "412e7eff"}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": []}, "outputs": [], "source": "# Get the path to the Model Optimizer script\nmo_path = str(Path(mo_tf.__file__))\n\n# Construct the command for Model Optimizer\nmo_command = f\"\"\"\"{sys.executable}\"\n \"{mo_path}\" \n --input_model \"{model_path}\" \n --input_shape \"[1,224,224,3]\" \n --mean_values=\"[127.5,127.5,127.5]\"\n --scale_values=\"[127.5]\" \n --data_type FP16 \n --output_dir \"{model_path.parent}\"\n \"\"\"\nmo_command = \" \".join(mo_command.split())\nprint(\"Model Optimizer command to convert TensorFlow to OpenVINO:\")\nprint(mo_command)", "id": "6f1cf223"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": "# Run Model Optimizer if the IR model file does not exist\nif not ir_path.exists():\n print(\"Exporting TensorFlow model to IR... This may take a few minutes.\")\n ! $mo_command\nelse:\n print(f\"IR model {ir_path} already exists.\")", "id": "934d8902"}, {"cell_type": "markdown", "metadata": {}, "source": "## Test Inference on the Converted Model", "id": "854c117e"}, {"cell_type": "markdown", "metadata": {}, "source": "### Load the model", "id": "e030ddfa"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": "ie = IECore()\nnet = ie.read_network(str(ir_path), str(ir_path.with_suffix(\".bin\")))\nexec_net = ie.load_network(net, \"CPU\")", "id": "f26a2ad3"}, {"cell_type": "markdown", "metadata": {}, "source": "### Get model information", "id": "6ba0acf1"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": "input_key = list(exec_net.input_info)[0]\noutput_key = list(exec_net.outputs.keys())[0]\nnetwork_input_shape = exec_net.input_info[input_key].tensor_desc.dims", "id": "f43a9062"}, {"cell_type": "markdown", "metadata": {}, "source": "### Load image\n\nLoad an image, resize it, and convert it to network input shape", "id": "fb6412f9"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": "image = cv2.cvtColor(cv2.imread(\"coco.jpg\"), cv2.COLOR_BGR2RGB)\n# Resize image to network input image shape\nresized_image = cv2.resize(image, (224, 224))\n# Transpose image to network input shape\ninput_image = np.reshape(resized_image, network_input_shape) / 255\ninput_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0)\nplt.imshow(image)", "id": "316ac441"}, {"cell_type": "markdown", "metadata": {}, "source": "### Do inference", "id": "4fb37bee"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": "result = exec_net.infer(inputs={input_key: input_image})[output_key]\nresult_index = np.argmax(result)", "id": "fa75263d"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": "imagenet_classes = json.loads(open(\"imagenet_class_index.json\").read())\nimagenet_classes = {\n int(key) + 1: value for key, value in imagenet_classes.items()\n}\nimagenet_classes[result_index]", "id": "96a017aa"}, {"cell_type": "markdown", "metadata": {}, "source": "## Timing\n\nMeasure the time it takes to do inference on thousand images. This gives an indication of performance. For more accurate benchmarking, use the [OpenVINO benchmark tool](https://github.com/openvinotoolkit/openvino/tree/master/inference-engine/tools/benchmark_tool). Note that many optimizations are possible to improve the performance. ", "id": "43f0f3ea"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": "num_images = 1000\nstart = time.perf_counter()\nfor _ in range(num_images):\n exec_net.infer(inputs={input_key: input_image})\nend = time.perf_counter()\ntime_ir = end - start\nprint(\n f\"IR model in Inference Engine/CPU: {time_ir/num_images:.4f} \"\n f\"seconds per image, FPS: {num_images/time_ir:.2f}\"\n)", "id": "7dab2c5f"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": "", "id": "9386c4f5"}], "metadata": {"colab": {"collapsed_sections": [], "name": "OpenVINO 2021.3 PIP installer - PyTorch Image Segmentation.ipynb", "provenance": [], "toc_visible": true}, "kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.8"}}, "nbformat": 4, "nbformat_minor": 5}
libpython3.7-dev
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment