Skip to content

Instantly share code, notes, and snippets.

Show Gist options
  • Save helena-intel/3bf9e91c605645f7a8af9c704ac105c8 to your computer and use it in GitHub Desktop.
Save helena-intel/3bf9e91c605645f7a8af9c704ac105c8 to your computer and use it in GitHub Desktop.
102-pytorch-onnx-to-openvino
Display the source blob
Display the rendered blob
Raw
{"cells": [{"cell_type": "markdown", "metadata": {"id": "JwEAhQVzkAwA"}, "source": "# OpenVINO ONNX demo\n\nThis tutorial demostrates step-by-step instructions to perform inference on a PyTorch semantic segmentation model using [OpenVINO](https://github.com/openvinotoolkit/openvino)\n\nThe PyTorch model is converted to ONNX and loaded with OpenVINO. The model is pretrained on [CityScapes](https://www.cityscapes-dataset.com). The model source is https://github.com/ekzhang/fastseg", "id": "1ae64102"}, {"id": "a8c275bf", "cell_type": "markdown", "source": "## Preparation\n\nInstall the requirements and download the files that are necessary for running this notebook.\n\n**NOTE:** installation may take a while. It is recommended to restart the Jupyter kernel after installing the packages. Choose *Kernel->Restart Kernel* in Jupyter Notebook or Lab, or *Runtime->Restart runtime* in Google Colab.", "metadata": {}}, {"id": "a565dc2f", "cell_type": "code", "metadata": {}, "execution_count": null, "source": "# Install or upgrade required Python packages. Install specific versions of some packages to ensure compatibility.\n!pip install geffnet==0.9.8 fastseg onnx ipywidgets networkx defusedxml --find-links https://download.pytorch.org/whl/torch_stable.html torch==1.5.1+cpu torchvision==0.6.1+cpu numpy==1.18.* openvino-dev opencv-python-headless==4.2.0.32 ipython>7.0 ipywidgets>=7.4", "outputs": []}, {"id": "12ef55a3", "cell_type": "code", "metadata": {}, "execution_count": null, "source": "# Download image and model files\nimport os\nimport pip\nimport urllib.parse\nimport urllib.request\nfrom pathlib import Path\n\nurls = ['https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/main/notebooks/102-pytorch-onnx-to-openvino/coco_cross.png']\n\nnotebook_url = \"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/main\"\n\nfor url in urls:\n save_path = Path(url).relative_to(fr\"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/main/notebooks/102-pytorch-onnx-to-openvino\")\n os.makedirs(save_path.parent, exist_ok=True)\n safe_url = urllib.parse.quote(url, safe=\":/\")\n\n urllib.request.urlretrieve(safe_url, save_path.as_posix())", "outputs": []}, {"cell_type": "markdown", "metadata": {}, "source": "## Preparation", "id": "2b58939e"}, {"cell_type": "markdown", "metadata": {"id": "QB4Yo-rGGLmV"}, "source": "### Import the PyTorch Library and Fastseg", "id": "843e9b78"}, {"cell_type": "code", "execution_count": null, "metadata": {"id": "2ynWRum4iiTz"}, "outputs": [], "source": "import sys\nimport time\nfrom pathlib import Path\n\nimport cv2\nimport matplotlib.pyplot as plt\nimport mo_onnx\nimport numpy as np\nimport torch\nfrom fastseg import MobileV3Large\nfrom openvino.inference_engine import IECore", "id": "8da53809"}, {"cell_type": "markdown", "metadata": {}, "source": "### Settings", "id": "d353967b"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": "# The filenames of the downloaded and converted models\nBASE_MODEL_NAME = \"fastseg\"\nmodel_path = Path(BASE_MODEL_NAME).with_suffix(\".pth\")\nonnx_path = model_path.with_suffix(\".onnx\")\nir_path = model_path.with_suffix(\".xml\")", "id": "0b49921b"}, {"cell_type": "markdown", "metadata": {"id": "u5xKw0hR0jq6"}, "source": "### Download the Fastseg Model\n\nThis downloads and loads the model and pretrained weights. It may take some time.", "id": "67be90ff"}, {"cell_type": "code", "execution_count": null, "metadata": {"colab": {"base_uri": "https://localhost:8080/"}, "id": "xGKkMRfvi0op", "outputId": "4eb1f9af-a4c5-424c-f808-dd9cc2600975"}, "outputs": [], "source": "print(\n \"Downloading the Fastseg model (if it has not been downloaded before)....\"\n)\nmodel = MobileV3Large.from_pretrained().cpu().eval()\nprint(\"Loaded PyTorch Fastseg model\")\n\n# Save the model\nif not model_path.exists():\n print(\"\\nSaving the model\")\n torch.save(model.state_dict(), str(model_path))\n print(f\"Model saved at {model_path}\")", "id": "832be3dc"}, {"cell_type": "markdown", "metadata": {"id": "Rhc_7EObUypw"}, "source": "## ONNX Model Conversion\n\n### Convert PyTorch model to ONNX\n\nThe output for this cell will show some warnings. These are most likely harmless. Conversion succeeded if the last line of the output says `ONNX model exported to fastseg.onnx.` ", "id": "916ab6ec"}, {"cell_type": "code", "execution_count": null, "metadata": {"colab": {"base_uri": "https://localhost:8080/"}, "id": "ipQWpbgQUxoo", "outputId": "bbc1734a-c2a2-4261-ed45-264b9e3edd00"}, "outputs": [], "source": "if not onnx_path.exists():\n dummy_input = torch.randn(1, 3, 512, 1024)\n # For the Fastseg model, setting do_constant_folding to False is required\n # for PyTorch>1.5.1\n torch.onnx.export(\n model,\n dummy_input,\n onnx_path,\n opset_version=11,\n do_constant_folding=False,\n )\n print(f\"ONNX model exported to {onnx_path}.\")\nelse:\n print(f\"ONNX model {onnx_path} already exists.\")", "id": "812e9da8"}, {"cell_type": "markdown", "metadata": {"id": "6JSoEIk60uxV"}, "source": "### Convert ONNX model to OpenVINO IR Format\n\nCall the OpenVINO Model Optimizer tool to convert the ONNX model to OpenVINO IR, with FP16 precision. The models are saved to the current directory. We add the mean values to the model and scale the output with the standard deviation with `--scale_values`. With these options, it is not necessary to normalize input data before propagating it through the network.\n\nSee the [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) for more information about Model Optimizer.", "id": "229e5349"}, {"cell_type": "markdown", "metadata": {}, "source": "Call the OpenVINO Model Optimizer tool to convert the ONNX model to OpenVINO IR, with FP16 precision. Executing this command may take a while. There may be some errors or warnings in the output. Model Optimization was succesful if the last lines of the output include `[ SUCCESS ] Generated IR version 10 model.\n`", "id": "f2539c2a"}, {"cell_type": "code", "execution_count": null, "metadata": {"tags": []}, "outputs": [], "source": "# Get the path to the Model Optimizer script\nmo_path = str(Path(mo_onnx.__file__))\n\n# Construct the command for Model Optimizer\nmo_command = f\"\"\"\"{sys.executable}\"\n \"{mo_path}\"\n --input_model \"{onnx_path}\"\n --input_shape \"[1,3, 512, 1024]\"\n --mean_values=\"[123.675, 116.28 , 103.53]\"\n --scale_values=\"[58.395, 57.12 , 57.375]\"\n --data_type FP16\n --output_dir \"{model_path.parent}\"\n \"\"\"\nmo_command = \" \".join(mo_command.split())\nprint(\"Model Optimizer command to convert the ONNX model to OpenVINO:\")\nprint(mo_command)", "id": "ddd9c77f"}, {"cell_type": "code", "execution_count": null, "metadata": {"id": "6YUwrq7QWSzw", "tags": []}, "outputs": [], "source": "if not ir_path.exists():\n print(\"Exporting ONNX model to IR... This may take a few minutes.\")\n ! $mo_command\nelse:\n print(f\"IR model {ir_path} already exists.\")", "id": "64fc556a"}, {"cell_type": "markdown", "metadata": {"id": "FAGmlKQ83ecE"}, "source": "## Show Results\n\n### Define Preprocessing and Display Functions\n\nFor the OpenVINO model, normalization is moved to the model. For the ONNX and PyTorch models, images need to be normalized before propagating through the network.", "id": "e589ffd8"}, {"cell_type": "code", "execution_count": null, "metadata": {"id": "QTOoQnSetzQM", "tags": []}, "outputs": [], "source": "def normalize(image: np.ndarray) -> np.ndarray:\n \"\"\"\n Normalize the image to the given mean and standard deviation\n for CityScapes models.\n \"\"\"\n image = image.astype(np.float32)\n mean = (0.485, 0.456, 0.406)\n std = (0.229, 0.224, 0.225)\n image /= 255.0\n image -= mean\n image /= std\n return image\n\n\ndef show_image_and_result(\n image: np.ndarray, result: np.ndarray, cmap=\"viridis\"\n):\n \"\"\"\n Show the original image and the segmentation map side by side. The\n segmentation map is resized to the size of the original image,\n and visualized with the colormap given by `cmap`.\n\n :param image: the original RGB image\n :param result: the network result after argmax, with np.uint8 datatype\n \"\"\"\n # Resize result segmentation map to original image size\n resized_result = cv2.resize(result, tuple(image.shape[:2][::-1]))\n fig, ax = plt.subplots(1, 2, figsize=(25, 8))\n ax[0].imshow(image)\n ax[1].imshow(resized_result, cmap=cmap)\n for a in ax:\n a.axis(\"off\")", "id": "bd9e47e5"}, {"cell_type": "markdown", "metadata": {"id": "JyD5EKka34Wd"}, "source": "### Load and Pre-process an Input Image\n\n---\n\n\n", "id": "4b878e9d"}, {"cell_type": "code", "execution_count": null, "metadata": {"colab": {"base_uri": "https://localhost:8080/"}, "id": "DGFW5VXL3x9G", "outputId": "300eacff-c6de-4eb5-e99a-8def5260da1a", "tags": []}, "outputs": [], "source": "image = cv2.cvtColor(cv2.imread(\"coco_cross.png\"), cv2.COLOR_BGR2RGB)\nresized_image = cv2.resize(image, (1024, 512))\nnormalized_image = normalize(resized_image)\n\n# Convert the image shape to shape and data type expected by network\n# for OpenVINO IR model\ninput_image = np.expand_dims(np.transpose(resized_image, (2, 0, 1)), 0)\n# for ONNX and PyTorch models\nnormalized_input_image = np.expand_dims(\n np.transpose(normalized_image, (2, 0, 1)), 0\n)", "id": "aee4e74d"}, {"cell_type": "markdown", "metadata": {"id": "FnEiEbNq4Csh"}, "source": "### Load the OpenVINO IR network and Run Inference on the ONNX model\n\nInference Engine can load ONNX models directly. We first load the ONNX model, do inference and show the results. After that we load the model that was converted to Intermediate Representation (IR) with Model Optimizer and do inference on that model and show the results.\n\n#### 1. ONNX model in Inference Engine", "id": "ee85f1b2"}, {"cell_type": "code", "execution_count": null, "metadata": {"id": "otfT6EDk03KV", "tags": []}, "outputs": [], "source": "# Load network to Inference Engine\nie = IECore()\nnet_onnx = ie.read_network(model=onnx_path)\nexec_net_onnx = ie.load_network(network=net_onnx, device_name=\"CPU\")\n\ninput_layer_onnx = next(iter(exec_net_onnx.input_info))\noutput_layer_onnx = next(iter(exec_net_onnx.outputs))\n\n# Run the Inference on the Input image...\nres_onnx = exec_net_onnx.infer(\n inputs={input_layer_onnx: normalized_input_image}\n)\nres_onnx = res_onnx[output_layer_onnx]", "id": "ad912b7d"}, {"cell_type": "code", "execution_count": null, "metadata": {"colab": {"base_uri": "https://localhost:8080/", "height": 348}, "id": "q8NRH8hLuWeV", "outputId": "8b17f90e-b3bc-456f-b6f3-e9cd4f743b2f", "tags": []}, "outputs": [], "source": "# Convert network result to segmentation map and display the result\nresult_mask_onnx = np.squeeze(np.argmax(res_onnx, axis=1)).astype(np.uint8)\nshow_image_and_result(image, result_mask_onnx)", "id": "4fc3b6c1"}, {"cell_type": "markdown", "metadata": {"id": "FnEiEbNq4Csh"}, "source": "#### 2. IR model in Inference Engine", "id": "8e4239ce"}, {"cell_type": "code", "execution_count": null, "metadata": {"id": "otfT6EDk03KV", "tags": []}, "outputs": [], "source": "# Load network to Inference Engine\nie = IECore()\nnet_ir = ie.read_network(model=ir_path)\nexec_net_ir = ie.load_network(network=net_ir, device_name=\"CPU\")\n\n# Get names of input and output layers\ninput_layer_ir = next(iter(exec_net_ir.input_info))\noutput_layer_ir = next(iter(exec_net_ir.outputs))\n\n# Run the Inference on the Input image...\nres_ir = exec_net_ir.infer(inputs={input_layer_ir: input_image})\nres_ir = res_ir[output_layer_ir]", "id": "e99ac1f7"}, {"cell_type": "code", "execution_count": null, "metadata": {"colab": {"base_uri": "https://localhost:8080/", "height": 348}, "id": "q8NRH8hLuWeV", "outputId": "8b17f90e-b3bc-456f-b6f3-e9cd4f743b2f", "tags": []}, "outputs": [], "source": "result_mask_ir = np.squeeze(np.argmax(res_ir, axis=1)).astype(np.uint8)\nshow_image_and_result(image, result_mask_ir)", "id": "30297b15"}, {"cell_type": "markdown", "metadata": {"id": "w3UUduQEGsQm"}, "source": "## PyTorch Comparison\n\nDo inference on the PyTorch model to verify that the output visually looks the same as the ONNX/IR models.", "id": "91a944cf"}, {"cell_type": "code", "execution_count": null, "metadata": {"colab": {"base_uri": "https://localhost:8080/", "height": 348}, "id": "1l1JtgeV4Wuw", "outputId": "f21c8904-83da-438c-df39-4620bb679554", "tags": []}, "outputs": [], "source": "with torch.no_grad():\n result_torch = model(torch.as_tensor(normalized_input_image).float())\n \nresult_mask_torch = (\n torch.argmax(result_torch, dim=1).squeeze(0).numpy().astype(np.uint8)\n)\nshow_image_and_result(image, result_mask_torch)", "id": "2f53ff0e"}, {"cell_type": "markdown", "metadata": {}, "source": "## Performance comparison\n\nMeasure the time it takes to do inference on five images. This gives an indication of performance. For more accurate benchmarking, use the [OpenVINO benchmark tool](https://github.com/openvinotoolkit/openvino/tree/master/inference-engine/tools/benchmark_tool). Note that many optimizations are possible to improve the performance. ", "id": "cf8d9a35"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": "num_images = 5\n\nstart = time.perf_counter()\nfor _ in range(num_images):\n exec_net_onnx.infer(inputs={input_layer_onnx: input_image})\nend = time.perf_counter()\ntime_onnx = end - start\nprint(\n f\"ONNX model in Inference Engine/CPU: {time_onnx/num_images:.3f} \"\n f\"seconds per image, FPS: {num_images/time_onnx:.2f}\"\n)\n\nstart = time.perf_counter()\nfor _ in range(num_images):\n exec_net_ir.infer(inputs={input_layer_ir: input_image})\nend = time.perf_counter()\ntime_ir = end - start\nprint(\n f\"IR model in Inference Engine/CPU: {time_ir/num_images:.3f} \"\n f\"seconds per image, FPS: {num_images/time_ir:.2f}\"\n)\n\nwith torch.no_grad():\n start = time.perf_counter()\n for _ in range(num_images):\n model(torch.as_tensor(input_image).float())\n end = time.perf_counter()\n time_torch = end - start\nprint(\n f\"PyTorch model on CPU: {time_torch/num_images:.3f} seconds per image, \"\n f\"FPS: {num_images/time_torch:.2f}\"\n)\n\n# Uncomment the following lines for GPU performance stats\n\n# exec_net_onnx_gpu = ie.load_network(network=net_ir, device_name=\"GPU\")\n# start = time.perf_counter()\n# for _ in range(num_images):\n# exec_net_onnx_gpu.infer(inputs={input_layer_onnx: input_image})\n# end = time.perf_counter()\n# time_onnx_gpu = end - start\n# print(\n# f\"ONNX model in Inference Engine/GPU: {time_onnx_gpu/num_images:.3f} \"\n# f\"seconds per image, FPS: {num_images/time_onnx_gpu:.2f}\"\n# )\n\n# exec_net_ir_gpu = ie.load_network(network=net_ir, device_name=\"GPU\")\n# start = time.perf_counter()\n# for _ in range(num_images):\n# exec_net_ir_gpu.infer(inputs={input_layer_ir: input_image})\n# end = time.perf_counter()\n# time_ir_gpu = end - start\n# print(\n# f\"IR model in Inference Engine/GPU: {time_ir_gpu/num_images:.3f} \"\n# f\"seconds per image, FPS: {num_images/time_ir_gpu:.2f}\"\n# )", "id": "096f9973"}, {"cell_type": "markdown", "metadata": {}, "source": "**Show CPU Information for reference**", "id": "3d9b64b7"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": "try:\n import cpuinfo\n\n print(cpuinfo.get_cpu_info()[\"brand_raw\"])\nexcept Exception:\n # OpenVINO installs cpuinfo, but if a different version is installed\n # the command above may not work\n import platform\n\n print(platform.processor())", "id": "eb7622a3"}, {"cell_type": "markdown", "metadata": {}, "source": "# References\n\n* Fastseg: https://github.com/ekzhang/fastseg\n* PIP install openvino-dev: https://github.com/openvinotoolkit/openvino/blob/releases/2021/3/docs/install_guides/pypi-openvino-dev.md\n* OpenVINO ONNX support: https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_ONNX_Support.html\n* Model Optimizer Documentation: https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model_General.html\n", "id": "eee62200"}, {"cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": "", "id": "d3b389c7"}], "metadata": {"colab": {"collapsed_sections": [], "name": "OpenVINO 2021.3 PIP installer - PyTorch Image Segmentation.ipynb", "provenance": [], "toc_visible": true}, "kernelspec": {"display_name": "Python 3", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.8"}}, "nbformat": 4, "nbformat_minor": 5}
libpython3.7-dev
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment