
Running the sample Movidius models on Ubuntu 16.04

  1. Install the OpenVINO Toolkit: https://software.intel.com/en-us/articles/OpenVINO-Install-Linux
  2. Complete the additional installation steps described on the same page: https://software.intel.com/en-us/articles/OpenVINO-Install-Linux
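
After installing, the environment has to be set up in every new shell before the samples will run. A minimal sketch, assuming the default 2018 R5 install location (the symlink path below is how Intel's guide references it):

source /opt/intel/computer_vision_sdk/bin/setupvars.sh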

Compile all the samples first. There's a bash script for this. After that, the compiled demo binaries are under the home directory:

cd ~/inference_engine_samples_build/intel64/Release
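
The build script mentioned above: a sketch, assuming the 2018 R5 layout, where the demo scripts under deployment_tools/demo compile the samples as a side effect (the script name may differ between releases):

cd /opt/intel/computer_vision_sdk/deployment_tools/demo
./demo_squeezenet_download_convert_run.sh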

Weights are in a subfolder of the installation directory:

ls /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/intel_models

Note that for running on the stick (MYRIAD), you need FP16 and not FP32.

You can set a temporary environment variable to easily access the models/weights:

models=/opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/intel_models
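
To check which precisions a given model ships with, list its folder (FP16/FP32 subfolders are the typical layout; the actual contents depend on the release):

ls $models/human-pose-estimation-0001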

Then run, for example, the human pose estimation demo:

./human_pose_estimation_demo -m $models/human-pose-estimation-0001/FP16/human-pose-estimation-0001.xml -d MYRIAD -i /dev/video0

  • -d is the device; it can be MYRIAD (the compute stick), CPU, or GPU (see the example below).
  • -i is the input; instead of the webcam device, an image or video file can be given.
  • -m is the model; note that you reference the XML file, which itself references the binary weights (.bin).
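
For example, the same demo on the CPU with the FP32 weights and a video file as input (the file path here is only a placeholder):

./human_pose_estimation_demo -m $models/human-pose-estimation-0001/FP32/human-pose-estimation-0001.xml -d CPU -i ~/some_video.mp4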

More complicated cool models

Models can be chained, one on top of the other, with one running on the Neural Compute Stick and the others on GPU and CPU:

./interactive_face_detection_demo -i cam \
  -m $models/face-detection-retail-0004/FP16/face-detection-retail-0004.xml -d MYRIAD \
  -m_ag $models/age-gender-recognition-retail-0013/FP32/age-gender-recognition-retail-0013.xml -d_ag CPU \
  -m_hp $models/head-pose-estimation-adas-0001/FP16/head-pose-estimation-adas-0001.xml -d_hp GPU \
  -m_em $models/emotions-recognition-retail-0003/FP16/emotions-recognition-retail-0003.xml -d_em GPU \
  -m_lm $models/facial-landmarks-35-adas-0001/FP16/facial-landmarks-35-adas-0001.xml -d_lm GPU
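
The flags pair up: each additional model gets its own -m_XX weights and -d_XX device. A reduced sketch with only the age/gender model running on top of the face detector:

./interactive_face_detection_demo -i cam \
  -m $models/face-detection-retail-0004/FP16/face-detection-retail-0004.xml -d MYRIAD \
  -m_ag $models/age-gender-recognition-retail-0013/FP32/age-gender-recognition-retail-0013.xml -d_ag CPU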

(There was a gist that explained how to combine the samples, but I cannot find it anymore.)

For the Raspberry Pi it's similar; there are instructions from Intel as well.

Raspberry Pi

Install guide: https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_raspbian.html
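
After installing, the environment and the USB rules for the stick have to be set up, roughly as follows (a sketch following the guide above; the udev script path is from the 2019 Raspbian package and may change):

source /opt/intel/openvino/bin/setupvars.sh
sudo usermod -a -G users "$(whoami)"
sh /opt/intel/openvino/install_dependencies/install_NCS_udev_rules.sh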

The OpenCV sample might not work; the OpenCV installation that ships with OpenVINO 2019 seems to behave strangely.
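
To check which OpenCV build Python actually picks up (the OpenVINO-supplied build typically reports a version with an -openvino suffix, though that suffix is an assumption on my part):

python3 -c "import cv2; print(cv2.__version__)"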

To use a sample, one first has to run cmake, preferably in the home directory:

mkdir build && cd build

cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino/deployment_tools/inference_engine/samples

Then download the weights for the desired model from the model zoo.
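
For example, fetching the FP16 IR files for the pose estimation model with wget. The URL pattern below follows the 2019 R1 Open Model Zoo download location and is an assumption; check the model zoo documentation for the current links:

wget https://download.01.org/opencv/2019/open_model_zoo/R1/models_bin/human-pose-estimation-0001/FP16/human-pose-estimation-0001.xml
wget https://download.01.org/opencv/2019/open_model_zoo/R1/models_bin/human-pose-estimation-0001/FP16/human-pose-estimation-0001.bin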

Then run make in the build dir, e.g. for the pose estimation demo:

make -j2 human_pose_estimation_demo

Then cd armv7l/Release/

Then define $models to point at the downloaded weights, or give the absolute path, and run. Note that the Raspbian build of OpenVINO only ships the MYRIAD plugin, so the device has to be the stick:

./human_pose_estimation_demo -m $models/human-pose-estimation-0001.xml -d MYRIAD -i /dev/video0
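
For completeness, a hypothetical definition of the variable, assuming the .xml/.bin files from the model zoo were saved to ~/models:

models=~/models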