Tony Reina (tonyreina), GitHub gists
keras inference timeline (last active June 18, 2022)
How to add a TensorFlow timeline to Keras inference
import json
import numpy as np
import tensorflow as tf
from tensorflow.python.client import timeline

def create_model():
    # Just a simple CNN
    # Alternatively, just load a saved model:
    # model = tf.keras.models.load_model("mymodel.hdf5")
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax")])
    return model
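The gist preview is truncated before the actual timeline capture. A minimal sketch of the usual TF1-style pattern the title describes (under TF2 this needs `tf.compat.v1` and legacy graph-mode Keras; the model and `x_batch` below are placeholders, not from the gist):

```python
import numpy as np
import tensorflow as tf
from tensorflow.python.client import timeline

tf.compat.v1.disable_eager_execution()  # timeline capture needs graph-mode sessions

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation="softmax")])

run_options = tf.compat.v1.RunOptions(trace_level=tf.compat.v1.RunOptions.FULL_TRACE)
run_metadata = tf.compat.v1.RunMetadata()

sess = tf.compat.v1.keras.backend.get_session()
x_batch = np.random.rand(4, 28, 28).astype(np.float32)
preds = sess.run(model.output,
                 feed_dict={model.input: x_batch},
                 options=run_options,
                 run_metadata=run_metadata)

# Convert the collected step stats to a Chrome trace (open in chrome://tracing)
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open("timeline.json", "w") as f:
    f.write(trace.generate_chrome_trace_format())
```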
## TO BUILD CONTAINER:
## Make sure you have downloaded the Linux version of OpenVINO: https://software.intel.com/en-us/openvino-toolkit/choose-download/free-download-linux
## Place the downloaded OpenVINO installer in the same directory as this Dockerfile.
## docker build -t 3d_unet_decathlon --build-arg HTTP_PROXY=${HTTP_PROXY} --build-arg HTTPS_PROXY=${HTTPS_PROXY} --build-arg NO_PROXY=${NO_PROXY} --build-arg http_proxy=${http_proxy} --build-arg https_proxy=${https_proxy} --build-arg no_proxy=${no_proxy} .
## TO RUN BUILT CONTAINER:
## For CPU - `docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -it 3d_unet_decathlon`
FROM ubuntu:16.04
ENV APP_DIR /app
ADD . ${APP_DIR}
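The Dockerfile is truncated after the `ADD` line. A hypothetical continuation for the silent install that the header comments describe; the archive name and `silent.cfg` flags below match the 2019-era Linux installer and may differ for other OpenVINO releases:

```dockerfile
# Assumption: the downloaded installer tarball sits next to this Dockerfile
ARG OPENVINO_TARBALL=l_openvino_toolkit_p_2019.1.144.tgz
RUN apt-get update && apt-get install -y --no-install-recommends \
        cpio sudo lsb-release && \
    rm -rf /var/lib/apt/lists/*
RUN cd ${APP_DIR} && \
    tar -xzf ${OPENVINO_TARBALL} && \
    cd l_openvino_toolkit_p_* && \
    sed -i 's/decline/accept/g' silent.cfg && \
    ./install.sh --silent silent.cfg
```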
save_and_convert.sh (last active January 13, 2020)
Ways to save and convert TF to OpenVINO
There are several options here. For Keras, load the Keras model and use sess = keras.backend.get_session() to get the TF session tied to the Keras model (model.output and model.input are the output and input ops).
# Option 1: Saved TensorFlow model
tf.saved_model.simple_save(session, export_dir, inputs={"x": x, "y": y}, outputs={"z": z})
mo_tf.py --saved_model_dir export_dir

# Option 2: Saved TensorFlow checkpoint
saver = tf.train.Saver()
saver.save(sess, "./model.ckpt")
# Note: a checkpoint must first be frozen into a .pb (e.g. with freeze_graph)
# before the Model Optimizer can convert it (mo_tf.py --input_model frozen.pb)
openvino_env.sh (last active January 13, 2020)
Intel OpenVINO setup environment
source /opt/intel/openvino/bin/setupvars.sh
alias downloader='python3 ${INTEL_OPENVINO_DIR}/deployment_tools/model_downloader/downloader.py'
alias mo='${INTEL_OPENVINO_DIR}/deployment_tools/model_optimizer/mo.py'
FROM intelaipg/openvino-model-server
# Create this working directory in container
# All commands after this are relative to WORKDIR
WORKDIR /usr/openvino
# Copy everything from local machine directory into container
COPY . .
# Install h5py within container
RUN pip3 install h5py
import os
import nibabel as nib
import numpy as np
data = np.ones((32, 32, 15, 100), dtype=np.int16) # dummy data in numpy matrix
img = nib.Nifti1Image(data, np.eye(4)) # Save axis for data (just identity)
img.header.get_xyzt_units()
img.to_filename(os.path.join('build','test4d.nii.gz')) # Save as NiBabel file
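The np.eye(4) passed to Nifti1Image above is the affine that maps voxel indices to world coordinates; with the identity affine the mapping is one-to-one. A quick sketch of that mapping (pure NumPy, independent of NiBabel):

```python
import numpy as np

affine = np.eye(4)  # identity affine: world coords == voxel indices
voxel = np.array([2, 3, 4, 1])  # homogeneous voxel index (i, j, k, 1)
world = affine @ voxel  # matrix-vector product applies the affine
print(world[:3])  # -> [2. 3. 4.]
```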
print_bench.sh (last active September 23, 2018)
Print benchmarks
#!/bin/bash
# Prints FPS from the logs emitted by the tf_cnn_bench_6nets scripts.
# Usage: ./print_fps_tf_cnn_bench_6nets.sh 96
echo -e "\n Net BZ FPS \n"
num_cores=`grep -c ^processor /proc/cpuinfo` # Number of logical cores
for network in googlenet inception3 resnet50 resnet152 vgg16 ; do
  for bz in 1 32 64 96 128; do
    fps=$(grep "total images/sec:" net_${network}_bz_${bz}_numcores_${num_cores}.log | cut -d ":" -f2 | xargs)
    echo "$network $bz $fps"
  done
done
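The grep/cut/xargs pipeline above just pulls the number after "total images/sec:" from each log. An equivalent in Python (a hypothetical helper, pure stdlib, not part of the gist):

```python
import re

def parse_fps(log_text):
    """Return the last 'total images/sec' value found in a benchmark log, or None."""
    matches = re.findall(r"total images/sec:\s*([\d.]+)", log_text)
    return float(matches[-1]) if matches else None

sample = "step 100\ntotal images/sec: 123.45\n"
print(parse_fps(sample))  # -> 123.45
```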
run_bench.sh (last active September 23, 2018)
TF CNN Benchmarking Scripts
#!/bin/bash
#sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
date > start_benchmark.txt
# Get number of sockets (set inter-op threads to the number of sockets)
inter=`grep -i "physical id" /proc/cpuinfo | sort -u | wc -l`
# Get number of cores for intra-op threads; note that ^processor counts
# logical processors, which includes hyperthreads, not just physical cores
num_cores=`grep -c ^processor /proc/cpuinfo`
for network in googlenet inception3 resnet50 resnet152 vgg16 ; do
for bz in 1; do
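The loop body is cut off here. The usual tf_cnn_benchmarks invocation looks roughly like the sketch below (flag names from the tf_cnn_benchmarks CLI; the log filename matches what print_bench.sh greps for). Shown as a dry run that only echoes the command, with the loop variables filled in by hand:

```shell
#!/bin/bash
inter=2        # e.g. number of sockets
num_cores=16   # e.g. number of cores
network=resnet50
bz=64
cmd="python tf_cnn_benchmarks.py --device=cpu --model=${network} \
  --batch_size=${bz} --num_inter_threads=${inter} --num_intra_threads=${num_cores}"
echo "${cmd} > net_${network}_bz_${bz}_numcores_${num_cores}.log"
```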