
@Rexhaif
Created January 9, 2021 18:05
tensorflow on  master [?] via 🐍 v2.7.16
❯ adb shell taskset f0 /data/local/tmp/benchmark_model \
--graph=/data/local/tmp/albert_base.tflite \
--num_threads=1
STARTING!
Log parameter values verbosely: [0]
Num threads: [1]
Graph: [/data/local/tmp/albert_base.tflite]
#threads used for CPU inference: [1]
Loaded model /data/local/tmp/albert_base.tflite
INFO: Initialized TensorFlow Lite runtime.
The input model file size (MB): 11.983
Initialized session in 20.283ms.
Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
count=2 first=339304 curr=266325 min=266325 max=339304 avg=302814 std=36489
Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.
count=50 first=266601 curr=266931 min=265837 max=268116 avg=266473 std=419
Inference timings in us: Init: 20283, First inference: 339304, Warmup (avg): 302814, Inference (avg): 266473
Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.
Peak memory footprint (MB): init=2.96875 overall=61.168
tensorflow on  master [?] via 🐍 v2.7.16 took 14s
❯ adb shell taskset f0 /data/local/tmp/benchmark_model \
--graph=/data/local/tmp/l2_h128.tflite \
--num_threads=1
STARTING!
Log parameter values verbosely: [0]
Num threads: [1]
Graph: [/data/local/tmp/l2_h128.tflite]
#threads used for CPU inference: [1]
Loaded model /data/local/tmp/l2_h128.tflite
The input model file size (MB): 4.57718
Initialized session in 1.458ms.
Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
INFO: Initialized TensorFlow Lite runtime.
count=20 first=63622 curr=22670 min=22159 max=63622 avg=25619.4 std=8852
Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.
count=50 first=25277 curr=22719 min=22022 max=29556 avg=23208.5 std=1549
Inference timings in us: Init: 1458, First inference: 63622, Warmup (avg): 25619.4, Inference (avg): 23208.5
Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.
Peak memory footprint (MB): init=1.01562 overall=50.4883
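For a rough comparison, dividing the two "Inference (avg)" figures reported above (266473 us for albert_base, 23208.5 us for l2_h128) gives the single-thread speedup of the smaller model; a quick one-liner (awk used here just as a calculator):

```shell
# Single-thread speedup of l2_h128 over albert_base,
# computed from the "Inference (avg)" values in the logs above (microseconds).
awk 'BEGIN { printf "l2_h128 is %.1fx faster than albert_base\n", 266473 / 23208.5 }'
# prints: l2_h128 is 11.5x faster than albert_base
```

Both runs were pinned to the same cores via `taskset f0` and used a single thread, so the averages are directly comparable.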