Skip to content

Instantly share code, notes, and snippets.

@pinzhenx
Created May 3, 2019 17:06
Show Gist options
  • Save pinzhenx/4a1aa06b7750bca5a8b4b5a49245ef01 to your computer and use it in GitHub Desktop.
Save pinzhenx/4a1aa06b7750bca5a8b4b5a49245ef01 to your computer and use it in GitHub Desktop.
onnx sample
<html>
<script src='../dist/webml-polyfill.js'></script>
<script src='third_party/protobuf.min.js'></script>
<script src='util/base.js'></script>
<script src='util/onnx/onnx.js'></script>
<script src='util/onnx/OnnxModelUtils.js'></script>
<script src='util/onnx/OnnxModelImporter.js'></script>
<script>
(async () => {
const res = await fetch('path/to/model.onnx');
const bytes = await res.arrayBuffer();
const onnxModel = onnx.ModelProto.decode(new Uint8Array(bytes));
const model = new OnnxModelImporter({
rawModel: onnxModel,
backend: 'WebML',
prefer: 'fast',
});
await model.createCompiledModel();
const inputs = [new Float32Array(224*224*3)]; // input tensors in NHWC format
const outputs = [new Float32Array(1000)]; // placeholder for outputs
const start = performance.now();
await model.compute(inputs, outputs); // outputs will be ready once fulfills
const inferenceTime = performance.now() - start;
console.log(outputs[0]);
})();
</script>
</html>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment