@oborchers
Created April 3, 2021 12:05
import numpy as np
import onnxruntime as rt

# We start by working with CUDA only; CPU is kept as a fallback provider.
ONNX_PROVIDERS = ["CUDAExecutionProvider", "CPUExecutionProvider"]
opt = rt.SessionOptions()
sess = rt.InferenceSession(str(model_pth), opt, providers=ONNX_PROVIDERS)

# `model_pth`, `tokenizer`, and `span` are assumed to be defined earlier.
# Tokenize a single span and add a batch dimension to every input,
# since the ONNX model expects 2-D (batch, seq_len) arrays.
model_input = tokenizer.encode_plus(span)
model_input = {name: np.atleast_2d(value) for name, value in model_input.items()}

onnx_result = sess.run(None, model_input)
print(onnx_result[0].shape)
print(onnx_result[1].shape)
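The dict comprehension above is the key batching step: `tokenizer.encode_plus` returns plain Python lists for a single sequence, while ONNX Runtime expects each input feed as a 2-D `(batch, seq_len)` array. A minimal sketch of that step using only NumPy, with a hypothetical stand-in for the tokenizer output (the keys and values below are illustrative, not from the original gist):

```python
import numpy as np

# Stand-in for tokenizer.encode_plus(span): a dict of Python lists
# (token ids and attention mask) for one tokenized sequence.
model_input = {
    "input_ids": [101, 2023, 2003, 102],
    "attention_mask": [1, 1, 1, 1],
}

# np.atleast_2d promotes each 1-D list to shape (1, seq_len),
# i.e. a batch of size one, as the ONNX session requires.
model_input = {name: np.atleast_2d(value) for name, value in model_input.items()}

for name, value in model_input.items():
    print(name, value.shape)  # each feed is now (1, 4)
```

To run a real batch, the same comprehension works unchanged as long as every sequence in the nested lists has equal length (i.e. is padded).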