@dhruvbird
Created November 12, 2021 19:29
Run the lite interpreter model using the PyTorch C++ API
#include <iostream>
#include <vector>

#include <torch/csrc/jit/mobile/import.h>
#include <torch/csrc/jit/mobile/module.h>

int main() {
  // Load the model that was saved for the lite interpreter (.ptl file).
  auto model = torch::jit::_load_for_mobile("AddTensorsModelOptimized.ptl");

  // Build the input tensors: a zero vector and a ones vector of size 3.
  std::vector<at::IValue> inputs, ret;
  inputs.push_back(at::zeros({3}));
  inputs.push_back(at::ones({3}));

  // Run the model's forward() method and collect the result.
  ret.emplace_back(model.forward(inputs));

  for (const auto& iv : ret) {
    std::cout << iv << std::endl;
  }
}
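The C++ program above expects a model file named AddTensorsModelOptimized.ptl to exist on disk. The gist does not show how that file was produced; the sketch below is one plausible way to create it in Python, assuming (based on the filename) a trivial module that adds two tensors, scripted and then optimized with `torch.utils.mobile_optimizer.optimize_for_mobile` before being saved for the lite interpreter.

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Hypothetical module matching the filename: adds two input tensors.
class AddTensors(torch.nn.Module):
    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return a + b

# Script the module, apply mobile optimizations, and save it in the
# lite-interpreter format that torch::jit::_load_for_mobile() reads.
scripted = torch.jit.script(AddTensors())
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("AddTensorsModelOptimized.ptl")
```

To build the C++ side, link against libtorch (for example via CMake's `find_package(Torch REQUIRED)` and `target_link_libraries(... "${TORCH_LIBRARIES}")`).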