@jimlloyd
jimlloyd/CMakeLists.txt
Last active Mar 6, 2021
Loading a Tensorflow.js model from C++/WASM


This is a spike to figure out a method to load a Tensorflow.js model from C++, compiled to WASM with Emscripten's em++.

It is my first working sample that calls JavaScript from C++. I am aware that better ways to do this exist, but I wanted to produce a simple, small working application and then move on from it.

The code here uses a single JavaScript global object, spikeContext, created as an emscripten::val object in the first line of C++ code. It gives me the willies to pollute the global namespace, so I create just one global object and then add properties to it for all other global objects created.

The three EM_JS functions loadTensorflow, locateModel, and loadModel are a somewhat arbitrary division of the execution. I wanted confirmation that the global object persists across these calls (even though, of course, global objects persist).

I separated locateModel from loadModel just to isolate the async load step into one function.

The model referenced here, "tf-wasm/e1431.tfjs.model", is not included in this gist, but any Tensorflow.js GraphModel should work. A LayersModel should also work by changing the tf.loadGraphModel() call to tf.loadLayersModel().

Loading NPM modules

I was initially confused about what would be required to make '@tensorflow/tfjs-node' or any other NPM module available to the C++ code. But of course, it is all relatively automatic: all that is necessary is to have a node_modules directory in the same directory as the compiled application (JavaScript and WASM).

Next step

I have an abstract class Predictor with a pure virtual function Predict(...), and an implementation Predictor_cc that uses Tensorflow_cc. I want to implement a Predictor_js class that uses Tensorflow.js for the implementation.

add_executable(spike spike.cpp)
target_compile_options(spike PRIVATE "SHELL:--bind")
target_link_options(spike
  PRIVATE "--bind"
  PRIVATE "SHELL:-s ASYNCIFY"
  PRIVATE "SHELL:-s ASYNCIFY_IMPORTS=[loadModel]"
)
#include <emscripten.h>
#include <emscripten/bind.h>
#include <emscripten/val.h>
#include <string>
using namespace emscripten;
EM_JS(void, loadTensorflow, (), {
  console.log(spikeContext);
  const path = require('path');
  const tf = require('@tensorflow/tfjs-node');
  const workDir = process.cwd();
  spikeContext = { path, tf, workDir };
  console.log({workDir});
});
EM_JS(void, locateModel, (const char* basePath, int basePathLen), {
  const { path } = spikeContext;
  const base = UTF8ToString(basePath, basePathLen);
  const modelDirPath = path.relative(process.cwd(), base);
  const modelPath = path.join(modelDirPath, "model.json");
  const modelUri = 'file://' + modelPath;
  console.log({basePath, base, modelPath, modelUri});
  spikeContext = { ...spikeContext, modelPath, modelUri };
  console.log(Object.keys(spikeContext));
});
EM_JS(void, loadModel, (), {
  const { tf, modelUri } = spikeContext;
  console.log({modelUri});
  // Returning the handleAsync result lets Asyncify suspend the WASM caller
  // until the promise resolves (loadModel is listed in ASYNCIFY_IMPORTS).
  return Asyncify.handleAsync(async () => {
    const model = await tf.loadGraphModel(modelUri);
    console.log(model);
    spikeContext = { ...spikeContext, model };
  });
});
int main()
{
  val::global().set("spikeContext", val::object());
  loadTensorflow();
  const std::string basePath{"tf-wasm/e1431.tfjs.model"};
  locateModel(basePath.c_str(), basePath.length());
  loadModel();
}