@Dobiasd
Last active December 12, 2018 14:48
fdeep::model::predict takes (and returns) not a single fdeep::tensor5 but a std::vector of them (fdeep::tensor5s). That is because in Keras a model (at least one created with the functional API) can have multiple input and output tensors. For example:

from keras.models import Model
from keras.layers import Input, Concatenate, Add

inputs = [
    Input(shape=(240, 320, 3)),
    Input(shape=(240, 320, 3))
]

outputs = [
    Concatenate()([inputs[0], inputs[1]]),
    Add()([inputs[0], inputs[1]])
]

model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='mse', optimizer='nadam')
model.save('multi_input_and_output_model.h5', include_optimizer=False)

Now in C++ (after converting the saved .h5 file to .json with frugally-deep's keras_export/convert_model.py script), we would also provide (and receive) two tensors:

#include <fdeep/fdeep.hpp>
int main()
{
    const auto model = fdeep::load_model("multi_input_and_output_model.json");
    const auto result = model.predict({
        fdeep::tensor5(fdeep::shape5(1, 1, 240, 320, 3), 42),
        fdeep::tensor5(fdeep::shape5(1, 1, 240, 320, 3), 43)
        });
    std::cout << fdeep::show_tensor5s(result) << std::endl;
}

The two input tensors are not processed independently, and the vector is not a batch dimension; together they form the inputs for a single forward pass of the model.
