`fdeep::model::predict` takes (and returns) not a single `fdeep::tensor5` but an `std::vector` of them (`fdeep::tensor5s`). This is because a Keras model (at least one created with the functional API) can have multiple input tensors and multiple output tensors. For example:
```python
from keras.models import Model
from keras.layers import Input, Concatenate, Add

inputs = [
    Input(shape=(240, 320, 3)),
    Input(shape=(240, 320, 3))
]
outputs = [
    Concatenate()([inputs[0], inputs[1]]),
    Add()([inputs[0], inputs[1]])
]

model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='mse', optimizer='nadam')
model.save('multi_input_and_output_model.h5', include_optimizer=False)
```
In C++ (after converting the saved `.h5` file to `.json` with frugally-deep's `convert_model.py` script), we then also provide (and receive) two tensors:
```cpp
#include <fdeep/fdeep.hpp>
#include <iostream>

int main()
{
    const auto model = fdeep::load_model("multi_input_and_output_model.json");
    const auto result = model.predict({
        fdeep::tensor5(fdeep::shape5(1, 1, 240, 320, 3), 42),
        fdeep::tensor5(fdeep::shape5(1, 1, 240, 320, 3), 43)
    });
    std::cout << fdeep::show_tensor5s(result) << std::endl;
}
```
Note that the tensors in this vector are the multiple inputs (and outputs) of a single prediction. They are not processed independently, and this is not a mechanism for batch prediction.
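As a sanity check of what the two outputs contain, here is a plain-NumPy sketch (independent of fdeep, and only an illustration of the `Concatenate` and `Add` layer semantics) using the constant-filled example tensors from above:

```python
import numpy as np

# The two constant-filled input tensors from the C++ example (values 42 and 43).
a = np.full((240, 320, 3), 42.0)
b = np.full((240, 320, 3), 43.0)

# Concatenate joins along the channel axis: the shape becomes (240, 320, 6),
# with the first three channels holding 42 and the last three holding 43.
concat_out = np.concatenate([a, b], axis=-1)

# Add sums element-wise: every entry is 42 + 43 = 85.
add_out = a + b

print(concat_out.shape)   # (240, 320, 6)
print(add_out[0, 0, 0])   # 85.0
```

So `result[0]` in the C++ example holds the concatenated tensor and `result[1]` the element-wise sum, in the order the outputs were passed to `Model`.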