@u8sand
Created October 10, 2023 19:48
A slightly easier to use tensorflow-serving -- auto-construct models.config

README

docker.io/tensorflow/serving requires that you build a config file when working with more than one model. I find this a bit silly, so this image builds it for you, assuming you just keep all the models in their own directories under /models:

  • /models/model-a/1/
  • /models/model-b/1/
  • ...
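Given a layout like the one above, the entrypoint script generates a /tmp/models.config in TensorFlow Serving's protobuf text format, roughly like this (indentation cleaned up for readability; model names are taken from the directory names):

```
model_config_list: {
  config: {
    name: "model-a",
    base_path: "/models/model-a",
    model_platform: "tensorflow"
  },
  config: {
    name: "model-b",
    base_path: "/models/model-b",
    model_platform: "tensorflow"
  }
}
```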
Dockerfile

FROM tensorflow/serving
ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["bash", "/entrypoint.sh"]
CMD []
entrypoint.sh

#!/bin/bash
# Build a models.config from the directories found under /models,
# then hand off to the stock tensorflow-serving entrypoint.
(
echo "model_config_list: {"
FIRST=true
# one config block per top-level directory in /models
while IFS= read -r MODEL; do
  if [ "$FIRST" = true ]; then
    FIRST=false
  else
    echo ","
  fi
  echo "  config: {"
  echo "    name: \"${MODEL}\","
  echo "    base_path: \"/models/${MODEL}\","
  echo "    model_platform: \"tensorflow\""
  echo "  }"
done < <(find /models -mindepth 1 -maxdepth 1 -type d -printf "%f\n")
echo "}"
) > /tmp/models.config
exec /usr/bin/tf_serving_entrypoint.sh --model_config_file=/tmp/models.config "$@"
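If you want to sanity-check the config-generation logic outside the container, here is a sketch that runs the same loop against a throwaway directory tree (the temp directory stands in for /models, and the model names are made up):

```shell
#!/bin/bash
# Simulate the config generation against a throwaway directory tree.
MODELS=$(mktemp -d)   # hypothetical stand-in for /models
mkdir -p "$MODELS/model-a/1" "$MODELS/model-b/1"

CONFIG=$(
echo "model_config_list: {"
FIRST=true
while IFS= read -r MODEL; do
  if [ "$FIRST" = true ]; then FIRST=false; else echo ","; fi
  echo "  config: {"
  echo "    name: \"${MODEL}\","
  echo "    base_path: \"/models/${MODEL}\","
  echo "    model_platform: \"tensorflow\""
  echo "  }"
done < <(find "$MODELS" -mindepth 1 -maxdepth 1 -type d -printf "%f\n" | sort)
echo "}"
)

echo "$CONFIG"
rm -rf "$MODELS"
```

Note the `| sort`: `find` makes no ordering guarantee, so sorting keeps the output deterministic for inspection; the container script itself doesn't need it.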