Below are instructions for running inception_client without having to compile it with Bazel.
We need the generated files predict_pb2.py and prediction_service_pb2.py, together with their dependencies.
$ git clone https://github.com/tobegit3hub/deep_recommend_system
$ cd deep_recommend_system/python_predict_client
$ curl -JLO https://raw.githubusercontent.com/tensorflow/serving/master/tensorflow_serving/example/inception_client.py
$ sed -i 's/from tensorflow_serving.apis import predict_pb2/import predict_pb2/g' inception_client.py
$ sed -i 's/from tensorflow_serving.apis import prediction_service_pb2/import prediction_service_pb2/g' inception_client.py
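The two sed commands above turn the package-qualified imports into plain local imports of the generated modules, so the script finds them in the current directory. As a sanity check, the same rewrite can be sketched in Python (the sample lines are illustrative, not the whole file):

```python
import re

# Illustration of the rewrite the sed commands perform: package-qualified
# imports become plain imports of the locally generated modules.
lines = [
    "from tensorflow_serving.apis import predict_pb2",
    "from tensorflow_serving.apis import prediction_service_pb2",
]
rewritten = [
    re.sub(r"from tensorflow_serving\.apis import (\w+)", r"import \1", line)
    for line in lines
]
print(rewritten)  # ['import predict_pb2', 'import prediction_service_pb2']
```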
$ pip install enum34 futures mock numpy grpcio tensorflow
If pip reports that it cannot find a matching tensorflow distribution, you can try installing the wheel directly:
$ pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.2.0-cp27-none-linux_x86_64.whl
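The wheel filename above encodes the interpreter and platform it targets (cp27, linux_x86_64). A small sketch, using only the standard library, to print the local values for comparison against those tags:

```python
import platform
import sys

# The wheel above is tagged cp27-none-linux_x86_64, i.e. CPython 2.7 on
# 64-bit Linux; print the local interpreter/platform to compare.
print("python: %d.%d" % (sys.version_info[0], sys.version_info[1]))
print("system: %s" % platform.system())    # the wheel expects Linux
print("machine: %s" % platform.machine())  # the wheel expects x86_64
```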
$ curl -o 'labrador.jpg' 'http://cdn2-www.dogtime.com/assets/uploads/gallery/labrador-retriever-dog-breed-pictures/labrador-retriever-dog-pictures-7.jpg'
$ python inception_client.py --server=tensorflow-serving:9000 --image=labrador.jpg
NOTE: A tensorflow-serving server has to be running with the hostname tensorflow-serving and port 9000. To launch such a server, you can check the tensorflow-inception Helm chart.
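If the client hangs or fails, a quick TCP probe can confirm whether the serving endpoint accepts connections before debugging further. The helper below is a hedged sketch using only the standard library; the hostname and port are the ones assumed in the note above:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except (socket.error, OSError):
        return False

# Hostname and port match the serving endpoint described in the note above.
print(can_connect("tensorflow-serving", 9000))
```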