On the specific node, create a test file on the local storage path that will back the PersistentVolume:
core@alex-k8s-2 ~ $ vi /mnt/data2/index.html
core@alex-k8s-2 ~ $ cat /mnt/data2/index.html
'Hello from Kubernetes Local storage'
PV, PVC, Status
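A local-storage PV/PVC pair serving that path might look like the following sketch; the names, capacity, and storage class are assumptions for illustration, not values taken from the cluster above.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-data2            # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data2              # the directory populated above
  nodeAffinity:                   # local PVs must be pinned to the node owning the disk
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - alex-k8s-2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc-data2           # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
```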
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<link rel="shortcut icon" href="#" />
<title>Echo Example</title>
<script src="https://code.jquery.com/jquery-1.9.1.min.js"></script>
<script type="text/javascript" src="compiled.js"></script>
<script type="text/javascript">
""" GRPC Client for SSD TF Serving Model""" | |
from __future__ import division | |
__author__ = "Alex Punnen" | |
__date__ = "March 2019" | |
import grpc | |
import numpy | |
import tensorflow as tf | |
import time |
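The fragment above cuts off before the request loop. A minimal sketch of the latency-measurement pattern such a client typically follows; the `predict` function below is a stand-in for the real `stub.Predict(request, timeout)` gRPC call and is not from the original gist:

```python
import time

def predict(request):
    """Stand-in for stub.Predict(request, timeout); swap in the real gRPC call."""
    return {"detections": []}

def timed_predictions(requests):
    """Run each request, recording per-call latency in seconds."""
    latencies = []
    for request in requests:
        start = time.time()
        predict(request)
        latencies.append(time.time() - start)
    return latencies

latencies = timed_predictions([{"image": None}] * 3)
print(len(latencies))  # → 3
```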
Scrum Master and Scrum Slaves,
Yellow all walls with notes,
Step aside and see with glee,
The horror on the janitor's face.
And from what his broom has spared,
Pick a note to work on,
Stand-up straight before the master
To report or descope in glee.
#!/bin/bash
# Generically install rook and test it out
: "${ROOK_BRANCH:=release-1.1}"
TICK_CHAR='>'
mark_done () {
  file_done=$1
  date '+%s' > "$file_done"
  echo 'done' >> "$file_done"
}
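The `mark_done` helper just stamps a file with the epoch time and the word `done` so later runs can skip completed steps. The same behaviour as a Python sketch (the marker path is an example):

```python
import os
import tempfile
import time

def mark_done(file_done):
    """Write the current epoch timestamp, then 'done', mirroring the bash helper."""
    with open(file_done, "w") as f:
        f.write(str(int(time.time())) + "\n")
        f.write("done\n")

path = os.path.join(tempfile.gettempdir(), "step1.done")
mark_done(path)
print(open(path).read().splitlines()[-1])  # → done
```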
#Server
docker run --net=host --runtime=nvidia -it --rm -p 8900:8500 -p 8901:8501 -v /home/alex/coding/IPython_neuralnet/models:/models -e MODEL_NAME=retinanet tensorflow/serving:latest-gpu --rest_api_port=0 --enable_batching=true --model_config_file=/models/model_configs/retinanet.json
#Client
docker run -it --runtime=nvidia --net=host -v /home/alex/coding/IPython_neuralnet:/coding --rm alexcpn/tfserving-keras-retinanet-dev-gpu
# To run TF Client
unset http_proxy
unset https_proxy
root@drone-OMEN:/coding/tfserving_client# python retinanet_client.py -num_tests=1 -server=127.0.0.1:8500 -batch_size=1 -img_path='../examples/google1.jpg'
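The server above only serves gRPC on 8500 (REST is disabled with `--rest_api_port=0`), but when REST is enabled on 8501 a predict request body takes the JSON shape sketched below. Standard-library only; the instance values are made up:

```python
import json

def build_predict_body(instances):
    """Build the JSON body TensorFlow Serving's REST predict endpoint expects:
    {"instances": [...]} with one entry per input example."""
    return json.dumps({"instances": instances})

body = build_predict_body([[0.0, 1.0], [2.0, 3.0]])
print(body)
```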
#!/usr/bin/env python2.7
# Simple TFServing example; Based on
# https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/mnist_client.py
# Added simpler mnist loading parts and removed some complexity
"""A client that talks to tensorflow_model_server loaded with mnist model.
The client downloads test images of mnist data set, queries the service with
such test images to get predictions, and calculates the inference error rate.
Typical usage example:
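The inference error rate the docstring mentions is simply mismatches over total predictions. A minimal sketch, with illustrative label lists:

```python
def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the true labels."""
    wrong = sum(1 for p, y in zip(predictions, labels) if p != y)
    return wrong / float(len(labels))

print(error_rate([7, 2, 1, 0], [7, 2, 1, 4]))  # → 0.25 (one mismatch out of four)
```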
Status: Downloaded newer image for nvidia/cuda:10.0-base
Wed Oct 9 08:11:31 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.78       Driver Version: 410.78       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    Off  | 00000000:01:00.0  On |                  N/A |
| N/A   36C    P8     9W /  N/A |    306MiB /  8119MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
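To consume that output programmatically rather than by eye, a sketch that pulls the memory-usage figures out of a table row with a regex; the sample row is copied from the output above:

```python
import re

def parse_memory(line):
    """Extract (used_mib, total_mib) from an nvidia-smi table row, or None."""
    m = re.search(r"(\d+)MiB\s*/\s*(\d+)MiB", line)
    return (int(m.group(1)), int(m.group(2))) if m else None

row = "| N/A   36C    P8     9W /  N/A |    306MiB /  8119MiB |      0%      Default |"
print(parse_memory(row))  # → (306, 8119)
```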
Doc - https://github.com/operator-framework/operator-sdk/blob/master/doc/helm/user-guide.md
Chart - https://hub.helm.sh/charts/bitnami/cassandra/3.4.3
operator-sdk new cassandra-helm-operator --type=helm --helm-chart=cassandra --helm-chart-repo=https://charts.bitnami.com/bitnami --verbose
Deploy the CRD:
kubectl --insecure-skip-tls-verify --kubeconfig ~/keys/ee1-kubeconfig.config create -f deploy/crds/charts.helm.k8s.io_cassandras_crd.yaml
https://gvisor.dev/docs/user_guide/quick_start/kubernetes/ - Using Containerd
You can also set up Kubernetes nodes to run pods in gVisor using the containerd CRI runtime and the gvisor-containerd-shim. You can use either the io.kubernetes.cri.untrusted-workload annotation or a RuntimeClass to run Pods with runsc. You can find instructions here.
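The RuntimeClass route might look like the following sketch; it assumes the gVisor containerd shim is registered under the handler name `runsc`, and the pod name is hypothetical (`node.k8s.io/v1beta1` matched Kubernetes ~1.17):

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc                  # must match the containerd runtime handler name
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-nginx         # hypothetical pod
spec:
  runtimeClassName: gvisor      # run this pod under runsc
  containers:
    - name: nginx
      image: nginx
```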
[centos@azuretest-1 root]$ kubectl get nodes -o wide
NAME          STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
azuretest-1   Ready    master   40d   v1.17.0   192.168.0.26                 CentOS Linux 7 (Core)   3.10.0-957.27.2.el7.x86_64   docker://1.13.1
azuretest-2   Ready             40d   v1.17.0   192.168.0.6                  CentOS Linux 7 (Core)   3.10.0-957.27.2.el7.x86_64   docker://1.13.1