So first, let's create our pod.
$ kubectl create -f https://gist.githubusercontent.com/elsonrodriguez/60e53e2479dc3146447b/raw/3b4d703855500c829ae58d70e386b4e41e4f7996/pod.yaml
pod "gputest" created
Now let's see it go:
windows:
  - name: kube-rolling
    root: ~/
    layout: main-horizontal
    panes:
      - commands:
          - export update_command="kubectl rolling-update <rc-name> --update-period=1s --image=<image_url>"
          - clear
          - read -p "$update_command"
          - $update_command
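Outside tmux, the confirm-then-run pattern those panes use can be sketched like this; the echo below is a stand-in for the cluster-specific rolling-update command, which is an assumption for illustration:

```shell
# Sketch of the pane's confirm-then-run pattern. The echo stands in for the
# real (cluster-specific) kubectl rolling-update invocation.
update_command="echo kubectl rolling-update would run here"
# read -p "$update_command"   # uncomment to pause and show the command first
$update_command               # word-splits and runs the stored command
```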
#!/bin/bash
# Build an image tagged with the short git SHA, push it, then roll it out.
REGISTRY_URL=domain.com/name/project
RC_NAME=replication-controller
GITVER=$(git rev-parse -q HEAD | cut -c1-8)
DOCKER_TAG=$REGISTRY_URL:$GITVER
docker build -t "$DOCKER_TAG" .
docker push "$DOCKER_TAG"
kubectl rolling-update "$RC_NAME" --image="$DOCKER_TAG"
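The tag derivation in that script can be checked locally. This sketch substitutes a fixed SHA for `git rev-parse -q HEAD`; the registry path is the script's own placeholder, not a real registry:

```shell
# Derive the image tag the same way the script does, from a fixed SHA.
REGISTRY_URL=domain.com/name/project
SHA=3b4d703855500c829ae58d70e386b4e41e4f7996   # stand-in for git rev-parse -q HEAD
GITVER=$(echo "$SHA" | cut -c1-8)
DOCKER_TAG=$REGISTRY_URL:$GITVER
echo "$DOCKER_TAG"   # domain.com/name/project:3b4d7038
```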
# Get all nodes running pods with the specified label.
kubectl get pods -l "key=value" -o template --template='{{range .items}}{{.spec.nodeName}}{{"\n" | printf "%v"}}{{end}}'
#KERNEL=="js?", ATTRS{name}=="HJZ Mayflash Wiimote PC Adapter", NAME="input/js100"
#SUBSYSTEM=="input", KERNEL=="js[0-9]*", ATTRS{name}=="HJZ Mayflash Wiimote PC Adapter", ENV{ID_INPUT_JOYSTICK}=="?*", MODE="0000", ENV{ID_INPUT_JOYSTICK}="", RUN+="/bin/rm %E{DEVNAME}"
SUBSYSTEM=="input", ATTRS{name}=="HJZ Mayflash Wiimote PC Adapter", ENV{ID_INPUT_JOYSTICK}=="?*", RUN+="/bin/rm %E{DEVNAME}", ENV{ID_INPUT_JOYSTICK}=""
SUBSYSTEM=="input", ATTRS{name}=="HJZ Mayflash Wiimote PC Adapter", KERNEL=="js[0-9]*", RUN+="/bin/rm %E{DEVNAME}", ENV{ID_INPUT_JOYSTICK}=""
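Assuming the active rules are saved somewhere like /etc/udev/rules.d/99-mayflash.rules (the filename is an assumption), they can be applied without a reboot:

```shell
# Install the rules file (name is hypothetical) and reload udev.
sudo cp 99-mayflash.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger   # re-evaluate the rules against existing devices
```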
scp ~/Downloads/*.deb user@host:
scp ~/Downloads/etcd*.tar.gz user@host:
scp ~/oss/kubernetes/_output/dockerized/bin/linux/amd64/k* user@host:
This is the procedure for terminating Kubernetes nodes that have a bad kubelet. It effectively fences nodes that are unable to unmount/detach their own storage.
Early testing shows that this may not work if a node is merely rebooted in response to a kubelet/system issue. Therefore the action taken in response to a problem on a Kubernetes node MUST BE to terminate it, not reboot it.
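A sketch of what that fencing action might look like; `fence_node` and `terminate_instance` are hypothetical names (the latter stands in for whatever cloud API call actually terminates the instance), not the author's actual tooling:

```shell
# Hypothetical fencing helper: remove the node from the Kubernetes API so its
# pods reschedule, then terminate (never reboot) the underlying instance.
fence_node() {
  local node="$1"
  kubectl delete node "$node"   # remove the node object from the cluster
  terminate_instance "$node"    # placeholder for your cloud's terminate call
}
```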
Configure the load balancer like this:
"HealthCheck": {
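The stanza above is cut off. For shape only, a classic AWS ELB HealthCheck block typically looks like the following; every value here is an illustrative assumption, not the original config:

```json
"HealthCheck": {
    "Target": "HTTP:80/healthz",
    "Interval": 10,
    "Timeout": 5,
    "HealthyThreshold": 2,
    "UnhealthyThreshold": 6
}
```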
Summarizing 16 Failures:
[Fail] [k8s.io] EmptyDir volumes [It] should support (non-root,0777,default) [Conformance]
/Users/eorodrig/oss/kubernetes-elsonrodriguez/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1617
[Fail] [k8s.io] EmptyDir volumes [It] should support (root,0644,tmpfs) [Conformance]
/Users/eorodrig/oss/kubernetes-elsonrodriguez/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1617
[Fail] [k8s.io] EmptyDir volumes [It] volume on default medium should have the correct mode [Conformance]
/Users/eorodrig/oss/kubernetes-elsonrodriguez/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1617

Summarizing 20 Failures:
[Fail] [k8s.io] ConfigMap [It] should be consumable from pods in volume as non-root [Conformance]
/Users/elsonrodriguez/oss/kubernetes-elsonrodriguez/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1620
[Fail] [k8s.io] EmptyDir volumes [It] should support (non-root,0777,default) [Conformance]
/Users/elsonrodriguez/oss/kubernetes-elsonrodriguez/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1620
[Fail] [k8s.io] EmptyDir volumes [It] should support (root,0644,tmpfs) [Conformance]
/Users/elsonrodriguez/oss/kubernetes-elsonrodriguez/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1620