On a Specific Node
core@alex-k8s-2 ~ $ vi /mnt/data2/index.html
core@alex-k8s-2 ~ $ cat /mnt/data2/index.html
'Hello from Kubernetes Local storage'
PV,PVC,Status
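The index.html above lives on one node's local disk, so serving it through Kubernetes needs a node-pinned PersistentVolume plus a claim. A minimal sketch, assuming made-up names (local-pv2, local-pvc2), an assumed 1Gi size and local-storage class; only the path /mnt/data2 and the node alex-k8s-2 come from the notes:

```shell
# Sketch: local PV pinned to alex-k8s-2 plus a matching PVC.
cat > local-pv2.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv2                  # assumed name
spec:
  capacity:
    storage: 1Gi                   # assumed size
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage  # assumed class
  local:
    path: /mnt/data2               # path from the notes above
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - alex-k8s-2             # node from the notes above
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc2                 # assumed name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
EOF
# kubectl create -f local-pv2.yaml && kubectl get pv,pvc
```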
Scrum Master and Scrum Slaves,
Yellow all walls with notes,
Step aside and see with glee,
The horror on the janitor's face.
And from what his broom has spared,
Pick a note to work on,
Stand-up straight before the master
To report or descope in glee.
#!/bin/bash
# Generically install rook and test it out
: "${ROOK_BRANCH:=release-1.1}"
TICK_CHAR='>'
mark_done () {
    file_done=$1
    date '+%s' > "$file_done"
    echo 'done' >> "$file_done"
}
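mark_done stamps a step as finished: epoch seconds on the first line, the literal marker 'done' on the second. A quick standalone check (the function is repeated here so the snippet runs on its own; /tmp/rook-step.done is an arbitrary example path):

```shell
#!/bin/bash
# Repeated from the script above so this runs standalone.
mark_done () {
    file_done=$1
    date '+%s' > "$file_done"
    echo 'done' >> "$file_done"
}

mark_done /tmp/rook-step.done
cat /tmp/rook-step.done   # line 1: epoch seconds, line 2: done
```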
Status: Downloaded newer image for nvidia/cuda:10.0-base
Wed Oct 9 08:11:31 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.78       Driver Version: 410.78       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    Off  | 00000000:01:00.0  On |                  N/A |
| N/A   36C    P8     9W /  N/A |    306MiB /  8119MiB |      0%      Default |
Doc - https://github.com/operator-framework/operator-sdk/blob/master/doc/helm/user-guide.md
Chart - https://hub.helm.sh/charts/bitnami/cassandra/3.4.3
operator-sdk new cassandra-helm-operator --type=helm --helm-chart=cassandra --helm-chart-repo=https://charts.bitnami.com/bitnami --verbose
Deploy the CRD:
kubectl --insecure-skip-tls-verify --kubeconfig ~/keys/ee1-kubeconfig.config create -f deploy/crds/charts.helm.k8s.io_cassandras_crd.yaml
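With the CRD in place, the operator deployment and a custom resource still need to be applied. A hedged sketch: the apiVersion/kind follow the scaffold's CRD filename above (charts.helm.k8s.io_cassandras), but the CR name and the spec override are assumptions, not from the notes:

```shell
# Sketch of a Cassandra CR for the helm-based operator; any key under
# spec: overrides the corresponding value in the bitnami chart.
cat > cassandra-cr.yaml <<'EOF'
apiVersion: charts.helm.k8s.io/v1alpha1   # matches the CRD filename above
kind: Cassandra
metadata:
  name: example-cassandra                 # assumed name
spec:
  cluster:
    replicaCount: 1                       # example chart-value override
EOF
# kubectl --kubeconfig ~/keys/ee1-kubeconfig.config apply -f deploy/operator.yaml
# kubectl --kubeconfig ~/keys/ee1-kubeconfig.config apply -f cassandra-cr.yaml
```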
Useful link: https://kubevirt.io/2019/How-To-Import-VM-into-Kubevirt.html
(works the same whether the cluster is minikube or a real one)
2108 minikube ip
2109 no_proxy="127.0.0.1,192.168.39.157"
2110 kubectl get pods
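The history entries above add the minikube VM's IP to no_proxy so kubectl talks to it directly instead of going through an HTTP proxy. A one-step sketch; the IP is hard-coded from the notes, but on a live machine it would come from `minikube ip`:

```shell
# Normally: MINIKUBE_IP=$(minikube ip); hard-coded here from the notes.
MINIKUBE_IP=192.168.39.157
export no_proxy="127.0.0.1,${MINIKUBE_IP}"
echo "$no_proxy"
# kubectl get pods   # now bypasses the proxy for the minikube API server
```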
https://gvisor.dev/docs/user_guide/quick_start/kubernetes/ Using Containerd
You can also set up Kubernetes nodes to run pods in gVisor using the containerd CRI runtime and the gvisor-containerd-shim. You can use either the io.kubernetes.cri.untrusted-workload annotation or RuntimeClass to run Pods with runsc. Instructions are in the gvisor.dev quick-start guide linked above.
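A sketch of the RuntimeClass route. The handler name runsc is what the gVisor containerd shim registers; the class name "gvisor", the pod name, and the nginx image are assumptions for illustration (node.k8s.io/v1beta1 matches the Kubernetes versions in these notes):

```shell
# RuntimeClass pointing at the runsc handler, plus a pod that opts in.
cat > gvisor-runtimeclass.yaml <<'EOF'
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor          # assumed class name
handler: runsc          # must match the containerd runtime handler
EOF
cat > untrusted-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-gvisor    # assumed name
spec:
  runtimeClassName: gvisor   # schedule this pod under runsc
  containers:
  - name: nginx
    image: nginx
EOF
# kubectl apply -f gvisor-runtimeclass.yaml -f untrusted-pod.yaml
```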
[centos@azuretest-1 root]$ kubectl get nodes -o wide
NAME          STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
azuretest-1   Ready    master   40d   v1.17.0   192.168.0.26                 CentOS Linux 7 (Core)   3.10.0-957.27.2.el7.x86_64   docker://1.13.1
azuretest-2   Ready             40d   v1.17.0   192.168.0.6                  CentOS Linux 7 (Core)   3.10.0-957.27.2.el7.x86_64   docker://1.13.1
[root@green--1 ~]# cat cluster-green.yaml
#################################################################################################################
# Define the settings for the rook-ceph cluster with settings that should only be used in a test environment.
# A single filestore OSD will be created in the dataDirHostPath.
# For example, to create the cluster:
#   kubectl create -f common.yaml
#   kubectl create -f operator.yaml
#   kubectl create -f cluster-test.yaml
#################################################################################################################
apiVersion: ceph.rook.io/v1
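The paste cuts off after the apiVersion line. A hedged sketch of the rest of a test-only CephCluster, following the comment block above (single directory-backed OSD in dataDirHostPath); the rook release-1.1 cluster-test.yaml example is the authoritative reference, and the image tag here is an assumption:

```shell
# Sketch of a minimal test CephCluster spec; values assumed, not from the notes.
cat > cluster-green-sketch.yaml <<'EOF'
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.4       # assumed tag for the release-1.1 era
  dataDirHostPath: /var/lib/rook   # the single filestore OSD lives here
  mon:
    count: 1
    allowMultiplePerNode: true     # test-only setting
  storage:
    useAllNodes: true
    useAllDevices: false
    directories:
    - path: /var/lib/rook
EOF
```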
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  namespace: green
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
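`serviceName: cassandra` implies a headless Service of that name in the same namespace, which gives each StatefulSet pod a stable DNS identity. A sketch; the port is Cassandra's default CQL port, an assumption since the paste above cuts off before any port definitions:

```shell
# Headless Service backing the StatefulSet's serviceName.
cat > cassandra-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: cassandra      # must match the StatefulSet's serviceName
  namespace: green
  labels:
    app: cassandra
spec:
  clusterIP: None      # headless: stable per-pod DNS, no load-balanced VIP
  selector:
    app: cassandra
  ports:
  - name: cql
    port: 9042         # Cassandra's default CQL port (assumed)
EOF
# kubectl create -f cassandra-svc.yaml
```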
________________
> pulseaudio_ps_do
alex   1953  0.4  0.2 1332100 16788 ?      S<l  19:13  0:03 /usr/bin/pulseaudio --start --log-target=syslog
alex   5319  0.0  0.0   11468  1008 pts/0  S+   19:25  0:00 grep pulseaudio
________________
> which pulseaudio
/usr/bin/pulseaudio
________________
> pidof pulseaudio
1953
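pulseaudio_ps_do in the transcript looks like a local alias or function, not a standard command. A guess at its definition; since the transcript's output includes the grep process itself, it is plain ps-pipe-grep with no [p]ulseaudio bracket trick:

```shell
# Hypothetical definition of the transcript's pulseaudio_ps_do helper.
pulseaudio_ps_do() {
    ps aux | grep pulseaudio
}
```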