Jared Watts (jbw976)
We're hiring at Upbound! https://upbound.io/jobs
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
spec:
  clusterIP: None
  ports:
  - port: 9000
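Since this Service sets clusterIP: None, it is headless: DNS returns the individual pod IPs rather than a single virtual IP. A minimal in-cluster check, assuming the rook-minio namespace from the kubectl output below (the dns-test pod name is arbitrary):

# Resolve the headless Service from inside the cluster; expect one A record
# per minio pod rather than a single ClusterIP.
kubectl -n rook-minio run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup minio.rook-minio.svc.cluster.local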
> kubectl -n rook-minio get pod -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: 2018-11-04T03:48:38Z
    generateName: my-store-
    labels:
      app: minio
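The pods carry the app: minio label shown above, so a narrower query than dumping full YAML is:

# List only the minio pods by label instead of dumping every pod as YAML.
kubectl -n rook-minio get pod -l app=minio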
jbw976 / slack-chat.txt
Last active August 15, 2018 00:30
Rook #1501 Slack discussion on repro in integration tests
Slack link (which may be archived due to 10k message limit): https://rook-io.slack.com/archives/C764K425D/p1533770804000083
we have kubelet logs: https://jenkins.rook.io/blue/organizations/jenkins/rook%2Frook/detail/PR-2010/3/artifacts
and it will even be useful to debug the statefulset issue in this build
travisn [4:33 PM]
in the kubelet log:
```
22:58:24.839204 14247 desired_state_of_world_populator.go:311] Failed to add volume "rookpvc" (specName: "pvc-875ab62f-9b5e-11e8-b0eb-0af5d80321b6") for pod "875d980e-9b5e-11e8-b0eb-0af5d80321b6" to desiredStateOfWorld. err=failed to get Plugin from volumeSpec for volume "pvc-875ab62f-9b5e-11e8-b0eb-0af5d80321b6" err=no volume plugin matched
```
hmm, why isn’t the flex volume plugin found?
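A note for readers following this thread: the kubelet only discovers flexvolume drivers under its --volume-plugin-dir, so one plausible check on the affected node is whether the Rook driver was deployed there at all (default path shown; the vendor~driver directory name varies by Rook release):

# Default kubelet flexvolume plugin directory (overridden by --volume-plugin-dir).
# The Rook agent installs its driver in a <vendor>~<driver> subdirectory; an
# empty or missing entry would explain "no volume plugin matched".
ls -l /usr/libexec/kubernetes/kubelet-plugins/volume/exec/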
jbw976 / OSD removal workaround
Last active August 22, 2021 22:34
OSD removal workaround
Remove the k8s-nvme-01.acme.org node from Rook orchestration:

1. Delete the OSD replica set, which also stops/kills the OSD pod:
   kubectl -n rook delete replicaset rook-ceph-osd-k8s-nvme-01.acme.org
2. Remove the entry for the k8s-nvme-01.acme.org node from the orchestration status map:
   kubectl -n rook edit cm rook-ceph-osd-orchestration-status
3. Delete all of the node's OSD config maps, rook-ceph-osd-XX-fs-backup (XX = 93 through 101; see the one-pass sketch after this list). For example:
   kubectl -n rook delete cm rook-ceph-osd-93-fs-backup
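Step 3 can be done in one pass; a minimal sketch using the OSD IDs 93 through 101 listed above:

# Delete every per-OSD config map for the removed node (IDs from the list above).
for id in $(seq 93 101); do
  kubectl -n rook delete cm "rook-ceph-osd-${id}-fs-backup"
done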
jbw976 / StorageClass
Created March 7, 2018 03:21
rook-agent format volume
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: 2018-01-08T14:11:51Z
  labels:
    app: cluster-base-storage
    chart: storage-0.1.0
    heritage: Tiller
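The is-default-class annotation above is what marks this StorageClass as the cluster default; a hedged example of setting it on a live object (SC_NAME is a placeholder, since the actual metadata.name is truncated in this preview):

# Mark an existing StorageClass as the cluster default. SC_NAME is hypothetical;
# the real name is cut off in the YAML above.
SC_NAME=my-storageclass
kubectl patch storageclass "${SC_NAME}" \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'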
#!/bin/bash -e
# Usage: <script> <image_name_pattern> [vernum]

image_name_pattern=$1
if [[ -z ${image_name_pattern} ]]; then
  echo "image_name_pattern required"
  exit 1
fi

vms="core-01 core-02"
vernum=$2
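The gist preview is truncated here; a purely hypothetical sketch of the kind of loop that would follow over $vms (the loop body is an assumption, not the original script's logic):

# Hypothetical continuation: visit each VM and act on matching images.
for vm in ${vms}; do
  echo "processing images matching '${image_name_pattern}' (version ${vernum:-unset}) on ${vm}"
done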
jbw976 / 01 rook-agent failed to delete PV pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
Last active January 17, 2018 22:26
rook-agent failed to delete PV pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
for useful gist title, see https://github.com/isaacs/github/issues/194