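# OpenShift Template that spawns a Job to remove a failed Ceph OSD from an
# OpenShift Container Storage (OCS) cluster: an init container prepares the Ceph
# client config in /etc/ceph from the rook-ceph-mon secret and mon-endpoints
# ConfigMap, then the main container marks the OSD out and purges it.
# The manifest appears to have been captured with `oc get template -o yaml`, so
# cluster-specific export fields (creationTimestamp, resourceVersion, selfLink,
# uid, ownerReferences) are still present.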
apiVersion: template.openshift.io/v1
kind: Template
metadata:
creationTimestamp: "2020-05-26T17:43:58Z"
name: ocs-osd-removal
namespace: openshift-storage
ownerReferences:
- apiVersion: ocs.openshift.io/v1
blockOwnerDeletion: true
controller: true
kind: StorageCluster
name: example-storagecluster
uid: 4354b22a-7016-46d9-bce3-0e8cc784fac2
resourceVersion: "40207"
selfLink: /apis/template.openshift.io/v1/namespaces/openshift-storage/templates/ocs-osd-removal
uid: 2058c63b-db4c-4a2b-b87e-983132e4d29b
objects:
- apiVersion: batch/v1
kind: Job
metadata:
creationTimestamp: null
labels:
app: ceph-toolbox-job-${FAILED_OSD_ID}
name: ocs-osd-removal-${FAILED_OSD_ID}
namespace: openshift-storage
spec:
template:
metadata:
creationTimestamp: null
spec:
containers:
- command:
- /bin/bash
- -c
- "\nset -x\n\nosd_status=$(ceph osd tree | grep \"osd.${FAILED_OSD_ID}
\" | awk '{print $5}') \nif [[ \"$osd_status\" == \"up\" ]]; then \n echo
\"OSD ${FAILED_OSD_ID} is up and running.\"\n echo \"Please check if
you entered correct ID of failed osd!\"\nelse \n echo \"OSD ${FAILED_OSD_ID}
is down. Proceeding to mark out and purge\"\n ceph osd out osd.${FAILED_OSD_ID}
\n ceph osd purge osd.${FAILED_OSD_ID} --force --yes-i-really-mean-it\nfi"
image: rook/ceph:v1.3.4-4.gb97be10
name: script
resources: {}
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
readOnly: true
initContainers:
- args:
- --skip-watch
command:
- /usr/local/bin/toolbox.sh
env:
- name: ROOK_ADMIN_SECRET
valueFrom:
secretKeyRef:
key: admin-secret
name: rook-ceph-mon
image: rook/ceph:v1.3.4-4.gb97be10
imagePullPolicy: IfNotPresent
name: config-init
resources: {}
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
- mountPath: /etc/rook
name: mon-endpoint-volume
restartPolicy: Never
volumes:
- configMap:
items:
- key: data
path: mon-endpoints
name: rook-ceph-mon-endpoints
name: mon-endpoint-volume
- emptyDir: {}
name: ceph-config
status: {}
parameters:
- name: FAILED_OSD_ID
required: true
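# Usage sketch (the OSD ID and resulting job name below are illustrative,
# assuming the template is installed in openshift-storage and FAILED_OSD_ID=0):
#   oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_ID=0 | oc create -f -
#   oc logs -n openshift-storage job/ocs-osd-removal-0 -f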