Gist by @xenolinux, created May 19, 2020 — Kubernetes Job manifest for removing a failed Ceph OSD (osd.5) from an OpenShift Container Storage cluster.
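# Job object as retrieved from the API server (note the server-populated
# metadata and status fields). A minimal reading, based on the spec below: an
# init container renders the Ceph client configuration, then the main
# container checks osd.5's status and purges it from the cluster if it is down.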
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: "2020-05-19T17:41:35Z"
  labels:
    app: ceph-toolbox-job
  name: ocs-osd-removal-5
  namespace: openshift-storage
  resourceVersion: "53763253"
  selfLink: /apis/batch/v1/namespaces/openshift-storage/jobs/ocs-osd-removal-5
  uid: 2a7416df-b2d3-4678-a725-1dd48d33b888
spec:
  backoffLimit: 6
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      controller-uid: 2a7416df-b2d3-4678-a725-1dd48d33b888
  template:
    metadata:
      creationTimestamp: null
      labels:
        controller-uid: 2a7416df-b2d3-4678-a725-1dd48d33b888
        job-name: ocs-osd-removal-5
    spec:
      containers:
      - command:
        - bash
        - -c
        - |
          # Note: this job never set FAILED_OSD_ID in the container env, so the
          # original messages printed an empty ID; define it to match the
          # hardcoded osd.5 the ceph commands operate on.
          FAILED_OSD_ID=5
          # Field 5 of `ceph osd tree` output is the OSD's up/down STATUS column.
          osd_status=$(ceph osd tree | grep "osd.${FAILED_OSD_ID} " | awk '{print $5}')
          if [[ "$osd_status" == "up" ]]; then
            echo "OSD ${FAILED_OSD_ID} is up and running. Please check that you entered the correct ID of the failed OSD!"
          else
            echo "OSD ${FAILED_OSD_ID} is down. Proceeding to mark out and purge."
            ceph osd out osd.${FAILED_OSD_ID}
            ceph osd purge osd.${FAILED_OSD_ID} --force --yes-i-really-mean-it
          fi
        image: travisn/ceph:toolbox-job
        imagePullPolicy: IfNotPresent
        name: script
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/ceph
          name: ceph-config
          readOnly: true
      dnsPolicy: ClusterFirst
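      # Init container: Rook's toolbox.sh entrypoint is expected to assemble
      # /etc/ceph/ceph.conf and a client keyring from the mon-endpoints
      # ConfigMap and the rook-ceph-mon admin secret wired in below, so that
      # the ceph CLI in the main container can reach the cluster.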
      initContainers:
      - args:
        - --skip-watch
        command:
        - /usr/local/bin/toolbox.sh
        env:
        - name: ROOK_ADMIN_SECRET
          valueFrom:
            secretKeyRef:
              key: admin-secret
              name: rook-ceph-mon
        image: travisn/ceph:toolbox-job
        imagePullPolicy: IfNotPresent
        name: config-init
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/ceph
          name: ceph-config
        - mountPath: /etc/rook
          name: mon-endpoint-volume
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: data
            path: mon-endpoints
          name: rook-ceph-mon-endpoints
        name: mon-endpoint-volume
      - emptyDir: {}
        name: ceph-config
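# The status stanza below was written by the cluster: the job started at
# 17:41:35Z and completed successfully four seconds later. Strip it (along
# with resourceVersion, selfLink, and uid above) before reusing this manifest.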
status:
  completionTime: "2020-05-19T17:41:39Z"
  conditions:
  - lastProbeTime: "2020-05-19T17:41:39Z"
    lastTransitionTime: "2020-05-19T17:41:39Z"
    status: "True"
    type: Complete
  startTime: "2020-05-19T17:41:35Z"
  succeeded: 1
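For context, a job like this is normally rendered from the ocs-osd-removal template that OpenShift Container Storage ships in the openshift-storage namespace, then inspected and cleaned up with oc. A minimal sketch of that flow, assuming the template name and FAILED_OSD_ID parameter from the OCS 4.x documentation (the job name ocs-osd-removal-5 above suggests the template appends the OSD ID; verify against your cluster):

    # Render the removal job for failed OSD 5 and create it
    oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_ID=5 |
      oc create -n openshift-storage -f -

    # Watch the job and read the script's output
    oc get job -n openshift-storage ocs-osd-removal-5
    oc logs -n openshift-storage -l job-name=ocs-osd-removal-5

    # Delete the job once it reports success
    oc delete job -n openshift-storage ocs-osd-removal-5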