This script is designed to work with `yq` to take arguments directly from a CephCluster CRD (`cluster.yaml`) and zap all matching disks on the hosts.
1. Requires named nodes under `spec.storage.nodes`, each with its own device config
2. Assumes each node is configured with `devicePathFilter`
3. Assumes the `devicePathFilter` regex is matched against device names under `/dev/disk/by-path`
4. !!DANGER!! Assumes that your filters DO NOT select ANY disks that aren't for Ceph.
To expand on #4: you may be relying on the behaviour of Ceph (or Rook?) that safely skips disks which already have a filesystem on them, which lets you use a broader device filter in your Rook `cluster.yaml`. If you do rely on that, this script will RUIN your system and DESTROY YOUR DATA. If you don't 100% follow what I'm saying, DO NOT use this script.

@todo: Check for Ceph remnants on disk before executing the cleanup (a rough sketch of such a check follows the note below).
These assumptions aren't strictly necessary; they just fit my environment when I needed this script.
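
On the @todo above, here is a minimal sketch of what a remnant check might look like, assuming the same filter convention as the CRD. The script name `check-remnants.sh` and the exact commands are my own choice, not part of `cleanup.sh`; it only reports what `lsblk` and `wipefs` see on each matched disk (e.g. `ceph_bluestore` or `LVM2_member`) so you can eyeball the targets before zapping anything:

```bash
#!/usr/bin/env bash
# check-remnants.sh <devicePathFilter>
# Hypothetical helper: report filesystem/LVM signatures on every disk matched
# by the filter, so you can confirm they really were Ceph OSDs before zapping.
set -euo pipefail

filter="$1"

for path in /dev/disk/by-path/*; do
    # Same convention as the CRD: the regex is matched against names under /dev/disk/by-path
    [[ "$(basename "$path")" =~ $filter ]] || continue
    dev="$(readlink -f "$path")"
    echo "== ${path} -> ${dev}"
    # FSTYPE shows e.g. ceph_bluestore or LVM2_member on a disk Ceph has used
    lsblk -no NAME,FSTYPE "$dev"
    # wipefs with no options only lists signatures, it does not erase anything
    wipefs "$dev"
done
```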
This is what my `cluster.yaml` `spec.storage.nodes` looks like, for reference:

    nodes:
      - name: "dl380p-g8-01"
        devicePathFilter: "pci-0000:02:00.0-sas-|nvme-1"
      - name: "dl380p-g8-02"
        devicePathFilter: "pci-0000:0a:00.0-sas-exp0x500a098000d7223f-|nvme-1"
      - name: "dl380p-g8-03"
        devicePathFilter: "pci-0000:02:00.0-sas-0x3001438025a76544-lun-|nvme-1"
      - name: "dl380p-g8-04"
        devicePathFilter: "pci-0000:02:00.0-sas-0x5"
Base script:

    cleanup.sh <hostname> <devicePathFilter>
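
The zapping itself happens in `cleanup.sh`, which isn't reproduced here. A rough sketch of the shape I mean is below; it assumes passwordless SSH to each host as a user who can `sudo`, and uses `sgdisk`/`dd` to wipe, similar to the disk-reclaim steps in the Rook teardown docs. Treat it as illustrative, not a drop-in copy of the real script:

```bash
#!/usr/bin/env bash
# cleanup.sh <hostname> <devicePathFilter>
# Illustrative sketch only. Assumes passwordless SSH and sudo on the target host.
set -euo pipefail

host="$1"
filter="$2"

# printf %q quotes the filter (it usually contains '|') for the remote shell
ssh "$host" "export FILTER=$(printf '%q' "$filter"); bash -s" <<'EOF'
set -euo pipefail
for path in /dev/disk/by-path/*; do
    # Skip partition entries; only zap whole disks
    [[ "$path" == *-part* ]] && continue
    [[ "$(basename "$path")" =~ $FILTER ]] || continue
    dev="$(readlink -f "$path")"
    echo "Zapping ${dev} (${path})"
    # Wipe the partition table and the start of the disk; LVM/device-mapper
    # state left behind by ceph-volume may need separate cleanup
    sudo sgdisk --zap-all "$dev"
    sudo dd if=/dev/zero of="$dev" bs=1M count=100 oflag=direct,dsync
done
EOF
```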
With `yq`, to pull directly from `cluster.yaml`:

    cat cluster.yaml | yq e '.spec.storage.nodes[]| .name + " " + .devicePathFilter' - | xargs -n2 ./cleanup.sh
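
For reference, with the example nodes above the `yq` expression prints one `<name> <devicePathFilter>` pair per line, and `xargs -n2` hands each pair to the script as its two arguments:

```text
dl380p-g8-01 pci-0000:02:00.0-sas-|nvme-1
dl380p-g8-02 pci-0000:0a:00.0-sas-exp0x500a098000d7223f-|nvme-1
dl380p-g8-03 pci-0000:02:00.0-sas-0x3001438025a76544-lun-|nvme-1
dl380p-g8-04 pci-0000:02:00.0-sas-0x5
```

That's equivalent to running `./cleanup.sh dl380p-g8-01 'pci-0000:02:00.0-sas-|nvme-1'` and so on, once per node.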