
Last active Jan 23, 2022
[Ceph] #instructions

From the IRC channel:

17:17:39 <dirtwash> wowas: no
17:17:51 <dirtwash> wowas: out + purge, no further steps needed to remove
17:18:03 <dirtwash> purge command does everything
18:50:43 <devster> anyone can figure out what's wrong with this ceph-osd (in docker with ceph-ansible) that keeps crashing? It's a single OSD that keeps going up and down every 3 minutes almost exactly...
18:55:44 <dirtwash> devster: just purge it and redo, had this few times, could be any number of known bugs
18:55:59 <dirtwash> devster: or ask on ML
18:56:03 <dirtwash> there wont be an answer here
18:58:20 <devster> thanks dirtwash never did a purge with ceph-ansible
19:00:51 <dirtwash> devster: just run the purge command
19:00:58 <dirtwash> u only need ceph-ansible to add the OSD again
19:01:03 <dirtwash> it will most likely even use the same ID
19:01:10 <dirtwash> stop osd, purge osd
19:01:12 <dirtwash> readd osd
19:01:18 <dirtwash> last step with ansible
19:01:32 <dirtwash> oh and dd the disk before u readd it
19:01:35 <dirtwash> the first 2-3GB
19:01:38 <dirtwash> zero it
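The sequence dirtwash describes above (stop the OSD, out + purge, zero the first 2-3 GB of the disk, re-add with ceph-ansible) can be sketched as a dry-run script. The OSD id `12` and device `/dev/sdX` are placeholders; the function only prints the commands so you can review them before running anything against a live cluster:

```shell
#!/bin/sh
# Dry-run sketch of the purge-and-recreate flow from the chat above.
# It prints the command sequence for a given OSD id and backing device
# instead of executing it; substitute real values before use.
purge_plan() {
    osd_id="$1"   # placeholder OSD id
    disk="$2"     # placeholder backing device
    cat <<EOF
systemctl stop ceph-osd@${osd_id}
ceph osd out ${osd_id}
ceph osd purge ${osd_id} --yes-i-really-mean-it
dd if=/dev/zero of=${disk} bs=1M count=3072
EOF
}

# Example: plan the removal of OSD 12 backed by /dev/sdX.
purge_plan 12 /dev/sdX
```

The `dd` step zeroes the first 3 GB so stale LVM/BlueStore labels are not picked up when ceph-ansible re-creates the OSD; as noted in the chat, the last step (re-adding the OSD) is done by re-running the ceph-ansible playbook, which will usually reuse the freed id. Since Luminous, `ceph osd purge` rolls the older `ceph osd crush remove`, `ceph auth del`, and `ceph osd rm` steps into one command.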
To get an interactive shell inside the ceph/daemon container (host networking, devices, and Ceph state mounted through), e.g. for running the commands above in a ceph-ansible/docker deployment:

docker run -it --entrypoint /bin/bash \
    -v /dev:/dev \
    -v /var/lib/ceph:/var/lib/ceph:z \
    -v /etc/ceph:/etc/ceph:z \
    -v /var/run/udev/:/var/run/udev/ \
    -v /var/run/ceph:/var/run/ceph:z \
    --security-opt apparmor:unconfined \
    -v /run/lvm/:/run/lvm/ \
    --net=host --privileged=true --pid=host \
    ceph/daemon:latest-nautilus