Last active
July 14, 2020 23:48
value: ceph version 14.2.8-59 nautilus
~/src/go/src/github.com/rook/rook$ oc get pod
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-6nlx9 3/3 Running 0 9m18s
csi-cephfsplugin-j527v 3/3 Running 0 9m29s
csi-cephfsplugin-provisioner-7d56b8d897-6npc6 5/5 Running 0 9m53s
csi-cephfsplugin-provisioner-7d56b8d897-zf6wn 5/5 Running 0 9m40s
csi-cephfsplugin-pxkkr 3/3 Running 0 9m51s
csi-rbdplugin-lqnwr 3/3 Running 0 9m8s
csi-rbdplugin-provisioner-6fd6bb9d64-lk888 5/5 Running 0 9m56s
csi-rbdplugin-provisioner-6fd6bb9d64-s74rs 5/5 Running 0 9m40s
csi-rbdplugin-rwwks 3/3 Running 0 9m55s
csi-rbdplugin-xtqqd 3/3 Running 0 9m19s
lib-bucket-provisioner-79f6dd6b99-4q86q 1/1 Running 0 54m
noobaa-core-0 1/1 Running 0 9m49s
noobaa-db-0 1/1 Running 0 10m
noobaa-endpoint-85f8df86ff-67bql 1/1 Running 0 10m
noobaa-endpoint-85f8df86ff-r7f5b 1/1 Running 0 9m52s
noobaa-operator-7df984b565-tk6gq 1/1 Running 0 11m
ocs-operator-56f48d87cc-mbbgc 0/1 Running 0 10m
rook-ceph-crashcollector-ip-10-0-153-80-7f4c7988cd-cplqv 1/1 Running 0 10m
rook-ceph-crashcollector-ip-10-0-163-129-78f7c497bf-c28wf 1/1 Running 0 10m
rook-ceph-crashcollector-ip-10-0-192-223-5557457c54-znls8 1/1 Running 0 10m
rook-ceph-drain-canary-20433ae96167361ad4f8a0e291621279-59kmfjf 1/1 Running 0 78s
rook-ceph-drain-canary-24e8837781bcef2b4c9c2ac42526b7e0-d9g88ll 1/1 Running 0 7m17s
rook-ceph-drain-canary-848b5990ad110f8bbf8fd8d20868a7da-7698j2q 1/1 Running 0 4m19s
rook-ceph-drain-canary-a294e8b7dede2b5f2e2569a329cf9de3-6cmbdbb 1/1 Running 0 51m
rook-ceph-drain-canary-bd64d177e6dfe1547e7a948faaf43c54-86tvmtx 1/1 Running 0 51m
rook-ceph-drain-canary-db1a3c486eb427cb085dc9d3c6aa05b9-86zgd4q 1/1 Running 0 51m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-58b5cf4d4g6kj 1/1 Running 0 9m48s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-755c6d79qxzzz 1/1 Running 0 8m58s
rook-ceph-mgr-a-6dbc78d59d-bfzrw 1/1 Running 0 7m38s
rook-ceph-mon-a-7f579494c4-9lg4m 1/1 Running 0 9m59s
rook-ceph-mon-b-5988d4dc5d-md64l 1/1 Running 0 8m38s
rook-ceph-mon-c-69c7f59cc8-4r887 1/1 Running 0 8m8s
rook-ceph-operator-5794648ccd-5nxx9 1/1 Running 0 11m
rook-ceph-osd-0-9d44cfd49-mc5nh 1/1 Running 0 4m19s
rook-ceph-osd-1-6597749b-wt4zf 1/1 Running 0 7m17s
rook-ceph-osd-2-85889798dd-h5fwh 1/1 Running 0 78s
rook-ceph-osd-3-7587f98d4-np2fg 1/1 Running 0 13m
rook-ceph-osd-4-b96474f9d-ns8jb 1/1 Running 0 2m55s
rook-ceph-osd-5-6f8c4cc7b7-8zq4n 1/1 Running 0 5m48s
rook-ceph-osd-prepare-ocs-deviceset-0-0-zlgwx-6m4mb 0/1 Completed 0 52m
rook-ceph-osd-prepare-ocs-deviceset-0-1-s8r62-klwv9 0/1 Completed 0 14m
rook-ceph-osd-prepare-ocs-deviceset-1-0-89xs5-thsrm 0/1 Completed 0 52m
rook-ceph-osd-prepare-ocs-deviceset-1-1-m9q56-bbhf7 0/1 Completed 0 14m
rook-ceph-osd-prepare-ocs-deviceset-2-0-zp9nh-dpxzz 0/1 Completed 0 52m
rook-ceph-osd-prepare-ocs-deviceset-2-1-zrk97-kkqgf 0/1 Completed 0 14m
rook-ceph-tools-6c67d65646-n5cvh 1/1 Running 0 50m
~/src/go/src/github.com/rook/rook$ oc rsh rook-ceph-tools-6c67d65646-n5cvh
sh-4.4# ceph -s
  cluster:
    id:     bdbfe8d3-887c-4aa3-8564-c631b3d0aafc
    health: HEALTH_WARN
            1 MDSs report slow metadata IOs
            1 osds down
            1 host (1 osds) down
            Degraded data redundancy: 4201/57402 objects degraded (7.319%), 205 pgs degraded
            13 slow ops, oldest one blocked for 56 sec, daemons [osd.0,osd.1,osd.2] have slow ops.
  services:
    mon: 3 daemons, quorum a,b,c (age 8m)
    mgr: a(active, since 7m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 6 osds: 5 up (since 8s), 6 in (since 13m); 217 remapped pgs
    task status:
      scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
  data:
    pools:   3 pools, 288 pgs
    objects: 19.13k objects, 74 GiB
    usage:   226 GiB used, 12 TiB / 12 TiB avail
    pgs:     30.903% pgs not active
             4201/57402 objects degraded (7.319%)
             21976/57402 objects misplaced (38.284%)
             92 active+recovery_wait+degraded+remapped
             69 activating+degraded+remapped
             28 active+recovery_wait+degraded
             24 active+clean
             20 remapped+peering
             14 active+recovery_wait+remapped
             10 active+remapped+backfill_wait
             8  active+recovery_wait+undersized+degraded+remapped
             7  active+recovery_wait
             7  active+undersized+degraded
             5  active+undersized
             3  active+recovery_wait+undersized+remapped
             1  active+recovering+degraded+remapped
  io:
    client:   16 MiB/s wr, 0 op/s rd, 216 op/s wr
    recovery: 40 MiB/s, 9 objects/s
sh-4.4# ceph -s
  cluster:
    id:     bdbfe8d3-887c-4aa3-8564-c631b3d0aafc
    health: HEALTH_WARN
            1 MDSs report slow metadata IOs
            Degraded data redundancy: 2735/57420 objects degraded (4.763%), 162 pgs degraded, 11 pgs undersized
            10 slow ops, oldest one blocked for 189 sec, daemons [osd.0,osd.1,osd.2] have slow ops.
  services:
    mon: 3 daemons, quorum a,b,c (age 8s)
    mgr: a(active, since 9m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 6 osds: 6 up (since 110s), 6 in (since 15m); 217 remapped pgs
    task status:
      scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
  data:
    pools:   3 pools, 288 pgs
    objects: 19.14k objects, 74 GiB
    usage:   226 GiB used, 12 TiB / 12 TiB avail
    pgs:     2735/57420 objects degraded (4.763%)
             25531/57420 objects misplaced (44.464%)
             135 active+recovery_wait+degraded+remapped
             57 active+recovery_wait+remapped
             35 active+clean
             24 active+recovery_wait+degraded
             13 active+remapped+backfill_wait
             12 active+recovery_wait
             9  active+recovery_wait+undersized+remapped
             2  active+recovery_wait+undersized+degraded+remapped
             1  active+recovering+degraded+remapped
  io:
    client:   964 KiB/s rd, 52 MiB/s wr, 241 op/s rd, 206 op/s wr
    recovery: 56 MiB/s, 13 objects/s
sh-4.4# exit
~/src/go/src/github.com/rook/rook$ oc get pod
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-6nlx9 3/3 Running 0 12m
csi-cephfsplugin-j527v 3/3 Running 0 12m
csi-cephfsplugin-provisioner-7d56b8d897-6npc6 5/5 Running 0 12m
csi-cephfsplugin-provisioner-7d56b8d897-zf6wn 5/5 Running 0 12m
csi-cephfsplugin-pxkkr 3/3 Running 0 12m
csi-rbdplugin-lqnwr 3/3 Running 0 11m
csi-rbdplugin-provisioner-6fd6bb9d64-lk888 5/5 Running 0 12m
csi-rbdplugin-provisioner-6fd6bb9d64-s74rs 5/5 Running 0 12m
csi-rbdplugin-rwwks 3/3 Running 0 12m
csi-rbdplugin-xtqqd 3/3 Running 0 12m
lib-bucket-provisioner-79f6dd6b99-4q86q 1/1 Running 0 57m
noobaa-core-0 1/1 Running 0 12m
noobaa-db-0 1/1 Running 0 13m
noobaa-endpoint-85f8df86ff-67bql 1/1 Running 0 13m
noobaa-endpoint-85f8df86ff-r7f5b 1/1 Running 0 12m
noobaa-operator-7df984b565-tk6gq 1/1 Running 0 13m
ocs-operator-56f48d87cc-mbbgc 0/1 Running 0 13m
rook-ceph-crashcollector-ip-10-0-153-80-7f4c7988cd-cplqv 1/1 Running 0 13m
rook-ceph-crashcollector-ip-10-0-163-129-64884b675c-pqss2 1/1 Running 0 55s
rook-ceph-crashcollector-ip-10-0-192-223-5557457c54-znls8 1/1 Running 0 13m
rook-ceph-drain-canary-20433ae96167361ad4f8a0e291621279-59kmfjf 1/1 Running 0 4m4s
rook-ceph-drain-canary-24e8837781bcef2b4c9c2ac42526b7e0-d9g88ll 1/1 Running 0 10m
rook-ceph-drain-canary-848b5990ad110f8bbf8fd8d20868a7da-7698j2q 1/1 Running 0 7m5s
rook-ceph-drain-canary-a294e8b7dede2b5f2e2569a329cf9de3-6cmbdbb 1/1 Running 0 54m
rook-ceph-drain-canary-bd64d177e6dfe1547e7a948faaf43c54-86tvmtx 1/1 Running 0 54m
rook-ceph-drain-canary-db1a3c486eb427cb085dc9d3c6aa05b9-86zgd4q 1/1 Running 0 54m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-58b5cf4d4g6kj 1/1 Running 0 12m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-755c6d79qxzzz 1/1 Running 0 11m
rook-ceph-mgr-a-6dbc78d59d-bfzrw 1/1 Running 0 10m
rook-ceph-mon-a-6f7d5b88cf-ggnb9 1/1 Running 0 55s
rook-ceph-mon-b-5988d4dc5d-md64l 1/1 Running 0 11m
rook-ceph-mon-c-69c7f59cc8-4r887 1/1 Running 0 10m
rook-ceph-operator-5794648ccd-5nxx9 1/1 Running 0 13m
rook-ceph-osd-0-9d44cfd49-mc5nh 1/1 Running 0 7m5s
rook-ceph-osd-1-6597749b-wt4zf 1/1 Running 0 10m
rook-ceph-osd-2-85889798dd-h5fwh 1/1 Running 0 4m4s
rook-ceph-osd-3-6f78d76c79-jg6gc 1/1 Running 0 2m38s
rook-ceph-osd-4-b96474f9d-ns8jb 1/1 Running 0 5m41s
rook-ceph-osd-5-6f8c4cc7b7-8zq4n 1/1 Running 0 8m34s
rook-ceph-osd-prepare-ocs-deviceset-0-0-zlgwx-6m4mb 0/1 Completed 0 54m
rook-ceph-osd-prepare-ocs-deviceset-0-1-s8r62-klwv9 0/1 Completed 0 17m
rook-ceph-osd-prepare-ocs-deviceset-1-0-89xs5-thsrm 0/1 Completed 0 54m
rook-ceph-osd-prepare-ocs-deviceset-1-1-m9q56-bbhf7 0/1 Completed 0 17m
rook-ceph-osd-prepare-ocs-deviceset-2-0-zp9nh-dpxzz 0/1 Completed 0 54m
rook-ceph-osd-prepare-ocs-deviceset-2-1-zrk97-kkqgf 0/1 Completed 0 17m
rook-ceph-tools-6c67d65646-n5cvh 1/1 Running 0 53m
~/src/go/src/github.com/rook/rook$ oc rsh rook-ceph-tools-6c67d65646-n5cvh
sh-4.4# ceph -s
  cluster:
    id:     bdbfe8d3-887c-4aa3-8564-c631b3d0aafc
    health: HEALTH_WARN
            Degraded data redundancy: 13443/57453 objects degraded (23.398%), 252 pgs degraded
  services:
    mon: 3 daemons, quorum a,b,c (age 93s)
    mgr: a(active, since 70s)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 6 osds: 6 up (since 23s), 6 in (since 20m); 231 remapped pgs
    task status:
      scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
  data:
    pools:   3 pools, 288 pgs
    objects: 19.15k objects, 74 GiB
    usage:   226 GiB used, 12 TiB / 12 TiB avail
    pgs:     13443/57453 objects degraded (23.398%)
             19308/57453 objects misplaced (33.607%)
             111 active+undersized+degraded+remapped+backfill_wait
             71 active+recovery_wait+undersized+degraded+remapped
             48 active+recovery_wait+degraded+remapped
             35 active+clean
             22 active+recovery_wait+degraded
             1  active+recovering+undersized+remapped
  io:
    client:   5.6 MiB/s wr, 0 op/s rd, 551 op/s wr
    recovery: 53 MiB/s, 13 objects/s
sh-4.4# date
Tue Jul 14 22:56:35 UTC 2020
sh-4.4# date; ceph -s
Tue Jul 14 22:56:49 UTC 2020
  cluster:
    id:     bdbfe8d3-887c-4aa3-8564-c631b3d0aafc
    health: HEALTH_WARN
            1 osds down
            1 host (1 osds) down
            Degraded data redundancy: 10854/57459 objects degraded (18.890%), 245 pgs degraded, 83 pgs undersized
            29 slow ops, oldest one blocked for 56 sec, daemons [osd.0,osd.1] have slow ops.
  services:
    mon: 3 daemons, quorum a,b,c (age 2m)
    mgr: a(active, since 108s)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 6 osds: 5 up (since 9s), 6 in (since 20m); 231 remapped pgs
    task status:
      scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
  data:
    pools:   3 pools, 288 pgs
    objects: 19.15k objects, 74 GiB
    usage:   226 GiB used, 12 TiB / 12 TiB avail
    pgs:     10854/57459 objects degraded (18.890%)
             20608/57459 objects misplaced (35.866%)
             81 active+undersized+degraded+remapped+backfill_wait
             72 active+recovery_wait+degraded+remapped
             69 active+recovery_wait+undersized+degraded+remapped
             24 active+clean
             17 active+recovery_wait+degraded
             7  active+recovery_wait+remapped
             6  active+undersized+degraded
             5  active+recovery_wait
             4  active+undersized
             1  stale+active+clean
             1  active+recovering+undersized+remapped
             1  active+recovery_wait+undersized+remapped
  io:
    client:   35 KiB/s rd, 3.2 MiB/s wr, 8 op/s rd, 90 op/s wr
    recovery: 86 MiB/s, 21 objects/s
sh-4.4# date; ceph -s
Tue Jul 14 23:00:23 UTC 2020
  cluster:
    id:     bdbfe8d3-887c-4aa3-8564-c631b3d0aafc
    health: HEALTH_WARN
            Degraded data redundancy: 2631/57489 objects degraded (4.577%), 146 pgs degraded, 2 pgs undersized
            1 slow ops, oldest one blocked for 47 sec, daemons [osd.1,osd.2] have slow ops.
  services:
    mon: 3 daemons, quorum a,b,c (age 5m)
    mgr: a(active, since 5m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 6 osds: 6 up (since 38s), 6 in (since 24m); 213 remapped pgs
    task status:
      scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
  data:
    pools:   3 pools, 288 pgs
    objects: 19.16k objects, 74 GiB
    usage:   226 GiB used, 12 TiB / 12 TiB avail
    pgs:     2631/57489 objects degraded (4.577%)
             25826/57489 objects misplaced (44.923%)
             131 active+recovery_wait+degraded+remapped
             53 active+recovery_wait+remapped
             38 active+clean
             27 active+remapped+backfill_wait
             24 active+recovery_wait
             13 active+recovery_wait+degraded
             1  active+recovering+undersized+degraded+remapped
             1  active+recovery_wait+undersized+degraded+remapped
  io:
    client:   329 KiB/s rd, 7.3 MiB/s wr, 82 op/s rd, 151 op/s wr
    recovery: 93 MiB/s, 23 objects/s
sh-4.4# ceph osd status
+----+--------------------------------------------+-------+-------+--------+---------+--------+---------+-----------+
| id | host                                       |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+--------------------------------------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ip-10-0-163-129.us-east-2.compute.internal | 72.4G | 1975G |  123   |  5483k  |  116   |   459k  | exists,up |
| 1  | ip-10-0-153-80.us-east-2.compute.internal  | 73.7G | 1974G |  138   |  5023k  |  177   |   708k  | exists,up |
| 2  | ip-10-0-192-223.us-east-2.compute.internal | 74.1G | 1973G |   81   |  1710k  |   51   |   198k  |   exists  |
| 3  | ip-10-0-192-223.us-east-2.compute.internal | 1362M | 2046G |   0    |    0    |    0   |     0   | exists,up |
| 4  | ip-10-0-163-129.us-east-2.compute.internal | 3112M | 2044G |   0    |    0    |    2   |  10.4k  | exists,up |
| 5  | ip-10-0-153-80.us-east-2.compute.internal  | 1693M | 2046G |   4    |  23.4k  |    3   |  13.1k  | exists,up |
+----+--------------------------------------------+-------+-------+--------+---------+--------+---------+-----------+
sh-4.4# ceph pg status
no valid command found; 10 closest matches:
pg stat
pg getmap
pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
pg dump_json {all|summary|sum|pools|osds|pgs [all|summary|sum|pools|osds|pgs...]}
pg dump_pools_json
pg ls-by-pool <poolstr> {<states> [<states>...]}
pg ls-by-primary <osdname (id|osd.id)> {<int>} {<states> [<states>...]}
pg ls-by-osd <osdname (id|osd.id)> {<int>} {<states> [<states>...]}
pg ls {<int>} {<states> [<states>...]}
pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}
Error EINVAL: invalid command
sh-4.4# ceph pg stat
288 pgs: 1 active+recovering+undersized+degraded+remapped, 8 active+undersized, 114 active+recovery_wait+undersized+degraded+remapped, 98 active+undersized+degraded+remapped+backfill_wait, 8 active+recovery_wait+undersized+degraded, 47 active+undersized+degraded, 12 active+clean; 74 GiB data, 220 GiB used, 12 TiB / 12 TiB avail; 1.4 MiB/s rd, 22 MiB/s wr, 745 op/s; 20207/57495 objects degraded (35.146%); 16167/57495 objects misplaced (28.119%); 65 MiB/s, 16 objects/s recovering
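Note that `ceph pg status` is not a valid subcommand (hence the EINVAL above); `ceph pg stat` is, and it packs the degraded and misplaced percentages into a single summary line. When scripting a recovery watch, those numbers can be pulled out with `grep`. A minimal sketch, run here against a shortened canned copy of the summary line above rather than live output (the `pg_stat_line` variable is a sample, not a real call):

```shell
# Sample of a `ceph pg stat` summary line (shortened from the session above).
pg_stat_line='288 pgs: 12 active+clean; 74 GiB data, 220 GiB used, 12 TiB / 12 TiB avail; 20207/57495 objects degraded (35.146%); 16167/57495 objects misplaced (28.119%)'

# Extract the percentage inside "degraded (...)" and "misplaced (...)".
degraded=$(printf '%s\n' "$pg_stat_line" | grep -o 'degraded ([0-9.]*%)' | grep -o '[0-9.]*')
misplaced=$(printf '%s\n' "$pg_stat_line" | grep -o 'misplaced ([0-9.]*%)' | grep -o '[0-9.]*')
echo "degraded=${degraded}% misplaced=${misplaced}%"
```

In the toolbox pod, the sample line would instead come from `pg_stat_line=$(ceph pg stat)`.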
sh-4.4# ceph osd status
+----+--------------------------------------------+-------+-------+--------+---------+--------+---------+-----------+
| id | host                                       |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+--------------------------------------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ip-10-0-163-129.us-east-2.compute.internal | 72.4G | 1975G |  864   |  35.1M  |  1179  |  4709k  | exists,up |
| 1  | ip-10-0-153-80.us-east-2.compute.internal  | 73.8G | 1974G |  549   |  21.1M  |  2079  |  8315k  | exists,up |
| 2  | ip-10-0-192-223.us-east-2.compute.internal | 74.1G | 1973G |  375   |  19.8M  |  640   |  2561k  | exists,up |
| 3  | ip-10-0-192-223.us-east-2.compute.internal | 1364M | 2046G |   1    |  11.2k  |    1   |  17.4k  | exists,up |
| 4  | ip-10-0-163-129.us-east-2.compute.internal | 3115M | 2044G |   0    |    0    |    0   |     0   | exists,up |
| 5  | ip-10-0-153-80.us-east-2.compute.internal  | 1694M | 2046G |   0    |    0    |   25   |  97.6k  | exists,up |
+----+--------------------------------------------+-------+-------+--------+---------+--------+---------+-----------+
sh-4.4# date; ceph -s
Tue Jul 14 23:06:31 UTC 2020
  cluster:
    id:     bdbfe8d3-887c-4aa3-8564-c631b3d0aafc
    health: HEALTH_WARN
            Degraded data redundancy: 14/57537 objects degraded (0.024%), 10 pgs degraded, 9 pgs undersized
  services:
    mon: 3 daemons, quorum a,b,c (age 11m)
    mgr: a(active, since 11m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 6 osds: 6 up (since 4m), 6 in (since 30m); 213 remapped pgs
    task status:
      scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
  data:
    pools:   3 pools, 288 pgs
    objects: 19.18k objects, 74 GiB
    usage:   227 GiB used, 12 TiB / 12 TiB avail
    pgs:     14/57537 objects degraded (0.024%)
             27731/57537 objects misplaced (48.197%)
             170 active+recovery_wait+remapped
             38 active+recovery_wait
             37 active+clean
             25 active+remapped+backfill_wait
             8  active+recovery_wait+degraded+remapped
             4  active+recovery_wait+undersized+remapped
             3  active+undersized+remapped+backfill_wait
             2  active+recovery_wait+undersized+degraded+remapped
             1  active+recovering+remapped
  io:
    client:   12 MiB/s rd, 83 MiB/s wr, 3.13k op/s rd, 3.14k op/s wr
    recovery: 2.0 MiB/s, 0 objects/s
sh-4.4# ceph osd status
+----+--------------------------------------------+-------+-------+--------+---------+--------+---------+-----------+
| id | host                                       |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+--------------------------------------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ip-10-0-163-129.us-east-2.compute.internal | 72.5G | 1975G |  427   |  16.9M  |  453   |  1807k  | exists,up |
| 1  | ip-10-0-153-80.us-east-2.compute.internal  | 73.8G | 1974G |  535   |  11.4M  |  715   |  2861k  | exists,up |
| 2  | ip-10-0-192-223.us-east-2.compute.internal | 74.2G | 1973G |  248   |  15.0M  |  326   |  1304k  | exists,up |
| 3  | ip-10-0-192-223.us-east-2.compute.internal | 1365M | 2046G |   0    |  14.0k  |    1   |  14.1k  | exists,up |
| 4  | ip-10-0-163-129.us-east-2.compute.internal | 3124M | 2044G |   0    |    0    |    0   |     0   | exists,up |
| 5  | ip-10-0-153-80.us-east-2.compute.internal  | 1694M | 2046G |   0    |    0    |   13   |  49.6k  | exists,up |
+----+--------------------------------------------+-------+-------+--------+---------+--------+---------+-----------+
sh-4.4# date; ceph -s
Tue Jul 14 23:11:09 UTC 2020
  cluster:
    id:     bdbfe8d3-887c-4aa3-8564-c631b3d0aafc
    health: HEALTH_WARN
            Degraded data redundancy: 1/57573 objects degraded (0.002%), 1 pg degraded, 9 pgs undersized
  services:
    mon: 3 daemons, quorum a,b,c (age 16m)
    mgr: a(active, since 16m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 6 osds: 6 up (since 8m), 6 in (since 35m); 213 remapped pgs
    task status:
      scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
  data:
    pools:   3 pools, 288 pgs
    objects: 19.19k objects, 74 GiB
    usage:   227 GiB used, 12 TiB / 12 TiB avail
    pgs:     1/57573 objects degraded (0.002%)
             27767/57573 objects misplaced (48.229%)
             135 active+remapped+backfill_wait
             73 active+clean
             67 active+recovery_wait+remapped
             9  active+undersized+remapped+backfill_wait
             2  active+recovery_wait
             1  active+recovering+remapped
             1  active+recovery_wait+degraded+remapped
  io:
    client: 6.5 MiB/s rd, 44 MiB/s wr, 1.68k op/s rd, 1.78k op/s wr
sh-4.4# date; ceph -s
Tue Jul 14 23:17:02 UTC 2020
  cluster:
    id:     bdbfe8d3-887c-4aa3-8564-c631b3d0aafc
    health: HEALTH_WARN
            1 MDSs report slow metadata IOs
            Degraded data redundancy: 6 pgs undersized
            0 slow ops, oldest one blocked for 32 sec, osd.1 has slow ops
  services:
    mon: 3 daemons, quorum a,b,c (age 22m)
    mgr: a(active, since 22m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 6 osds: 6 up (since 14m), 6 in (since 40m); 210 remapped pgs
    task status:
      scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
  data:
    pools:   3 pools, 288 pgs
    objects: 19.21k objects, 74 GiB
    usage:   227 GiB used, 12 TiB / 12 TiB avail
    pgs:     27597/57624 objects misplaced (47.892%)
             204 active+remapped+backfill_wait
             78 active+clean
             5  active+undersized+remapped+backfill_wait
             1  active+undersized+remapped+backfilling
  io:
    client: 3.7 MiB/s rd, 43 MiB/s wr, 934 op/s rd, 1.18k op/s wr
sh-4.4# date; ceph -s
Tue Jul 14 23:17:04 UTC 2020
  cluster:
    id:     bdbfe8d3-887c-4aa3-8564-c631b3d0aafc
    health: HEALTH_WARN
            Degraded data redundancy: 6 pgs undersized
            0 slow ops, oldest one blocked for 32 sec, osd.1 has slow ops
  services:
    mon: 3 daemons, quorum a,b,c (age 22m)
    mgr: a(active, since 22m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 6 osds: 6 up (since 14m), 6 in (since 41m); 210 remapped pgs
    task status:
      scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
  data:
    pools:   3 pools, 288 pgs
    objects: 19.21k objects, 74 GiB
    usage:   227 GiB used, 12 TiB / 12 TiB avail
    pgs:     27597/57624 objects misplaced (47.892%)
             204 active+remapped+backfill_wait
             78 active+clean
             5  active+undersized+remapped+backfill_wait
             1  active+undersized+remapped+backfilling
  io:
    client: 3.2 MiB/s rd, 38 MiB/s wr, 804 op/s rd, 1.05k op/s wr
sh-4.4# ceph osd status
+----+--------------------------------------------+-------+-------+--------+---------+--------+---------+-----------+
| id | host                                       |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+--------------------------------------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ip-10-0-163-129.us-east-2.compute.internal | 72.1G | 1975G |  176   |  12.6M  |  198   |   787k  | exists,up |
| 1  | ip-10-0-153-80.us-east-2.compute.internal  | 73.2G | 1974G |  153   |  8556k  |  190   |   760k  | exists,up |
| 2  | ip-10-0-192-223.us-east-2.compute.internal | 74.5G | 1973G |   24   |  9815k  |   12   |  47.3k  | exists,up |
| 3  | ip-10-0-192-223.us-east-2.compute.internal | 1438M | 2046G |   0    |    0    |    0   |     0   | exists,up |
| 4  | ip-10-0-163-129.us-east-2.compute.internal | 3672M | 2044G |   0    |   819k  |    0   |     0   | exists,up |
| 5  | ip-10-0-153-80.us-east-2.compute.internal  | 2497M | 2045G |   1    |  1644k  |    2   |   7390  | exists,up |
+----+--------------------------------------------+-------+-------+--------+---------+--------+---------+-----------+
sh-4.4# ceph osd tree
ID  CLASS WEIGHT   TYPE NAME                          STATUS REWEIGHT PRI-AFF
 -1       12.00000 root default
 -5       12.00000     region us-east-2
-10        4.00000         zone us-east-2a
 -9        2.00000             host ocs-deviceset-0-0-zlgwx
  1   ssd  2.00000                 osd.1                  up  1.00000 1.00000
-19        2.00000             host ocs-deviceset-0-1-s8r62
  5   ssd  2.00000                 osd.5                  up  1.00000 1.00000
-14        4.00000         zone us-east-2b
-13        2.00000             host ocs-deviceset-1-0-89xs5
  0   ssd  2.00000                 osd.0                  up  1.00000 1.00000
-17        2.00000             host ocs-deviceset-1-1-m9q56
  4   ssd  2.00000                 osd.4                  up  1.00000 1.00000
 -4        4.00000         zone us-east-2c
 -3        2.00000             host ocs-deviceset-2-0-zp9nh
  2   ssd  2.00000                 osd.2                  up  1.00000 1.00000
-21        2.00000             host ocs-deviceset-2-1-zrk97
  3   ssd  2.00000                 osd.3                  up  1.00000 1.00000
sh-4.4# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP    META     AVAIL   %USE VAR  PGS STATUS
 1   ssd 2.00000  1.00000 2.0 TiB  73 GiB  72 GiB  32 KiB 1024 MiB 1.9 TiB 3.56 1.92 271     up
 5   ssd 2.00000  1.00000 2.0 TiB 2.7 GiB 1.7 GiB   4 KiB  1.0 GiB 2.0 TiB 0.13 0.07  17     up
 0   ssd 2.00000  1.00000 2.0 TiB  72 GiB  71 GiB  36 KiB 1024 MiB 1.9 TiB 3.50 1.88 256     up
 4   ssd 2.00000  1.00000 2.0 TiB 4.6 GiB 3.3 GiB   4 KiB  1.3 GiB 2.0 TiB 0.22 0.12  32     up
 2   ssd 2.00000  1.00000 2.0 TiB  74 GiB  73 GiB  27 KiB 1024 MiB 1.9 TiB 3.63 1.95 272     up
 3   ssd 2.00000  1.00000 2.0 TiB 1.9 GiB 598 MiB  12 KiB  1.3 GiB 2.0 TiB 0.09 0.05  12     up
                    TOTAL  12 TiB 228 GiB 222 GiB 117 KiB  6.6 GiB  12 TiB 1.86
MIN/MAX VAR: 0.05/1.95  STDDEV: 1.71
sh-4.4# date; ceph -s
Tue Jul 14 23:24:37 UTC 2020
  cluster:
    id:     bdbfe8d3-887c-4aa3-8564-c631b3d0aafc
    health: HEALTH_WARN
            Degraded data redundancy: 3 pgs undersized
  services:
    mon: 3 daemons, quorum a,b,c (age 29m)
    mgr: a(active, since 29m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 6 osds: 6 up (since 22m), 6 in (since 48m); 204 remapped pgs
    task status:
      scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
  data:
    pools:   3 pools, 288 pgs
    objects: 19.23k objects, 74 GiB
    usage:   228 GiB used, 12 TiB / 12 TiB avail
    pgs:     27055/57690 objects misplaced (46.897%)
             201 active+remapped+backfill_wait
             84 active+clean
             2  active+undersized+remapped+backfill_wait
             1  active+undersized+remapped+backfilling
  io:
    client: 165 KiB/s rd, 44 MiB/s wr, 41 op/s rd, 133 op/s wr
sh-4.4# date; ceph -s
Tue Jul 14 23:41:39 UTC 2020
  cluster:
    id:     bdbfe8d3-887c-4aa3-8564-c631b3d0aafc
    health: HEALTH_WARN
            Degraded data redundancy: 1 pg undersized
  services:
    mon: 3 daemons, quorum a,b,c (age 47m)
    mgr: a(active, since 46m)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 6 osds: 6 up (since 39m), 6 in (since 65m); 192 remapped pgs
    task status:
      scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
  data:
    pools:   3 pools, 288 pgs
    objects: 19.28k objects, 74 GiB
    usage:   228 GiB used, 12 TiB / 12 TiB avail
    pgs:     25796/57831 objects misplaced (44.606%)
             190 active+remapped+backfill_wait
             96 active+clean
             1  active+remapped+backfilling
             1  active+undersized+remapped+backfilling
  io:
    client: 5.8 MiB/s rd, 36 MiB/s wr, 1.47k op/s rd, 1.53k op/s wr
sh-4.4# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP    META     AVAIL   %USE VAR  PGS STATUS
 1   ssd 2.00000  1.00000 2.0 TiB  70 GiB  69 GiB  36 KiB 1024 MiB 1.9 TiB 3.42 1.84 260     up
 5   ssd 2.00000  1.00000 2.0 TiB 5.9 GiB 4.9 GiB   8 KiB 1024 MiB 2.0 TiB 0.29 0.15  28     up
 0   ssd 2.00000  1.00000 2.0 TiB  71 GiB  70 GiB  28 KiB 1024 MiB 1.9 TiB 3.47 1.87 253     up
 4   ssd 2.00000  1.00000 2.0 TiB 5.2 GiB 4.2 GiB  16 KiB 1024 MiB 2.0 TiB 0.26 0.14  35     up
 2   ssd 2.00000  1.00000 2.0 TiB  72 GiB  71 GiB  27 KiB 1024 MiB 1.9 TiB 3.53 1.90 267     up
 3   ssd 2.00000  1.00000 2.0 TiB 3.7 GiB 2.7 GiB  24 KiB 1024 MiB 2.0 TiB 0.18 0.10  20     up
                    TOTAL  12 TiB 228 GiB 222 GiB 141 KiB  6.0 GiB  12 TiB 1.86
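Each `date; ceph -s` check above was rerun by hand while waiting for recovery to drain. The wait can be automated by polling until the cluster reports HEALTH_OK. A minimal sketch: `ceph_status` here is a stub standing in for `ceph -s` inside the toolbox pod (`oc rsh rook-ceph-tools-6c67d65646-n5cvh`), and the 30-second interval is an arbitrary choice:

```shell
# Stub standing in for `ceph -s` (or: oc rsh rook-ceph-tools-... ceph -s).
# Replace the canned printf with the real command when running for real.
ceph_status() {
  printf '  cluster:\n    health: HEALTH_OK\n'
}

# Poll until the "health:" line reports HEALTH_OK, logging a timestamp each pass.
wait_for_health_ok() {
  while true; do
    health=$(ceph_status | awk '/health:/ {print $2}')
    echo "$(date -u) health=${health}"
    [ "$health" = "HEALTH_OK" ] && break
    sleep 30   # assumed poll interval
  done
}

wait_for_health_ok
```

With the stub in place the loop exits on the first pass; against a recovering cluster it keeps printing the degraded health state until backfill completes.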