
@fghaas
Created October 12, 2017 10:45
Excessive data redundancy in Ceph 12.2.1
training@deploy:~/ceph-ansible$ sudo ceph -s
  cluster:
    id:     2beeb83a-2aa5-47e3-a3c3-39890821c410
    health: HEALTH_WARN
            Degraded data redundancy: 30197/3622 objects degraded (833.711%), 168 pgs unclean, 168 pgs degraded, 1 pg undersized

  services:
    mon: 3 daemons, quorum daisy,eric,frank
    mgr: frank(active), standbys: eric, daisy
    mds: cephfs-1/1/1 up {0=daisy=up:active}, 2 up:standby
    osd: 3 osds: 3 up, 3 in; 1 remapped pgs

  data:
    pools:   16 pools, 2576 pgs
    objects: 1811 objects, 1252 MB
    usage:   1989 MB used, 49087 MB / 51076 MB avail
    pgs:     30197/3622 objects degraded (833.711%)
             2408 active+clean
             165  active+recovery_wait+degraded
             2    active+recovering+degraded
             1    active+undersized+degraded+remapped+backfill_wait

  io:
    recovery: 1062 kB/s, 14 objects/s
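
The reported 833.711% follows directly from the counts above: 30197 degraded object instances divided by 3622, where 3622 appears to be the 1811 objects counted twice (i.e. an assumed pool size of 2). A minimal sketch of that arithmetic, with the pool size being an assumption inferred from the numbers:

    # Reproducing the percentage shown in the ceph -s output above.
    degraded = 30197
    copies   = 1811 * 2          # assumed: 1811 objects x 2 replicas = 3622, the reported denominator
    print(round(degraded / copies * 100, 3))   # -> 833.711, matching the HEALTH_WARN message

Because the degraded count (30197) far exceeds the total number of object copies in the cluster (3622), the ratio lands well above 100%, which is the "excessive data redundancy" the title refers to.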