Adjusting PGs to better distribute OSD utilization (hopefully)
Last active: January 4, 2016 03:19
file02.ads1:~% date
Wed Jan 22 09:23:42 CST 2014

file02.ads1:~% ssh root@heap01 'df -h $(mount | awk "/osd.ceph-/ {print \$3}" | sort)'
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       2.8T  1.6T  1.2T  57% /var/lib/ceph/osd/ceph-0
/dev/sdc1       2.8T  1.6T  1.2T  59% /var/lib/ceph/osd/ceph-4
/dev/sdd1       2.8T  2.1T  705G  75% /var/lib/ceph/osd/ceph-6

file02.ads1:~% ssh root@heap02 'df -h $(mount | awk "/osd.ceph-/ {print \$3}" | sort)'
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       2.8T  2.0T  800G  72% /var/lib/ceph/osd/ceph-1
/dev/sdc1       1.9T  1.4T  530G  72% /var/lib/ceph/osd/ceph-3
/dev/sdd1       2.8T  2.0T  829G  71% /var/lib/ceph/osd/ceph-5
-----

2014-01-22 09:19:21.878818 mon.0 [INF] pgmap v528444: 667 pgs: 666 active+clean,
    1 active+clean+scrubbing+deep;
    5288 GB data, 10578 GB used,
    5243 GB / 15821 GB avail;
    61640 B/s rd, 3852 kB/s wr, 7 op/s

-----

file02.ads1:~% ceph osd pool get media pg_num
pg_num: 200
file02.ads1:~% ceph osd pool get archive pg_num
pg_num: 125

-----
New PG count (general equation obtained from the Ceph docs):

    (OSDs * 100) ÷ Replicas for pool
    (   6 * 100) ÷ 2
    = 300
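The arithmetic above can be sketched as a quick shell check; the OSD count (6) and replica count (2) are taken from the cluster shown above and would need changing for any other cluster:

```shell
# Suggested pg_num per the Ceph docs formula: (OSDs * 100) / replicas.
osds=6
replicas=2
echo $(( osds * 100 / replicas ))   # prints 300
```

The Ceph documentation also suggests rounding the result to the nearest power of two; here the raw value of 300 is used as-is.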
-----

file02.ads1:~% ceph osd pool set archive pg_num 300
set pool 5 pg_num to 300
file02.ads1:~% ceph osd pool set archive pgp_num 300
set pool 5 pgp_num to 300
file02.ads1:~% ceph osd pool set media pg_num 300
set pool 4 pg_num to 300
file02.ads1:~% ceph osd pool set media pgp_num 300
set pool 4 pgp_num to 300
-----

2014-01-22 09:34:33.659867 mon.0 [INF] pgmap v528971: 942 pgs: 703 active+clean,
    195 active+remapped+wait_backfill,
    43 active+remapped+backfilling,
    1 active+clean+scrubbing+deep;
    7068 GB data, 10597 GB used,
    5224 GB / 15821 GB avail;
    395526/4032842 objects degraded (9.808%);
    116 MB/s, 29 objects/s recovering
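As a sanity check, the degraded percentage in the pgmap line above follows directly from the object counts it reports:

```shell
# 395526 degraded out of 4032842 total objects, as reported in the pgmap line.
awk 'BEGIN { printf "%.3f%%\n", 395526 / 4032842 * 100 }'   # prints 9.808%
```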
-----

2014-01-22 13:07:39.236343 mon.0 [INF] pgmap v538019: 942 pgs: 941 active+clean,
    1 active+remapped;
    7064 GB data, 10602 GB used,
    5219 GB / 15821 GB avail;
    504 kB/s rd, 3 op/s
2014-01-22 13:07:40.323296 mon.0 [INF] pgmap v538020: 942 pgs: 942 active+clean;
    7064 GB data, 10602 GB used,
    5219 GB / 15821 GB avail;
    1364 kB/s rd, 1141 kB/s wr, 13 op/s
-----

file02.ads1:/mnt/media% ssh root@heap01 'df -h $(mount | awk "/osd.ceph-/ {print \$3}" | sort)'
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       2.8T  1.7T  1.1T  61% /var/lib/ceph/osd/ceph-0
/dev/sdc1       2.8T  1.6T  1.2T  58% /var/lib/ceph/osd/ceph-4
/dev/sdd1       2.8T  2.0T  776G  73% /var/lib/ceph/osd/ceph-6

file02.ads1:/mnt/media% ssh root@heap02 'df -h $(mount | awk "/osd.ceph-/ {print \$3}" | sort)'
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       2.8T  2.0T  793G  72% /var/lib/ceph/osd/ceph-1
/dev/sdc1       1.9T  1.4T  479G  75% /var/lib/ceph/osd/ceph-3
/dev/sdd1       2.8T  1.9T  882G  69% /var/lib/ceph/osd/ceph-5

file02.ads1:/mnt/media% ceph osd pool set media-old pg_num 300
set pool 3 pg_num to 300
file02.ads1:/mnt/media% ceph osd pool set media-old pgp_num 300
set pool 3 pgp_num to 300

-----
2014-01-22 13:17:28.237218 mon.0 [INF] pgmap v538309: 1092 pgs: 150 creating, 942 active+clean;
    7035 GB data, 10588 GB used,
    5233 GB / 15821 GB avail;
    868 kB/s rd, 6 op/s
2014-01-22 13:18:24.057038 mon.0 [INF] pgmap v538336: 1092 pgs: 1092 active+clean;
    10266 GB data, 10588 GB used,
    5233 GB / 15821 GB avail;
    1212 kB/s rd, 9 op/s
2014-01-22 13:19:21.603665 mon.0 [INF] pgmap v538363: 1092 pgs: 959 active+clean,
    89 active+remapped+wait_backfill,
    44 active+remapped+backfilling;
    10266 GB data, 10589 GB used,
    5232 GB / 15821 GB avail;
    948 kB/s rd, 7 op/s;
    741710/6020779 objects degraded (12.319%);
    129 MB/s, 32 objects/s recovering