create ceph pools with custom device class

Custom device class

Configure custom device classes (e.g. NVMe disks, high-I/O disks, or limiting pools to specific hosts)

By default, Ceph assigns the ssd and hdd device classes automatically:

ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE    RAW USE  DATA     OMAP    META     AVAIL    %USE   VAR   PGS  STATUS
 0    hdd  0.01459   1.00000  15 GiB  6.2 GiB  1.2 GiB   1 KiB  513 MiB  8.8 GiB  41.44  1.48   44      up
 1    hdd  0.01459   1.00000  15 GiB  6.1 GiB  1.1 GiB   3 KiB  657 MiB  8.9 GiB  40.75  1.46   53      up
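
The device classes currently defined in the crush map can also be listed directly as a quick check:

# list all device classes known to the crush map
ceph osd crush class ls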

To configure a custom class "klass" for an OSD (after it has been added to the cluster):

# set device class of "osd.1" to "klass"
ceph osd crush rm-device-class "osd.1"
ceph osd crush set-device-class "klass" "osd.1"
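
To verify the assignment, check the CLASS column of the osd tree, or list the OSDs carrying the new class (using the example class name "klass" from above):

# verify that osd.1 now reports the "klass" device class
ceph osd tree
ceph osd crush class ls-osd "klass"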

Then, create a crush rule per device class:

# create crush rule "replicated_klass" using device class "klass"
ceph osd crush rule create-replicated "replicated_klass" default host "klass"

# create crush rule "replicated_klass2" using device class "klass2"
ceph osd crush rule create-replicated "replicated_klass2" default host "klass2"
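
The resulting rules can be inspected to confirm they select OSDs by the intended device class:

# list crush rules and dump the ones created above
ceph osd crush rule ls
ceph osd crush rule dump "replicated_klass"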

And set the crush rule on each pool:

# create rbd pool "pool0" and use "replicated_klass" crush rule
ceph osd pool create "pool0"
rbd pool init "pool0"
ceph osd pool set "pool0" crush_rule "replicated_klass"

# create rbd pool "pool1" and use "replicated_klass2" crush rule
ceph osd pool create "pool1"
rbd pool init "pool1"
ceph osd pool set "pool1" crush_rule "replicated_klass2"
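
To confirm that each pool picked up the intended rule:

# show the crush rule assigned to each pool
ceph osd pool get "pool0" crush_rule
ceph osd pool get "pool1" crush_rule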

Wait for the cluster to rebalance:

ceph -w
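
Once the data movement has finished and the cluster reports HEALTH_OK, the pools are ready to use. As a quick smoke test (the image name below is only an example):

# confirm cluster health, then create a small test image on the class-backed pool
ceph -s
rbd create --size 1024 "pool0/test-image"
rbd info "pool0/test-image"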