@vheathen
Created March 3, 2017 12:22
ceph multi-root
# ...
[osd]
# Don't update the crush map on OSD start - placement is managed by hand because of the double root
osd crush update on start = false
# ...
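# Optional sanity check (assuming the admin socket is reachable on the host running osd.0):
# with the default osd_crush_update_on_start = true each OSD re-adds itself under its plain
# hostname at startup, which would undo the two-root layout below, so confirm the running value:
ceph daemon osd.0 config get osd_crush_update_on_start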
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1
# devices
# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root
# buckets
root sata {
	id -1		# do not change unnecessarily
	# weight 0.000
	alg straw
	hash 0		# rjenkins1
}
root ssd {
	id -2		# do not change unnecessarily
	# weight 0.000
	alg straw
	hash 0		# rjenkins1
}
# rules
rule sata {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take sata
	step chooseleaf firstn 0 type host
	step emit
}
rule ssd {
	ruleset 1
	type replicated
	min_size 1
	max_size 10
	step take ssd
	step chooseleaf firstn 0 type host
	step emit
}
# end crush map
10.0.0.1 alpha-ssd alpha-sata alpha
10.0.0.2 beta-ssd beta-sata beta
10.0.0.3 gamma-ssd gamma-sata gamma
# A 3-host cluster
# Change /etc/hosts to match the hosts file (see above)
# Install Ceph: http://docs.ceph.com/docs/master/start/ but don't add the OSD disks yet
# Tune /etc/ceph/ceph.conf (a separate topic)
# and add the [osd] section shown in the ceph.conf excerpt above
# Change crush map:
# Get crush map
ceph osd getcrushmap -o crush.default
# Decompile crush map
crushtool -d crush.default -o crush.default.txt
# Make a copy of the crush map
cp crush.default.txt crush.new.txt
# Change the crush map (see crush.new.txt below)
vi crush.new.txt
# Compile crush map
crushtool -c crush.new.txt -o crush.new
# Set the crush map
ceph osd setcrushmap -i crush.new
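# Quick check that the injected map is active - both rules should be listed:
ceph osd crush rule ls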
# Create a CRUSH host bucket for each host/storage-type combination, for example:
# ceph osd crush add-bucket alpha-ssd host
# ceph osd crush add-bucket alpha-sata host
# Move each host to the relevant root tree:
# ceph osd crush move alpha-ssd root=ssd
# ceph osd crush move alpha-sata root=sata
# or you can use a loop like this:
for t in ssd sata; do
  for i in alpha beta gamma; do
    ceph osd crush add-bucket $i-$t host;
    ceph osd crush move $i-$t root=$t;
  done;
done
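# Verify the layout: the tree should now show two roots (ssd and sata),
# each holding three empty host buckets:
ceph osd tree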
# Run ceph -w in another console to watch the cluster log
# Add an OSD (disk) with ceph-deploy:
ceph-deploy osd prepare alpha:sdc --overwrite-conf # for an on-disk journal
# or
ceph-deploy osd prepare alpha:sdd:/dev/sda --overwrite-conf # for an external journal
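# Note: the disk names above are examples. If the prepared OSDs don't start on their own
# (udev normally activates them), run ceph-deploy osd activate for the data partition,
# or combine prepare and activate into a single step:
ceph-deploy osd create alpha:sdc --overwrite-conf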
# Move the OSD to the relevant root and host:
ceph osd crush set {id-or-name} {weight} root={root-name} [{bucket-type}={bucket-name} ...]
# for example (take the id/name and weight from the ceph -w or ceph osd tree output):
ceph osd crush set osd.0 1.7408 root=ssd host=alpha-ssd
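# Optional: once every OSD is placed, test both rules against the live map
# (ruleset 0 = sata, 1 = ssd; adjust --num-rep to the intended replica count):
ceph osd getcrushmap -o crush.current
crushtool -i crush.current --test --rule 0 --num-rep 3 --show-mappings
crushtool -i crush.current --test --rule 1 --num-rep 3 --show-mappings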
# Create pools:
ceph osd pool create pool-ssd 256 256 replicated ssd
ceph osd pool create pool-sata 256 256 replicated sata
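# Check that each pool picked up the intended rule (crush_ruleset is the
# pre-Luminous name of this pool setting; newer releases call it crush_rule):
ceph osd pool get pool-ssd crush_ruleset
ceph osd pool get pool-sata crush_ruleset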