# Ceph benchmark
I have three nodes running the Proxmox hypervisor with a Ceph cluster on a 10G network.
For Ceph, each node has:
- 2 PCIe SSDs in the ssd pool (128 GB each)
- 3 SAS HDDs in the hdd pool (6 TB each)
```
ceph osd tree
ID   WEIGHT  TYPE NAME                  UP/DOWN REWEIGHT PRIMARY-AFFINITY
-100 3.00000 root hdd
-102 1.00000     host HV-01-hdd
   6 1.00000         osd.6                   up  1.00000          1.00000
   7 1.00000         osd.7                   up  1.00000          1.00000
   8 1.00000         osd.8                   up  1.00000          1.00000
-103 1.00000     host HV-02-hdd
   9 1.00000         osd.9                   up  1.00000          1.00000
  10 1.00000         osd.10                  up  1.00000          1.00000
  11 1.00000         osd.11                  up  1.00000          1.00000
-104 1.00000     host HV-03-hdd
  12 1.00000         osd.12                  up  1.00000          1.00000
  13 1.00000         osd.13                  up  1.00000          1.00000
  14 1.00000         osd.14                  up  1.00000          1.00000
  -1 3.00000 root ssd-cache
  -2 1.00000     host HV-01-ssd-cache
   0 1.00000         osd.0                   up  1.00000          1.00000
   1 1.00000         osd.1                   up  1.00000          1.00000
  -3 1.00000     host HV-02-ssd-cache
   2 1.00000         osd.2                   up  1.00000          1.00000
   3 1.00000         osd.3                   up  1.00000          1.00000
  -4 1.00000     host HV-03-ssd-cache
   4 1.00000         osd.4                   up  1.00000          1.00000
   5 1.00000         osd.5                   up  1.00000          1.00000
```
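The test pools themselves aren't shown being created. With two separate CRUSH roots like this, each pool is presumably pinned to its root via a dedicated CRUSH rule; a minimal sketch of that setup (rule names and PG counts are my assumptions, not taken from the cluster) could look like:

```
# Assumed: one simple rule per CRUSH root, replicating across hosts
ceph osd crush rule create-simple hdd-rule hdd host
ceph osd crush rule create-simple ssd-rule ssd-cache host

# Assumed test pools bound to those rules (PG counts are rough guesses for 9/6 OSDs)
ceph osd pool create test-hdd 256 256 replicated hdd-rule
ceph osd pool create test-ssd 128 128 replicated ssd-rule
```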
## Before tiering
### HDD pool test

```
rados bench -p test-hdd 600 write --no-cleanup
Total time run:         600.350796
Total writes made:      28484
Write size:             4194304
Bandwidth (MB/sec):     189.782
Stddev Bandwidth:       78.7144
Max bandwidth (MB/sec): 544
Min bandwidth (MB/sec): 0
Average Latency:        0.337183
Stddev Latency:         0.35895
Max latency:            5.45677
Min latency:            0.0230326
```
```
rados bench -p test-hdd 600 seq
Total time run:       68.378041
Total reads made:     28484
Read size:            4194304
Bandwidth (MB/sec):   1666.266
Average Latency:      0.0383959
Max latency:          0.24366
Min latency:          0.00560207
```
```
rados bench -p test-hdd 600 rand
Total time run:       600.059177
Total reads made:     250574
Read size:            4194304
Bandwidth (MB/sec):   1670.329
Average Latency:      0.0383101
Max latency:          0.234654
Min latency:          0.00523469
```
### SSD pool test
```
rados bench -p test-ssd 600 write --no-cleanup
Total time run:         600.752459
Total writes made:      41592
Write size:             4194304
Bandwidth (MB/sec):     276.933
Stddev Bandwidth:       121.509
Max bandwidth (MB/sec): 548
Min bandwidth (MB/sec): 0
Average Latency:        0.231092
Stddev Latency:         0.232751
Max latency:            2.71455
Min latency:            0.0229486
```
```
rados bench -p test-ssd 600 seq
Total time run:       105.532674
Total reads made:     41592
Read size:            4194304
Bandwidth (MB/sec):   1576.460
Average Latency:      0.0405882
Max latency:          0.550308
Min latency:          0.005967
```
```
rados bench -p test-ssd 600 rand
Total time run:       600.059951
Total reads made:     260485
Read size:            4194304
Bandwidth (MB/sec):   1736.393
Average Latency:      0.0368541
Max latency:          0.858585
Min latency:          0.0054079
```
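The write runs used --no-cleanup so the seq/rand tests had objects to read back; once the read tests are finished, the leftover benchmark objects can be removed per pool, for example:

```
# Remove the benchmark_data objects left behind by "rados bench ... write --no-cleanup"
rados -p test-hdd cleanup
rados -p test-ssd cleanup
```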
## After tiering
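The tiering commands themselves aren't shown. A typical writeback cache tier in front of test-hdd would be attached roughly like this; the cache pool name (test-hdd-cache) and the size/ratio values below are assumptions for illustration, not the actual configuration:

```
# Assumed cache pool on the ssd-cache root, attached as a writeback tier
ceph osd tier add test-hdd test-hdd-cache
ceph osd tier cache-mode test-hdd-cache writeback
ceph osd tier set-overlay test-hdd test-hdd-cache

# A hit set and sizing limits are needed for the tiering agent to flush/evict
ceph osd pool set test-hdd-cache hit_set_type bloom
ceph osd pool set test-hdd-cache hit_set_count 1
ceph osd pool set test-hdd-cache hit_set_period 3600
ceph osd pool set test-hdd-cache target_max_bytes 200000000000
ceph osd pool set test-hdd-cache cache_target_dirty_ratio 0.4
ceph osd pool set test-hdd-cache cache_target_full_ratio 0.8
```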
```
rados bench -p test-hdd 600 write --no-cleanup
Total time run:         600.644228
Total writes made:      38133
Write size:             4194304
Bandwidth (MB/sec):     253.947
Stddev Bandwidth:       109.66
Max bandwidth (MB/sec): 552
Min bandwidth (MB/sec): 0
Average Latency:        0.251998
Stddev Latency:         0.266661
Max latency:            3.60217
Min latency:            0.0259546
```
```
rados bench -p test-hdd 600 seq
Total time run:       93.425101
Total reads made:     38133
Read size:            4194304
Bandwidth (MB/sec):   1632.666
Average Latency:      0.0391905
Max latency:          0.484948
Min latency:          0.00534133
```
```
rados bench -p test-hdd 600 rand
Total time run:       600.058321
Total reads made:     255727
Read size:            4194304
Bandwidth (MB/sec):   1704.681
Average Latency:      0.0375399
Max latency:          2.65156
Min latency:          0.00510764
```
Did I understand correctly that, for VMs running Windows, the SSD cache tier actually makes performance worse? Or has a mistake crept in somewhere?
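One thing worth checking before drawing conclusions is whether the cache tier is actually in the I/O path. Something like the following (pool names as in the assumed setup above) shows the tier/overlay configuration of the backing pool and how much data is landing in the cache pool:

```
# The backing pool's entry should list the cache pool as tier/read_tier/write_tier
ceph osd dump | grep test-hdd

# Per-pool usage: objects and bytes accumulating in the cache pool
ceph df detail
```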