# Ceph benchmark

I have three nodes running the Proxmox hypervisor and a Ceph cluster on a 10G network.

For Ceph, each node has:

- 2 PCIe SSD drives in the ssd pool (128 GB each)
- 3 SAS HDD drives in the hdd pool (6 TB each)

ceph osd tree

ID   WEIGHT  TYPE NAME                UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-100 3.00000 root hdd                                                   
-102 1.00000     host HV-01-hdd                                         
   6 1.00000         osd.6                 up  1.00000          1.00000 
   7 1.00000         osd.7                 up  1.00000          1.00000 
   8 1.00000         osd.8                 up  1.00000          1.00000 
-103 1.00000     host HV-02-hdd                                         
   9 1.00000         osd.9                 up  1.00000          1.00000 
  10 1.00000         osd.10                up  1.00000          1.00000 
  11 1.00000         osd.11                up  1.00000          1.00000 
-104 1.00000     host HV-03-hdd                                         
  12 1.00000         osd.12                up  1.00000          1.00000 
  13 1.00000         osd.13                up  1.00000          1.00000 
  14 1.00000         osd.14                up  1.00000          1.00000 
  -1 3.00000 root ssd-cache                                             
  -2 1.00000     host HV-01-ssd-cache                                   
   0 1.00000         osd.0                 up  1.00000          1.00000 
   1 1.00000         osd.1                 up  1.00000          1.00000 
  -3 1.00000     host HV-02-ssd-cache                                   
   2 1.00000         osd.2                 up  1.00000          1.00000 
   3 1.00000         osd.3                 up  1.00000          1.00000 
  -4 1.00000     host HV-03-ssd-cache                                   
   4 1.00000         osd.4                 up  1.00000          1.00000 
   5 1.00000         osd.5                 up  1.00000          1.00000 
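
The tree shows two CRUSH roots, hdd and ssd-cache, each containing one bucket per host. A minimal sketch of how such roots are typically tied to separate pools is shown below; the rule names and PG counts are assumptions, not values taken from this cluster:

```sh
# Hypothetical sketch: one CRUSH rule per root, one pool per rule.
# Rule names and PG counts are placeholders.
ceph osd crush rule create-simple hdd-rule hdd host        # replicate across hosts under root "hdd"
ceph osd crush rule create-simple ssd-rule ssd-cache host  # replicate across hosts under root "ssd-cache"
ceph osd pool create test-hdd 256 256 replicated hdd-rule
ceph osd pool create test-ssd 128 128 replicated ssd-rule
```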

## Before tiering

### HDD pool test

rados bench -p test-hdd 600 write --no-cleanup

 Total time run:         600.350796
Total writes made:      28484
Write size:             4194304
Bandwidth (MB/sec):     189.782

Stddev Bandwidth:       78.7144
Max bandwidth (MB/sec): 544
Min bandwidth (MB/sec): 0
Average Latency:        0.337183
Stddev Latency:         0.35895
Max latency:            5.45677
Min latency:            0.0230326

rados bench -p test-hdd 600 seq

 Total time run:        68.378041
Total reads made:     28484
Read size:            4194304
Bandwidth (MB/sec):    1666.266

Average Latency:       0.0383959
Max latency:           0.24366
Min latency:           0.00560207

rados bench -p test-hdd 600 rand

 Total time run:        600.059177
Total reads made:     250574
Read size:            4194304
Bandwidth (MB/sec):    1670.329

Average Latency:       0.0383101
Max latency:           0.234654
Min latency:           0.00523469
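
A note on method: the write phase runs with --no-cleanup so that the objects it creates stay in the pool and the subsequent seq and rand phases have something to read. Once a pool is done being tested, the leftover benchmark objects can be removed, e.g.:

```sh
# Remove the objects left behind by "rados bench ... write --no-cleanup"
rados -p test-hdd cleanup
```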

### SSD pool test

rados bench -p test-ssd 600 write --no-cleanup

 Total time run:         600.752459
Total writes made:      41592
Write size:             4194304
Bandwidth (MB/sec):     276.933

Stddev Bandwidth:       121.509
Max bandwidth (MB/sec): 548
Min bandwidth (MB/sec): 0
Average Latency:        0.231092
Stddev Latency:         0.232751
Max latency:            2.71455
Min latency:            0.0229486

rados bench -p test-ssd 600 seq

 Total time run:        105.532674
Total reads made:     41592
Read size:            4194304
Bandwidth (MB/sec):    1576.460

Average Latency:       0.0405882
Max latency:           0.550308
Min latency:           0.005967

rados bench -p test-ssd 600 rand

 Total time run:        600.059951
Total reads made:     260485
Read size:            4194304
Bandwidth (MB/sec):    1736.393

Average Latency:       0.0368541
Max latency:           0.858585
Min latency:           0.0054079

## After tiering
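
For this run the SSD pool was presumably configured as a writeback cache tier in front of the HDD pool, so clients keep targeting test-hdd while hot objects are served from the SSDs. The exact commands are not recorded in the gist; a typical setup of that era would look roughly like this (thresholds are placeholders):

```sh
# Hypothetical cache-tier setup; pool names follow the gist, thresholds are placeholders.
ceph osd tier add test-hdd test-ssd                       # attach test-ssd as a tier of test-hdd
ceph osd tier cache-mode test-ssd writeback               # absorb writes in the SSD tier
ceph osd tier set-overlay test-hdd test-ssd               # redirect client I/O through the tier
ceph osd pool set test-ssd hit_set_type bloom             # track object hits for promotion/eviction
ceph osd pool set test-ssd target_max_bytes 200000000000  # start flushing/evicting around ~200 GB
```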

rados bench -p test-hdd 600 write --no-cleanup

 Total time run:         600.644228
Total writes made:      38133
Write size:             4194304
Bandwidth (MB/sec):     253.947

Stddev Bandwidth:       109.66
Max bandwidth (MB/sec): 552
Min bandwidth (MB/sec): 0
Average Latency:        0.251998
Stddev Latency:         0.266661
Max latency:            3.60217
Min latency:            0.0259546

rados bench -p test-hdd 600 seq

 Total time run:        93.425101
Total reads made:     38133
Read size:            4194304
Bandwidth (MB/sec):    1632.666

Average Latency:       0.0391905
Max latency:           0.484948
Min latency:           0.00534133

rados bench -p test-hdd 600 rand

 Total time run:        600.058321
Total reads made:     255727
Read size:            4194304
Bandwidth (MB/sec):    1704.681 

Average Latency:       0.0375399
Max latency:           2.65156
Min latency:           0.00510764
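
Whether the SSD tier is actually absorbing the benchmark I/O can be checked from per-pool usage and object counts, for example:

```sh
ceph df                      # per-pool usage; the cache pool fills up while the write test runs
rados df                     # per-pool object and operation counters
ceph osd dump | grep pool    # shows tier_of / cache_mode / read_tier / write_tier for each pool
```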

# Linux VM benchmark

Before each command is executed, caches are dropped with `echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo sync`.
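
The "without" and "with" runs below differ in whether the librbd client-side cache is enabled for the VM's disk. The gist does not record how it was toggled; the usual ways to enable it are sketched here (values, VM ID and volume name are placeholders, not the recorded configuration):

```sh
# Hypothetical: enable the librbd cache for all RBD clients on the hypervisor.
cat >> /etc/ceph/ceph.conf <<'EOF'
[client]
rbd cache = true
rbd cache writethrough until flush = true
EOF

# Or per Proxmox VM disk (VM ID, storage and volume name are placeholders):
qm set 100 -virtio0 ceph-ssd:vm-100-disk-1,cache=writeback
```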

### Without RBD cache on client

# dd if=/dev/zero of=./test bs=1k count=4000000
4000000+0 records in
4000000+0 records out
4096000000 bytes (4.1 GB) copied, 14.7433 s, 278 MB/s

# dd if=./test of=/dev/null bs=1k count=4000000
4000000+0 records in
4000000+0 records out
4096000000 bytes (4.1 GB) copied, 3.69977 s, 1.1 GB/s

# dd if=/dev/zero of=./test bs=4k count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 13.5798 s, 302 MB/s

# dd if=./test of=/dev/null bs=4k count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 3.32988 s, 1.2 GB/s

# dd if=/dev/zero of=./test bs=8M count=512
512+0 records in
512+0 records out
4294967296 bytes (4.3 GB) copied, 20.8665 s, 206 MB/s

# dd if=./test of=/dev/null bs=8M count=128
128+0 records in
128+0 records out
1073741824 bytes (1.1 GB) copied, 0.855307 s, 1.3 GB/s

### With RBD cache on client

# dd if=/dev/zero of=./test bs=1k count=4000000
4000000+0 records in
4000000+0 records out
4096000000 bytes (4.1 GB) copied, 11.9787 s, 342 MB/s

# dd if=./test of=/dev/null bs=1k count=4000000
4000000+0 records in
4000000+0 records out
4096000000 bytes (4.1 GB) copied, 3.60961 s, 1.1 GB/s

# dd if=/dev/zero of=./test bs=4k count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 16.6614 s, 246 MB/s

# dd if=./test of=/dev/null bs=4k count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 3.26184 s, 1.3 GB/s

# dd if=/dev/zero of=./test bs=8M count=512
512+0 records in
512+0 records out
4294967296 bytes (4.3 GB) copied, 14.5202 s, 296 MB/s

# dd if=./test of=/dev/null bs=8M count=128
128+0 records in
128+0 records out
1073741824 bytes (1.1 GB) copied, 0.83268 s, 1.3 GB/s
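
All of the dd runs above go through the guest page cache, so the write figures partly reflect buffered throughput rather than what reaches the RBD image within the timed interval. For reference, a direct-I/O variant of the large-block test that bypasses the guest cache would look like this (not part of the original benchmark):

```sh
# Hypothetical direct-I/O variant of the 8M sequential test; bypasses the guest page cache.
dd if=/dev/zero of=./test bs=8M count=512 oflag=direct
dd if=./test of=/dev/null bs=8M count=512 iflag=direct
```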

# Windows VM benchmark

### Without RBD cache on client

-----------------------------------------------------------------------
CrystalDiskMark 4.0.3 x64 (C) 2007-2015 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

   Sequential Read (Q= 32,T= 1) :  1143.550 MB/s
  Sequential Write (Q= 32,T= 1) :   122.437 MB/s
  Random Read 4KiB (Q= 32,T= 1) :    70.663 MB/s [ 17251.7 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :    13.511 MB/s [  3298.6 IOPS]
         Sequential Read (T= 1) :   640.929 MB/s
        Sequential Write (T= 1) :    95.850 MB/s
   Random Read 4KiB (Q= 1,T= 1) :     8.763 MB/s [  2139.4 IOPS]
  Random Write 4KiB (Q= 1,T= 1) :     1.625 MB/s [   396.7 IOPS]

  Test : 1024 MiB [C: 94.0% (75.1/79.9 GiB)] (x3)
  Date : 2015/07/06 14:14:05
    OS : Windows Server 2008 R2 Server Standard (full installation) SP1 [6.1 Build 7601] (x64)

### With RBD cache on client

-----------------------------------------------------------------------
CrystalDiskMark 4.0.3 x64 (C) 2007-2015 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

   Sequential Read (Q= 32,T= 1) :   778.601 MB/s
  Sequential Write (Q= 32,T= 1) :   140.752 MB/s
  Random Read 4KiB (Q= 32,T= 1) :    37.714 MB/s [  9207.5 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :    11.318 MB/s [  2763.2 IOPS]
         Sequential Read (T= 1) :   633.984 MB/s
        Sequential Write (T= 1) :    74.041 MB/s
   Random Read 4KiB (Q= 1,T= 1) :     8.583 MB/s [  2095.5 IOPS]
  Random Write 4KiB (Q= 1,T= 1) :     1.330 MB/s [   324.7 IOPS]

  Test : 1024 MiB [C: 94.0% (75.1/79.9 GiB)] (x3)
  Date : 2015/07/06 14:32:40
    OS : Windows Server 2008 R2 Server Standard (full installation) SP1 [6.1 Build 7601] (x64)
## Comments

@kharlashkin: Did I understand correctly that, for the Windows VM, the SSD cache worsens performance? Or did an error creep in?