@kemingy
Last active April 26, 2024

Test disk speed

  • write
# sequential write
sudo fio --name=write_throughput --directory=. --numjobs=4 \
      --size=2G --time_based --runtime=5m --ramp_time=2s --ioengine=libaio \
      --direct=1 --verify=0 --bs=1M --iodepth=64 --rw=write \
      --group_reporting=1 --iodepth_batch_submit=64 \
      --iodepth_batch_complete_max=64
      
# random write
sudo fio --name=write_iops --directory=. --size=2G \
      --time_based --runtime=5m --ramp_time=2s --ioengine=libaio --direct=1 \
      --verify=0 --bs=4K --iodepth=256 --rw=randwrite --group_reporting=1  \
      --iodepth_batch_submit=256  --iodepth_batch_complete_max=256
  • read
# sequential read
sudo fio --name=read_throughput --directory=. --numjobs=4 \
      --size=2G --time_based --runtime=5m --ramp_time=2s --ioengine=libaio \
      --direct=1 --verify=0 --bs=1M --iodepth=64 --rw=read \
      --group_reporting=1 \
      --iodepth_batch_submit=64 --iodepth_batch_complete_max=64
      
# random read
sudo fio --name=read_iops --directory=. --size=2G \
      --time_based --runtime=5m --ramp_time=2s --ioengine=libaio --direct=1 \
      --verify=0 --bs=4K --iodepth=256 --rw=randread --group_reporting=1 \
      --iodepth_batch_submit=256  --iodepth_batch_complete_max=256
  • write
dd if=/dev/zero of=/tmp/output bs=64k count=100k conv=fdatasync
  • read
dd if=/tmp/output of=/dev/null bs=64k
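
Note that this dd read is likely served from the page cache, which explains GB/s-level read numbers on disks rated far lower. A sketch for a cold-cache measurement (the drop_caches write needs root; iflag=direct may not be supported on every filesystem):

```shell
# flush dirty pages and drop the page cache before reading
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=/tmp/output of=/dev/null bs=64k

# alternatively, bypass the page cache entirely with O_DIRECT
dd if=/tmp/output of=/dev/null bs=64k iflag=direct
```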

Check storage blocks

lsblk
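
lsblk can also report the columns relevant here directly; a sketch (ROTA is 0 for SSDs, 1 for spinning disks):

```shell
# name, size, device type, filesystem, mountpoint, and rotational flag
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT,ROTA
```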

LVM

# scan block devices
sudo lvmdiskscan

# create physical volumes
sudo pvcreate /dev/sda /dev/sdb

# list physical volumes
sudo pvs

# create a volume group
sudo vgcreate lvms /dev/sda /dev/sdb

# list volume groups
sudo vgs

# create a raid0-like striped volume (2 stripes across both PVs)
sudo lvcreate --type striped -i 2 -L 10G -n lvmcache lvms

# format
sudo mkfs.ext4 /dev/lvms/lvmcache

# mount
sudo mkdir -p /mnt/lvmcache
sudo mount /dev/lvms/lvmcache /mnt/lvmcache
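
To sanity-check the striped layout and the mount (using the lvms/lvmcache names from above), something like this should work:

```shell
# show the LV layout, stripe count, and stripe size
sudo lvs -o +lv_layout,stripes,stripe_size lvms
# confirm the filesystem is mounted with the expected size
df -h /mnt/lvmcache
```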

LVM cache

# create the cache metadata LV on the fast device
sudo lvcreate -n meta -L 122m lvms /dev/nvme1n1
# create the cache data LV on the fast device
sudo lvcreate -n cache -L 100g lvms /dev/nvme1n1
# create the slow LV on the backing device
sudo lvcreate -n ebs -l 100%FREE lvms /dev/nvme2n1

# combine the data and metadata LVs into a cache pool
sudo lvconvert --type cache-pool --poolmetadata lvms/meta lvms/cache
# attach the cache pool to the slow LV
sudo lvconvert --type cache --cachepool lvms/cache lvms/ebs

# switch the cache mode from writethrough (default) to writeback
sudo lvchange --cachemode writeback lvms/ebs
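
To inspect the cache state after conversion (the report field names assume a reasonably recent lvm2), a sketch:

```shell
# report cache mode, policy, and hit/miss counters for the cached LV
sudo lvs -o +cache_mode,cache_policy,cache_read_hits,cache_read_misses lvms/ebs
# to detach the cache later (flushes dirty blocks back to the slow LV first):
# sudo lvconvert --uncache lvms/ebs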

kemingy commented Apr 23, 2024

test with dd bs=64k

  • AWS EBS gp3 (IOPS 3000)
read:  7.8 GB/s
write: 131 MB/s
  • AWS EBS with LVM striped (2xio2 IOPS 5000)
read:  7.8 GB/s
write: 595 MB/s
  • AWS EBS with LVM linear (2xio2 IOPS 5000)
read:  7.7 GB/s
write: 594 MB/s


kemingy commented Apr 24, 2024

test with fio

  • AWS EBS gp3 (IOPS 3000)
sequential write: 132MB/s
random write:     12.3MB/s
sequential read:  132MB/s
random read:      12.3MB/s
  • AWS EBS with LVM striped (2xio2 IOPS 5000)
sequential write: 594MB/s
random write:     41.0MB/s
sequential read:  594MB/s
random read:      41.0MB/s
  • AWS EBS with LVM linear (2xio2 IOPS 5000)
sequential write: 595MB/s
random write:     22.3MB/s
sequential read:  595MB/s
random read:      22.7MB/s


kemingy commented Apr 24, 2024

Using AWS c5d.2xlarge local SSD as LV cache for EBS (io2 IOPS 3000).

  • write to EBS with LV cache

The write speed is limited by the local SSD IOPS.

   bw (  KiB/s): min=135209, max=495616, per=100.00%, avg=375863.88, stdev=11163.28, samples=1071
   iops        : min=  132, max=  484, avg=366.97, stdev=10.91, samples=1071
  lat (usec)   : 10=10.71%, 20=19.55%, 50=0.26%, 250=0.03%
  lat (msec)   : 2=0.07%, 10=1.06%, 50=2.24%, 100=0.97%, 250=4.72%
  lat (msec)   : 500=10.73%, 750=12.19%, 1000=8.67%, 2000=28.94%, >=2000=0.11%

Run status group 0 (all jobs):
  WRITE: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=48.1GiB (51.6GB), run=300826-300826msec
  • write to EBS without LV cache
   bw (  KiB/s): min=430297, max=1445888, per=100.00%, avg=690678.37, stdev=64711.48, samples=2017
   iops        : min=  420, max= 1412, avg=674.38, stdev=63.19, samples=2017
  lat (usec)   : 4=0.37%, 10=32.14%, 20=13.15%, 50=0.19%, 500=0.03%
  lat (usec)   : 750=0.02%
  lat (msec)   : 2=0.56%, 4=1.18%, 10=2.54%, 20=2.10%, 50=8.16%
  lat (msec)   : 100=8.80%, 250=16.79%, 500=10.11%, 750=2.64%, 1000=0.86%
  lat (msec)   : 2000=0.45%

Run status group 0 (all jobs):
  WRITE: bw=567MiB/s (595MB/s), 567MiB/s-567MiB/s (595MB/s-595MB/s), io=166GiB (179GB), run=300196-300196msec


kemingy commented Apr 25, 2024

GCP c3-standard-4-lssd

  • using the local SSD
sequential write: 406MB/s
sequential read:  737MB/s
random write: 410MB/s
random read:  737MB/s
  • default balanced persistent disk (physical block size is 512 bytes, per cat /sys/block/<disk>/queue/physical_block_size)
sequential write: 176MB/s
sequential read:  176MB/s
random write: 14.7MB/s
random read:  14.7MB/s
  • write to the persistent disk with local SSD cache (writeback)
sequential write: 411MB/s
sequential read:  634MB/s
random write: 32.5MB/s
random read:  539MB/s
