fio config for dbcore benchmarking by willholley (gist 1b2e7494f26214d49436ec7448884cc7, last active March 8, 2023), based on https://cloud.google.com/compute/docs/disks/benchmarking-pd-performance
[global]
direct=1
group_reporting=1
ioengine=libaio
numjobs=16
ramp_time=2s
runtime=60s
size=10G
time_based=1
verify=0
directory=/srv/fio
startdelay=300

[write_throughput]
bs=1M
iodepth=64
rw=write
stonewall

[write_iops]
bs=4K
iodepth=256
rw=randwrite
stonewall

[read_throughput]
bs=1M
iodepth=64
rw=read
stonewall

[read_iops]
bs=4K
iodepth=256
rw=randread
stonewall
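To run the benchmark, save the config above to a file and point fio at it. The filename `dbcore.fio` is an assumption for illustration; `--output-format=json` and `--output` are standard fio flags for capturing machine-readable results.

```shell
# Save the config above as dbcore.fio (assumed name), then run it.
# The "stonewall" option in each job section makes the four workloads
# run sequentially rather than concurrently, so each gets the device
# to itself. Results are written as JSON for later analysis.
fio dbcore.fio --output-format=json --output=dbcore-results.json
```

Note that `directory=/srv/fio` must exist and be writable, and each of the 16 jobs lays out a 10G file, so the target filesystem needs substantial free space.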
The idea is that to saturate a device's IOPS limit you issue many small-block requests (4K at high queue depth), whereas to saturate its bandwidth you issue fewer, larger requests (1M blocks), since each large request moves far more data per I/O.
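To make that tradeoff concrete, here is a quick back-of-envelope calculation; the IOPS figures are illustrative assumptions, not measurements from this gist:

```python
def throughput_mb_s(iops, block_size_kb):
    """Bandwidth achieved = IOPS x block size (KiB -> MiB/s)."""
    return iops * block_size_kb / 1024

# Small blocks: even a high IOPS rate yields modest bandwidth.
print(throughput_mb_s(15000, 4))    # 4K blocks at 15k IOPS -> ~58.6 MiB/s

# Large blocks: far fewer I/Os saturate the same link.
print(throughput_mb_s(400, 1024))   # 1M blocks at 400 IOPS -> 400.0 MiB/s
```

This is why the IOPS jobs use `bs=4K` with `iodepth=256` while the throughput jobs use `bs=1M` with `iodepth=64`: each pairing targets a different bottleneck.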