# Shared settings: direct (unbuffered) I/O, 16 concurrent jobs per workload,
# 60-second timed runs after a 2-second ramp, 10G per job, files under /srv/fio.
[global]
direct=1
group_reporting=1
ioengine=libaio
numjobs=16
ramp_time=2s
runtime=60s
size=10G
time_based=1
verify=0
directory=/srv/fio
startdelay=300
# Sequential 1M writes to measure write bandwidth. stonewall makes each
# workload wait for the previous one to finish, so the sections run one at a time.
[write_throughput]
bs=1M
iodepth=64
rw=write
stonewall
# Random 4K writes to measure write IOPS.
[write_iops]
bs=4K
iodepth=256
rw=randwrite
stonewall
# Sequential 1M reads to measure read bandwidth.
[read_throughput]
bs=1M
iodepth=64
rw=read
stonewall
# Random 4K reads to measure read IOPS.
[read_iops]
bs=4K
iodepth=256
rw=randread
stonewall
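
To run the whole suite and pull out the headline numbers, fio can emit JSON that is easy to post-process. A minimal sketch, assuming the job file above is saved as disk-benchmark.fio (a hypothetical name), fio is on PATH, and /srv/fio exists and is writable; the field names follow recent fio JSON output and may differ slightly between versions:

```python
#!/usr/bin/env python3
"""Run the fio job file and print a one-line summary per workload."""
import json
import subprocess

result = subprocess.run(
    ["fio", "--output-format=json", "disk-benchmark.fio"],
    check=True, capture_output=True, text=True,
)
report = json.loads(result.stdout)

# stonewall starts a new reporting group per section, and group_reporting
# collapses the 16 numjobs of each section into a single aggregated entry.
for job in report["jobs"]:
    name = job["jobname"]
    stats = job["write"] if name.startswith("write") else job["read"]
    bw_mib = stats["bw"] / 1024  # fio reports "bw" in KiB/s
    print(f"{name}: {stats['iops']:.0f} IOPS, {bw_mib:.1f} MiB/s")
```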
@willholley (Author):

The idea is that for latency/IOPS you want to saturate the device's IOPS limit, so you issue lots of small-block requests; for throughput you want fewer, larger requests, so you are more likely to saturate the bandwidth.
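
The trade-off falls out of the arithmetic: bandwidth is roughly IOPS multiplied by block size. A rough illustration (the 400 MiB/s figure below is a made-up example, not a measurement from this job file):

```python
def iops_needed(bandwidth_mib_s: float, block_size_kib: float) -> float:
    """I/Os per second required to sustain a given bandwidth."""
    return bandwidth_mib_s * 1024 / block_size_kib

# Hypothetical device that tops out at 400 MiB/s:
print(iops_needed(400, 4))     # 102400.0 -- 4K blocks hit the IOPS limit first
print(iops_needed(400, 1024))  # 400.0    -- 1M blocks saturate bandwidth at low IOPS
```

So the 4K jobs run into the device's IOPS ceiling long before its bandwidth, while the 1M jobs do the opposite.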
