fio test
2 problems:
1. it ignores the --runtime setting
2. it doesn't output actual measurements (other than the issued total)
[~/fio-2.0.13]# time ./fio --name=randread --rw=randread --random_distribution=pareto:0.5 --size=1g --bs=8k --runtime=10 --time_based --filename=fio.tmp
randread: (g=0): rw=randread, bs=8K-8K/8K-8K/8K-8K, ioengine=sync, iodepth=1
fio-2.0.13
Starting 1 process
^C
fio: terminating on signal 2
randread: (groupid=0, jobs=1): err= 0: pid=18537: Sat Feb 2 02:50:03 2013
lat (usec) : 2=100.00%
cpu : usr=0.00%, sys=0.00%, ctx=4776, majf=0, minf=0
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=2274693/w=0/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
real 0m28.465s
user 0m5.731s
sys 0m20.696s
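For context on the workload being run: --random_distribution=pareto:0.5 asks fio to skew random reads so a small set of blocks receives most of the I/O. The sketch below is only an illustration of that kind of Pareto-skewed access pattern; the alpha value and the block mapping are made up here and are not fio's actual algorithm (fio's pareto input parameter is a different quantity).

```python
import random
from collections import Counter

random.seed(42)
nblocks = (1 << 30) // 8192        # 1 GiB file in 8 KiB blocks = 131072

def pareto_block(alpha=1.16):
    # Map a heavy-tailed Pareto sample to a block index: low indices are
    # hit far more often than high ones (illustrative only, not fio's code).
    return min(int(random.paretovariate(alpha)) - 1, nblocks - 1)

hits = Counter(pareto_block() for _ in range(100_000))
hottest_block, top_hits = hits.most_common(1)[0]
print(f"block {hottest_block} alone got {100 * top_hits / 100_000:.0f}% of I/Os")
```

With this (arbitrary) alpha, a single block absorbs over half the I/Os, which is the kind of hot/cold skew the pareto distribution is meant to model for cache testing.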
@joelandman

Hmm ... same command line (apart from the time prefix) on Linux, and lots more information. Yours was terminating on signal 2; it looks like you hit Ctrl-C before it completed?

[root@jr5-lab test]# ./fio --name=randread --rw=randread --random_distribution=pareto:0.5 --size=1g --bs=8k --runtime=10 --time_based --filename=fio.tmp
randread: (g=0): rw=randread, bs=8K-8K/8K-8K/8K-8K, ioengine=sync, iodepth=1
fio-2.0.13-87-gf67d6
Starting 1 process
randread: Laying out IO file(s) (1 file(s) / 1024MB)
Jobs: 1 (f=1): [r] [27.3% done] [49296K/0K/0K /s] [6162 /0 /0 iops] [eta 00m:08s]
Jobs: 1 (f=1): [r] [36.4% done] [40576K/0K/0K /s] [5072 /0 /0 iops] [eta 00m:07s]
Jobs: 1 (f=1): [r] [45.5% done] [41504K/0K/0K /s] [5188 /0 /0 iops] [eta 00m:06s]
Jobs: 1 (f=1): [r] [54.5% done] [41304K/0K/0K /s] [5163 /0 /0 iops] [eta 00m:05s]
Jobs: 1 (f=1): [r] [100.0% done] [40840K/0K/0K /s] [5105 /0 /0 iops] [eta 00m:00s]
randread: (groupid=0, jobs=1): err= 0: pid=13878: Fri Feb 1 23:08:44 2013
read : io=428600KB, bw=42856KB/s, iops=5356 , runt= 10001msec
clat (usec): min=65 , max=1199 , avg=180.44, stdev=35.24
lat (usec): min=65 , max=1200 , avg=180.79, stdev=35.25
clat percentiles (usec):
| 1.00th=[ 74], 5.00th=[ 94], 10.00th=[ 139], 20.00th=[ 151],
| 30.00th=[ 169], 40.00th=[ 191], 50.00th=[ 193], 60.00th=[ 195],
| 70.00th=[ 197], 80.00th=[ 205], 90.00th=[ 213], 95.00th=[ 217],
| 99.00th=[ 237], 99.50th=[ 262], 99.90th=[ 302], 99.95th=[ 318],
| 99.99th=[ 342]
bw (KB/s) : min=40368, max=57152, per=100.00%, avg=43016.42, stdev=4488.30
lat (usec) : 100=5.06%, 250=94.28%, 500=0.66%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%
cpu : usr=3.70%, sys=23.40%, ctx=53586, majf=0, minf=25
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=53575/w=0/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
READ: io=428600KB, aggrb=42855KB/s, minb=42855KB/s, maxb=42855KB/s, mint=10001msec, maxt=10001msec
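The summary line above is internally consistent, which is worth checking given the missing measurements in the first run. A quick arithmetic check using the figures fio printed (io=428600KB, runt=10001msec, 8 KiB block size):

```python
# Cross-check the fio summary line:
#   read : io=428600KB, bw=42856KB/s, iops=5356, runt=10001msec
io_kb, runt_ms, bs_kb = 428600, 10001, 8

bw_kbs = io_kb * 1000 / runt_ms    # KB transferred per second
iops = bw_kbs / bs_kb              # one I/O is an 8 KiB block

print(f"bw={bw_kbs:.0f}KB/s iops={iops:.0f}")  # → bw=42856KB/s iops=5357
```

fio reports iops=5356, consistent with truncating rather than rounding 5356.96.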

I am trying to reproduce the behavior you saw: I started the job and hit Ctrl-C to see if it simply skips printing ...

[root@jr5-lab test]# ./fio --name=randread --rw=randread --random_distribution=pareto:0.5 --size=1g --bs=8k --runtime=10 --time_based --filename=fio.tmp
randread: (g=0): rw=randread, bs=8K-8K/8K-8K/8K-8K, ioengine=sync, iodepth=1
fio-2.0.13-87-gf67d6
Starting 1 process
randread: Laying out IO file(s) (1 file(s) / 1024MB)
^C
fio: terminating on signal 2

Run status group 0 (all jobs):

Yeah, is it possible that a Ctrl-C or kill signal somehow hit the app?

@brendangregg (Author)

The latest version of fio (git://git.kernel.dk/fio.git) fixed many problems; --clocksource=clock_gettime fixed the runtime issue.

Note that this does mean a high rate of gettime calls:

# dtrace -n 'syscall:::entry /execname == "fio"/ { @[probefunc] = count(); }'
dtrace: description 'syscall:::entry ' matched 234 probes
^C

  write                                                             2
  ioctl                                                             8
  waitsys                                                         200
  nanosleep                                                       208
  llseek                                                        85056
  read                                                          85057
  clock_gettime                                                340261
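Dividing the two largest counts suggests fio makes roughly four clock_gettime calls for every read(); a quick check using the figures copied from the dtrace output above:

```python
# From the dtrace counts above: clock_gettime calls per read()
gettime_calls = 340_261
reads = 85_057

ratio = gettime_calls / reads
print(f"{ratio:.1f} clock_gettime calls per read")  # → 4.0
```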

That said, the CPU cost is only about 10% for this particular workload (8k random read, cached), so it's not as bad as the syscall count would suggest:

# dtrace -n 'syscall:::entry /execname == "fio"/ { self->ts = vtimestamp; } syscall:::return /self->ts/ { @[probefunc] = sum(vtimestamp - self->ts); self->ts = 0; }'
dtrace: description 'syscall:::entry ' matched 468 probes
^C

  write                                                        155539
  ioctl                                                        734339
  waitsys                                                     1453280
  nanosleep                                                   7008990
  llseek                                                     98417743
  clock_gettime                                             238765014
  read                                                     1842462503

It's spending most of its CPU time in read(), which is good. The numbers are per-syscall sums of on-CPU nanoseconds during the syscalls. YMMV. I should also sample this (in addition to tracing) to double-check.
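The "about 10%" figure can be checked directly from the nanosecond sums above (clock_gettime's share of total syscall CPU time):

```python
# Per-syscall on-CPU nanosecond sums copied from the dtrace output above
ns = {
    "write": 155_539, "ioctl": 734_339, "waitsys": 1_453_280,
    "nanosleep": 7_008_990, "llseek": 98_417_743,
    "clock_gettime": 238_765_014, "read": 1_842_462_503,
}

total = sum(ns.values())
share = ns["clock_gettime"] / total
print(f"clock_gettime: {100 * share:.1f}% of syscall CPU time")  # → 10.9%
```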
