Jens,
I ran a test with vsync+randrw. I expected the read and write percentages to both be
about 50%, but vmstat showed that writes accounted for more than 99% of the disk I/O
(compare the bi and bo columns below).
---------The first lines of the job file------------
[global]
direct=0
ioengine=vsync
iodepth=256
iodepth_batch=32
size=2G
bs=1k-4k
bs_unaligned
numjobs=2
loops=5
runtime=1200
group_reporting
directory=/mnt/stp/fiodata
[job_sdb1_sub0]
startdelay=0
rw=randrw
filename=data0/f1:data0/f2
[job_sdb1_sub1]
startdelay=0
rw=randrw
filename=data0/f1:data0/f2
...
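In case it helps narrow this down, below is a rough approximation (not fio itself) of
what I understand the vsync engine to be doing here: buffered (direct=0), positioned,
vectored reads and writes at random unaligned offsets with a 1k-4k block size and a
50/50 read/write split. The path is derived from the job file; the I/O count is arbitrary.
---------approximation of the workload (Python sketch)------------
#!/usr/bin/env python3
# Rough approximation of the job above: buffered positioned vectored I/O
# at random unaligned offsets, bs=1k-4k, 50/50 read/write mix.
import os
import random

PATH = "/mnt/stp/fiodata/data0/f1"   # directory= + filename= from the job
SIZE = 2 * 1024 * 1024 * 1024        # size=2G
IOS  = 100000                        # arbitrary number of I/Os

fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o644)
os.ftruncate(fd, SIZE)

reads = writes = 0
for _ in range(IOS):
    bs  = random.randint(1024, 4096)      # bs=1k-4k, bs_unaligned
    off = random.randint(0, SIZE - bs)
    if random.random() < 0.5:             # rw=randrw -> expect ~50/50
        os.preadv(fd, [bytearray(bs)], off)
        reads += 1
    else:
        os.pwritev(fd, [b"\0" * bs], off)
        writes += 1

os.close(fd)
print(f"submitted {reads} reads, {writes} writes")
-------------------------------------------------------------------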
---------------vmstat 1--------------
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 31 0 14453336 38924 1628636 0 0 48 15132 3760 1565 0 1 84 15 0
0 33 0 14453012 38928 1629164 0 0 44 13824 3429 740 0 0 86 13 0
2 31 0 14458504 38944 1624116 0 0 104 14564 3558 990 0 0 87 12 0
0 30 0 14452204 38948 1630532 0 0 24 14864 3845 1285 0 0 87 12 0
0 32 0 14449304 38956 1633472 0 0 92 14432 3546 844 0 1 87 13 0
0 30 0 14451312 38956 1631444 0 0 84 15808 3813 1294 0 1 86 13 0
0 33 0 14441184 38960 1642032 0 0 816 14724 3800 2014 0 1 85 14 0
0 35 0 14440068 38972 1643120 0 0 76 16096 3797 2064 0 1 84 15 0
0 32 0 14424916 38980 1657896 0 0 96 15352 3747 1579 0 1 87 12 0
0 32 0 14433916 38988 1648928 0 0 148 14316 3374 1495 0 1 82 16 0
0 32 0 14429288 38992 1654356 0 0 116 16596 3940 3061 1 1 83 15 0
0 32 0 14415516 39000 1667372 0 0 840 12996 3546 1822 0 1 86 13 0
0 30 0 14414820 39012 1668348 0 0 56 14408 3562 1084 0 1 86 13 0
0 36 0 14408020 39012 1675200 0 0 92 14744 3645 1116 0 0 87 13 0
0 29 0 14405396 39012 1677760 0 0 40 13632 3526 1069 0 0 87 12 0
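The >99% figure is just the bo column compared with bi+bo over the samples above.
A quick sketch of that calculation, assuming the output was saved to vmstat.log:
---------read/write share from vmstat (Python sketch)------------
#!/usr/bin/env python3
# Sum the bi (blocks in) and bo (blocks out) columns from the vmstat
# samples above and print the read/write share of the disk traffic.
bi = bo = 0
with open("vmstat.log") as f:
    for line in f:
        fields = line.split()
        if not fields or not fields[0].isdigit():
            continue                      # skip the two header lines
        bi += int(fields[8])              # bi: blocks read from disk
        bo += int(fields[9])              # bo: blocks written to disk
total = bi + bo
print(f"reads: {100.0 * bi / total:.1f}%  writes: {100.0 * bo / total:.1f}%")
-------------------------------------------------------------------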
-yanmin