Jens Axboe wrote:
> [...]
> Are you sure you used the new base?
>
>
I thought I was, and I even double-checked... and yet I wasn't on the new
base...
Shame on me...
As you can imagine... your patch solved this issue.
Thanks for this quick patch and sorry for the false positive report ;)
So now the performance is falling off slowly (sounds good), but in my
case the job stopped at 90%.
The remaining 10% represents 15G, meaning there is enough room left for
a few more passes, since we skip 2G each turn.
Any idea why it stops at this size?
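To make sure I understand the access pattern, here is a little back-of-envelope
sketch I did (not fio code; the 150G disk size is inferred from "10% = 15G", and
zonesize=256m is an assumption taken from the example job file, so correct it if
my copy differs):

```python
# Toy model of a zoned sequential read: read one zone, then skip
# ahead 2 GiB, until the next zone would run past the end of the disk.
MIB = 1024 ** 2
GIB = 1024 ** 3

disk_size = 150 * GIB      # 10% remaining = 15G suggests a ~150G disk
zone_size = 256 * MIB      # assumed zonesize=256m -- check the job file
zone_skip = 2 * GIB        # zoneskip=2g, as described above

offset = 0
zones = 0
total_read = 0
while offset + zone_size <= disk_size:
    total_read += zone_size
    zones += 1
    offset += zone_size + zone_skip

print(f"zones read: {zones}, data read: {total_read // MIB} MiB")
# -> zones read: 67, data read: 17152 MiB
```

Interestingly, 67 zones x 256 MiB gives exactly the io=17152MiB shown in the
output below, so with these assumed numbers the run did cover every zone the
skip pattern can reach.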
I also wonder how "per" can reach 100.12%: as you explained, it shows the
share of the disk's bandwidth each thread gets. Here I'm running a single
thread on a single disk, so I shouldn't expect more than 100%, right?
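My guess at how this can happen (a toy sketch, not fio's actual code, and all
the interval numbers are made up): the avg= bandwidth is the mean of periodic
samples, while aggrb is total io divided by total runtime, and those two
averages need not agree when throughput varies between sampling windows.

```python
# Per-interval bandwidth samples vs. the aggregate rate.
# (KiB transferred, seconds) per sampling window -- hypothetical values.
intervals = [
    (70000, 1.0),
    (50000, 1.0),
    (30000, 1.2),  # a slow window that also runs a bit long
]

# Mean of per-window rates: what a sampled avg= would report.
sample_bws = [kib / sec for kib, sec in intervals]
avg_bw = sum(sample_bws) / len(sample_bws)

# Total io / total time: what an aggrb-style number would report.
total_kib = sum(kib for kib, _ in intervals)
total_sec = sum(sec for _, sec in intervals)
aggrb = total_kib / total_sec

print(f"per = {100.0 * avg_bw / aggrb:.2f}%")
# -> per = 103.11%
```

So a per= value a hair above 100% wouldn't necessarily mean a second thread is
stealing bandwidth, just that the two averages are computed differently.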
/me thinks you will detest him quickly :D
[root_at_max1 ~]# /home/build/rpm/BUILD/fio/fio disk-zone-profile
/dev/sdb: (g=0): rw=read, bs=64K-64K/64K-64K, ioengine=libaio, iodepth=2
Starting 1 process
Jobs: 1 (f=1): [R] [90.1% done] [ 35303/ 0 kb/s] [eta 00m:33s]
/dev/sdb: (groupid=0, jobs=1): err= 0: pid=3793
read : io=17152MiB, bw=59813KiB/s, iops=912, runt=300687msec
slat (usec): min= 15, max= 181, avg=17.14, stdev= 2.23
clat (usec): min= 1336, max=16256, avg=2171.23, stdev=752.47
bw (KiB/s) : min=35187, max=77332, per=100.12%, avg=59887.57, stdev=11685.53
cpu : usr=0.51%, sys=1.65%, ctx=274434
IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
issued r/w: total=274433/0, short=0/0
lat (msec): 2=55.76%, 4=41.76%, 10=2.44%, 20=0.04%
Run status group 0 (all jobs):
READ: io=17152MiB, aggrb=59813KiB/s, minb=59813KiB/s, maxb=59813KiB/s, mint=300687msec, maxt=300687msec
Disk stats (read/write):
sdb: ios=274433/0, merge=0/0, ticks=581267/0, in_queue=581267, util=100.00%
Received on Fri Jul 20 2007 - 13:57:43 CEST
This archive was generated by hypermail 2.2.0 : Fri Jul 20 2007 - 14:00:02 CEST