With the following job description:
[global]
bs=1k
direct=1
rw=read
ioengine=libaio
iodepth=2
zonesize=1k
zoneskip=1023k
write_bw_log
[/dev/cciss/c0d1]
write_iolog=foo2
The idea here is that I wanted to line up my zones to start at 1M
boundaries across the disk by reading 1k and then skipping the next 1023k.
In practice I don't get the alignment because of an extra initial I/O.
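To spell out the arithmetic: each zone should span

zonesize + zoneskip = 1k + 1023k = 1024k = 1 MiB

so with one 1k read per zone I'd expect reads at offsets 0, 1048576,
2097152, and so on.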
I get an iolog that looks like
# head foo2
fio version 2 iolog
/dev/cciss/c0d1 add
/dev/cciss/c0d1 open
/dev/cciss/c0d1 read 0 1024
/dev/cciss/c0d1 read 1024 1024
/dev/cciss/c0d1 read 1049600 1024
/dev/cciss/c0d1 read 2098176 1024
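Decoding the last two offsets shows where the misalignment comes from:

1049600 = 1048576 + 1024   (1 MiB plus one 1k block)
2098176 = 2097152 + 1024   (2 MiB plus one 1k block)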
There's a read in that log that I don't expect, namely the one starting
at byte 1024. Because that read is there, every zone after the first gets
offset by one block. I expected output like:
# head foo2
fio version 2 iolog
/dev/cciss/c0d1 add
/dev/cciss/c0d1 open
/dev/cciss/c0d1 read 0 1024
/dev/cciss/c0d1 read 1048576 1024
/dev/cciss/c0d1 read 2097152 1024
/dev/cciss/c0d1 read 3145728 1024
Here the zones are all aligned to 1M boundaries. I can get what I want
by specifying "offset=1023k", which effectively puts the I/O for the
first zone at the end of that zone. That isn't great, but it does give
me aligned zones.
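For reference, here is the workaround job in full (the only change from
the original is the added offset line; I'm assuming offset= can sit in
the [global] section):

[global]
bs=1k
direct=1
rw=read
ioengine=libaio
iodepth=2
# push the first read to the last 1k of zone 0
offset=1023k
zonesize=1k
zoneskip=1023k
write_bw_log
[/dev/cciss/c0d1]
write_iolog=foo2

With that change the log becomes: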
# head foo2
fio version 2 iolog
/dev/cciss/c0d1 add
/dev/cciss/c0d1 open
/dev/cciss/c0d1 read 1047552 1024
/dev/cciss/c0d1 read 1048576 1024
/dev/cciss/c0d1 read 2097152 1024
Is this the expected behavior? Am I just not getting the point of
zonesize/zoneskip, or is this a bug?
Thanks,
Ryan
BTW: I'm using fio version 1.17 from the DAG repository on RHEL AS 4u5
with kernel 2.6.9-55.ELsmp.