This has been seen using fio 1.16.18 on an x86_64 system running Mandriva
Corporate Server 4 (2006.0), with gcc version 4.0.1.
Here is a scenario I'd like to run with fio:
[global]
rw=read
size=16m
ioengine=sync
iodepth=1
direct=0
filename="testfile.fio"
[job1]
description="A sequential read @ 8k"
bs=8k
[job2]
description="A sequential read @ 128k"
bs=128k
When bs is defined inside a job section, fio shows me this error:
[root_at_max1 ~]# fio test.fio
job1: (g=0): rw=read, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1
job2: (g=0): rw=read, bs=128K-128K/128K-128K, ioengine=sync, iodepth=1
Starting 2 processes
*** glibc detected *** double free or corruption (out):
0x000000000052c8e0 ***
fio: pid=19306, got signal=6
job1: (groupid=0, jobs=1): err= 0: pid=19306
[...]
Then, if I instead set a single 4k value (still with direct=0), I see the following output:
[root_at_max1 ~]# fio test.fio
job1: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
job2: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
Starting 2 processes
fio: pid=19350, got signal=11
fio: pid=19351, got signal=11
Run status group 0 (all jobs):
Disk stats (read/write):
sda: ios=1/0, merge=0/0, ticks=7/0, in_queue=7, util=4.93%
and this in my /var/log/messages:
fio[19346]: segfault at 0000000000000048 rip 000000000040561d rsp
00007ffffff9d2d0 error 6
fio[19347]: segfault at 0000000000000048 rip 000000000040561d rsp
00007ffffff9d2d0 error 6
Note that setting bs=4k __and__ direct=1 makes things work perfectly
(but in this test I would rather not use O_DIRECT ;)).
So it sounds like a 4k bs combined with direct=0 triggers the trouble.
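For reference, the variant that works for me looks roughly like this (the same job file as above, only with bs and direct=1 moved into the global section; exact placement is my guess at the minimal change):

[global]
rw=read
size=16m
ioengine=sync
iodepth=1
direct=1
bs=4k
filename="testfile.fio"
[job1]
description="A sequential read @ 4k, O_DIRECT"

With that file, both the double free and the segfault go away here.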
Hope this helps,
Erwan,
Received on Fri Jul 20 2007 - 12:02:49 CEST