Re: [BUG] mmap and file init with 0 length

From: Zhang, Yanmin <yanmin_zhang@linux.intel.com>
Date: Wed, 26 Mar 2008 16:49:40 +0800

On Wed, 2008-03-26 at 09:20 +0100, Jens Axboe wrote:
> On Wed, Mar 26 2008, Zhang, Yanmin wrote:
> > fio is killed with a SIGBUS. Please see the job file below.
> >
> > Actually, if ioengine=mmap and rw=write/randwrite, fio just unlinks
> > the file and creates a new file whose length is 0. sys_mmap can
> > succeed, but later, when fio accesses the mmapped area, the kernel
> > sends fio a SIGBUS and kills it.
> >
> >
> > Could fio create a real file when ioengine=mmap and
> > rw=write/randwrite? For example, the job_file below could ask fio to
> > create file data0/f1 whose length is 4G.
> >
>
> I've committed a fix for this; the one-liner below should fix it for
> you.
>
> diff --git a/filesetup.c b/filesetup.c
> index e847276..bb43ee5 100644
> --- a/filesetup.c
> +++ b/filesetup.c
> @@ -30,7 +30,8 @@ static int extend_file(struct thread_data *td, struct fio_file *f)
>  	 * does that for operations involving reads, or for writes
>  	 * where overwrite is set
>  	 */
> -	if (td_read(td) || (td_write(td) && td->o.overwrite))
> +	if (td_read(td) || (td_write(td) && td->o.overwrite) ||
> +	    (td_write(td) && td->io_ops->flags & FIO_NOEXTEND))
>  		new_layout = 1;
>  	if (td_write(td) && !td->o.overwrite)
>  		unlink_file = 1;
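
For anyone else hitting this, here is a minimal standalone test showing the
failure mode (this is not fio code; the file name and the 4KB size are just
placeholders): mapping a 0-length file succeeds, but the first store into the
mapping raises SIGBUS. Extending the file first, which is what laying out the
file for mmap write jobs achieves, avoids the fault.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 4096;
	void *p;
	int fd;

	/* "testfile" is just a placeholder; O_TRUNC leaves it 0 bytes long */
	fd = open("testfile", O_RDWR | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Mapping one page of a 0-length file succeeds... */
	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * ...but touching the page delivers SIGBUS, because it lies beyond
	 * the end of the file. Calling ftruncate(fd, len) before the memset
	 * would avoid the fault.
	 */
	memset(p, 0, len);

	munmap(p, len);
	close(fd);
	return 0;
}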
The patch above does fix it. However, when I downloaded the latest git tarball of fio, I hit the error below.

[ymzhang@lkp-tt02-x8664 fio]$ fio /tmp/job_file
job_group0_sub0: (g=0): rw=randrw, bs=1K-1K/4K-4K, ioengine=vsync, iodepth=256
job_group0_sub0: (g=0): rw=randrw, bs=1K-1K/4K-4K, ioengine=vsync, iodepth=256
job_group0_sub1: (g=1): rw=randrw, bs=1K-1K/4K-4K, ioengine=vsync, iodepth=256
job_group0_sub1: (g=1): rw=randrw, bs=1K-1K/4K-4K, ioengine=vsync, iodepth=256
Starting 4 processes
fio: failed allocating random map. If running a large number of jobs, try the 'norandommap' option
fio: failed allocating random map. If running a large number of jobs, try the 'norandommap' option
fio: failed allocating random map. If running a large number of jobs, try the 'norandommap' option
fio: failed allocating random map. If running a large number of jobs, try the 'norandommap' option

I checked the HOWTO; norandommap can't guarantee that every block is written/read. Is there
another way to solve the above error?
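
My rough understanding (this is only a sketch, not fio's actual allocation
code, and the one-bit-per-block layout is an assumption; the sizes are taken
from the job above) is that the random map tracks which blocks have been
touched so random I/O can cover every block, and with small blocks over a
large file the map itself gets big:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Figures from the job above: 4G file, 1K random-read blocks */
	uint64_t file_size = 4ULL << 30;
	uint64_t block_size = 1024;
	uint64_t nr_blocks = file_size / block_size;

	/* Assume roughly one bit per block to track coverage */
	uint64_t map_bytes = (nr_blocks + 7) / 8;
	unsigned char *map;

	printf("%llu blocks -> ~%llu KiB of map per file\n",
	       (unsigned long long) nr_blocks,
	       (unsigned long long) (map_bytes >> 10));

	map = calloc(map_bytes, 1);
	if (!map) {
		/* This is the situation the fio error message reports */
		fprintf(stderr, "failed allocating random map\n");
		return 1;
	}

	/*
	 * With 'norandommap' no map is kept and offsets are drawn purely at
	 * random, so some blocks may be hit twice and others never -- the
	 * coverage guarantee mentioned in the HOWTO is lost.
	 */
	free(map);
	return 0;
}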

yanmin
Received on Wed Mar 26 2008 - 09:49:40 CET
