Improving ZFS performance for large directories

Kevin Day toasty at dragondata.com
Mon Feb 25 17:00:11 UTC 2013


On Feb 20, 2013, at 4:58 AM, Andriy Gapon <avg at FreeBSD.org> wrote:

> on 19/02/2013 22:10 Kevin Day said the following:
>> Timing an "ls" in large directories 20 times, the first is the slowest,
>> then all subsequent listings are roughly the same. There doesn't appear to be any
>> gain after 20 repetitions.
> 
> I think that the above could be related to the below
> 
>> 	vfs.zfs.arc_meta_limit                  16398159872
>> 	vfs.zfs.arc_meta_used                   16398120264
> 


Doing some more testing…

After a fresh reboot, without the SSD cache, an ls(1) in a large directory is pretty fast. After we've been running for an hour or so, the speed gets progressively worse. I can kill all other activity on the system and it's still bad. If I reboot, it's back to normal.
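For reference, a rough way to reproduce the kind of timing test described above. The scratch directory and file count here are made up for the sketch; on the real pool the interesting number is how much slower the first, cold-cache run is than the later ones:

```shell
# Sketch: populate a scratch directory with many entries, then list it
# repeatedly. Uses date +%s for portable coarse timing; on the real system
# the first run after reboot is the slow, cold-cache one.
dir=$(mktemp -d)
i=1
while [ "$i" -le 1000 ]; do
    : > "$dir/file$i"          # create an empty entry
    i=$((i + 1))
done
for run in 1 2 3; do
    start=$(date +%s)
    ls "$dir" > /dev/null
    echo "run $run: $(( $(date +%s) - start ))s"
done
echo "entries: $(ls "$dir" | wc -l | tr -d ' ')"
rm -rf "$dir"
```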

On an otherwise idle system, I watched gstat(8): during the ls(1) the drives sit at essentially 100% busy, reading far more data than I'd think necessary to read a directory. top(1) shows the "zfskern" kernel process burning a lot of CPU during that time too. Is there a possibility we're hitting a bug or sub-optimal access pattern once arc_meta_limit is reached? Something akin to: if metadata that was just read doesn't get kept in the ARC metadata cache, the same blocks have to be re-read many times just to iterate through the directory?
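For what it's worth, the two sysctls quoted earlier in the thread can be compared directly to see how close the metadata cache is to its ceiling. The numbers below are the ones from this thread; on a live box they'd come from sysctl(8) instead:

```shell
# Sketch: how full is the ARC metadata cache? The values are the
# vfs.zfs.arc_meta_* sysctls quoted above; on a running FreeBSD system
# they would be read with:
#   limit=$(sysctl -n vfs.zfs.arc_meta_limit)
#   used=$(sysctl -n vfs.zfs.arc_meta_used)
limit=16398159872
used=16398120264
pct=$(( used * 100 / limit ))
echo "arc_meta_used is at ${pct}% of arc_meta_limit"
```

With the values above this prints 99%, i.e. the metadata cache is pinned right at its limit, which fits the re-read theory.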

I've been hesitant to increase the ARC size because we've only got 64GB of memory here and I can't add any more. The processes running on the system need a fair chunk of RAM themselves, so I'm trying to figure out how we can either upgrade this motherboard to something newer or reduce our memory usage. I've got a feeling I'm going to need to do this, but since this is a non-commercial project it's kinda hard to spend that much money on it. :)
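If it turns out some of the 64GB can be spared for metadata after all, the relevant knobs can be raised at boot via loader.conf. A hypothetical fragment, purely illustrative; the values are made up and would have to be balanced against what the processes themselves need:

```
# /boot/loader.conf -- hypothetical values, not a recommendation
vfs.zfs.arc_max="48G"           # cap total ARC size
vfs.zfs.arc_meta_limit="24G"    # allow more metadata to stay cached in ARC
```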

-- Kevin


