how big can kern.maxvnodes get?
Chris Peiffer
bsdlists at cabstand.com
Thu Dec 30 20:27:57 UTC 2010
On Thu, Dec 30, 2010 at 09:37:37AM -0500, John Baldwin wrote:
> Chris Peiffer wrote:
> >I have a backend server running 8.2-PRERELEASE with lots of
> >independent files that randomly grow and then get truncated to
> >zero. (Think popserver.)
> >
> >Roughly 2.5 million inodes on each of 4 Intel SSD disks. 24 GB of RAM
> >in the system. I want to maximize the buffer cache in RAM.
> >
> >I doubled kern.maxvnodes to 942108; reads/second went down and
> >memory use went up (as I expected), but right now there's still about
> >15G of RAM marked as free.
> >
> >vfs.numvnodes crept up to 821704 and has hovered there. The file
> >sizes range up to 1 MB, but most are in the range 0-10k. Since the
> >server's operations are so simple, kern.openfiles hovers around 100-200.
> >
> >Obviously, all things being equal I'd like to give the filesystem
> >buffer cache access to that free RAM by allowing more vnodes to stay
> >cached.
> >
> >Can I increase kern.maxvnodes by another factor of 2? more? Are there
> >any known problems with stepping it up, besides general memory
> >exhaustion? With so much free RAM I'd like to turn the dial a little
> >bit but I wonder if there are other linked things I should watch out
> >for.
>
> You can increase it, but if numvnodes is less than maxvnodes then it
> won't help you as the system hasn't had to recycle any vnodes yet. It
> is already caching all the vnodes you have accessed in that case.
>
> If the files are frequently truncated then you may end up with a lot of
> free RAM simply because there isn't enough recently used data to cache.
> The VM system will cache everything that is accessed until either 1)
> the pages become invalid (e.g. due to an unlink or truncate) or 2) free
> memory runs low enough to trigger pagedaemon to harvest some inactive
> pages. If you have 15G of free RAM, then 2) isn't happening and your
> working set is less than your RAM.
>
Thanks John.
The system has around 2 million non-empty data files. I assumed that,
since numvnodes quickly jumped to 820k and then hovered there, there
must be some "min free" threshold below maxvnodes that an unloaded
system tries to maintain. But I will investigate my exact working set
before I tune maxvnodes up.
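As a rough first check before raising the limit, the two sysctls can be
compared directly. This is a minimal sketch; the values below are the
snapshot quoted in this thread, hard-coded for illustration — on a live
box you would substitute the output of `sysctl -n` instead:

```shell
# On the running system these would come from:
#   maxvnodes=$(sysctl -n kern.maxvnodes)
#   numvnodes=$(sysctl -n vfs.numvnodes)
# Hard-coded here with the values reported in this thread:
maxvnodes=942108
numvnodes=821704

# If numvnodes sits well below maxvnodes, the system has never been
# forced to recycle a vnode, so raising kern.maxvnodes buys nothing yet.
pct=$(( numvnodes * 100 / maxvnodes ))
echo "vnode table ${pct}% full"
```

If the table really were pinned at 100%, `sysctl kern.maxvnodes=<n>`
(or an entry in /etc/sysctl.conf) would raise the ceiling at runtime.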
More information about the freebsd-fs mailing list