Re: nullfs and ZFS issues

From: Doug Ambrisko <ambrisko_at_ambrisko.com>
Date: Thu, 21 Apr 2022 16:49:02 UTC
On Thu, Apr 21, 2022 at 03:44:02PM +0200, Alexander Leidinger wrote:
| Quoting Mateusz Guzik <mjguzik@gmail.com> (from Thu, 21 Apr 2022  
| 14:50:42 +0200):
| 
| > On 4/21/22, Alexander Leidinger <Alexander@leidinger.net> wrote:
| >> I tried nocache on a system with a lot of jails which use nullfs,
| >> which showed very slow behavior in the daily periodic runs (12h runs
| >> in the night after boot, 24h or more in subsequent nights). Now the
| >> first nightly run after boot was finished after 4h.
| >>
| >> What is the benefit of not disabling the cache in nullfs? I would
| >> expect zfs (or ufs) to cache the (meta)data anyway.
| >>
| >
| > does the poor performance show up with
| > https://people.freebsd.org/~mjg/vnlru_free_pick.diff ?
| 
| I would like to have all 22 jails run the periodic scripts a
| second night in a row before trying this.
| 
| > if the long runs are still there, can you get some profiling from it?
| > sysctl -a before and after would be a start.
| >
| > My guess is that you are at the vnode limit and bumping into the 1 second sleep.
| 
| That would explain the behavior I see since I added the last jail,
| which seems to have crossed a threshold that triggers the slow
| behavior.
| 
| Current status (with the 112 nullfs mounts with nocache):
| kern.maxvnodes:               10485760
| kern.numvnodes:                3791064
| kern.freevnodes:               3613694
| kern.cache.stats.heldvnodes:    151707
| kern.vnodes_created:         260288639
| 
| The maxvnodes value is already increased by 10 times compared to the  
| default value on this system.
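
For the before/after sysctl snapshots suggested above, something as
simple as this should do (the paths are only examples):

	sysctl -a > /var/tmp/sysctl.before
	# ... let the slow periodic run finish ...
	sysctl -a > /var/tmp/sysctl.after
	diff -u /var/tmp/sysctl.before /var/tmp/sysctl.after | less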

With the patch, you shouldn't mount with nocache!  However, you might
want to tune:
	vfs.zfs.arc.meta_prune
	vfs.zfs.arc.meta_adjust_restarts
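
For example (the smaller values below are purely illustrative, not
tested recommendations), you could check the defaults and then
experiment with something like:

	# current values
	sysctl vfs.zfs.arc.meta_prune vfs.zfs.arc.meta_adjust_restarts
	# try smaller values so each restart asks for fewer vnodes
	sysctl vfs.zfs.arc.meta_prune=1000
	sysctl vfs.zfs.arc.meta_adjust_restarts=10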

On each restart the code increments the prune amount by
vfs.zfs.arc.meta_prune and submits that amount to the vnode reclaim
code, so it ends up reclaiming a lot of vnodes.  With the defaults of
10000 and 4096, submitting that growing amount on each loop can cause
most of the cache to be freed.  With relatively small values for both,
the cache didn't shrink too much.
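
A back-of-the-envelope sketch of that arithmetic (my reading of the
description above, not the actual ARC code):

	# the prune amount grows by meta_prune on each restart, so after
	# the full number of restarts the request reaches prune * restarts
	prune=10000       # vfs.zfs.arc.meta_prune default
	restarts=4096     # vfs.zfs.arc.meta_adjust_restarts default
	echo "max reclaim request: $((prune * restarts)) vnodes"
	# -> 40960000, well above the kern.maxvnodes of 10485760 quoted
	# above, which matches most of the vnode cache being freed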

Doug A.