8-STABLE and swap

Jeremy Chadwick freebsd at jdc.parodius.com
Wed May 11 12:16:39 UTC 2011


On Wed, May 11, 2011 at 12:58:50PM +0200, Robert Schulze wrote:
> We are running 8-STABLE (csupped at 20110504) on an NFS fileserver.
> It has 32 GB RAM and uses ZFS:
> 
> home        ONLINE       0     0     0
> 	  raidz2    ONLINE       0     0     0
> 	    da1     ONLINE       0     0     0
> 	    da2     ONLINE       0     0     0
> 	    da3     ONLINE       0     0     0
> 	    da4     ONLINE       0     0     0
> 	    da5     ONLINE       0     0     0
> 	  raidz2    ONLINE       0     0     0
> 	    da6     ONLINE       0     0     0
> 	    da7     ONLINE       0     0     0
> 	    da8     ONLINE       0     0     0
> 	    da9     ONLINE       0     0     0
> 	    da10    ONLINE       0     0     0
> 	logs
> 	  mirror    ONLINE       0     0     0
> 	    da12    ONLINE       0     0     0
> 	    da13    ONLINE       0     0     0
> 	cache
> 	  ad4       ONLINE       0     0     0
> 	  ad8       ONLINE       0     0     0
> 
> Before upgrading from 8.0, the machine never used all of its memory;
> it left about 10 GB free even after roughly 100 days of uptime.  Now
> it uses RAM aggressively (wired sits between 29 GB and 30 GB), which
> is fine in itself, but after about 3 days of uptime we already have
> 106 MB swapped out.  Both L2ARC SSDs are ~74 GB in size; arc_summary
> prints the following values:
> 
> ARC Size:
> 	Current Size:			76.21%	23440.22M (arcsize)
> 	Target Size: (Adaptive)		76.52%	23535.40M (c)
> 	Min Size (Hard Limit):		12.50%	3844.77M (c_min)
> 	Max Size (High Water):		~8:1	30758.16M (c_max)
> 
> L2 ARC Size:
> 	Current Size: (Adaptive)		88466.19M
> 	Header Size:			0.29%	259.21M
> 
> 
> The following sysctls were set:
> 
> security.bsd.see_other_uids=0
> kern.maxvnodes=400000
> kern.ipc.somaxconn=8192
> kern.ipc.maxsockbuf=1024000
> net.inet.udp.maxdgram=57344
> vfs.ufs.dirhash_maxmem=25165824
> 
> My question now: why does the machine swap at all; is this normal
> behaviour?  And why is wired at about 30 GB when the ARC is ~23 GB
> and the L2ARC header is only 259 MB?

I'm not really all that familiar with L2ARC at this point (conceptually
yes, real-world use no), but the delta (23GB of ARC vs. 30GB wired) is
probably explainable.  The most common reason, as I understand it, is
that memory becomes fragmented in such a way that pages end up laid out
non-optimally, so some of the wired memory is effectively wasted.

I tend not to use the "arc_summary" script and instead look at the raw
sysctl data from "sysctl kstat.zfs.misc.arcstats".  I guess I'm just
used to looking at it that way by now.
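
For example (the byte values below are made up, just to show the idea,
and exact field names can vary slightly between ZFS versions):

  # sysctl kstat.zfs.misc.arcstats.size \
      kstat.zfs.misc.arcstats.hdr_size \
      kstat.zfs.misc.arcstats.l2_hdr_size
  kstat.zfs.misc.arcstats.size: 24578453504
  kstat.zfs.misc.arcstats.hdr_size: 414392320
  kstat.zfs.misc.arcstats.l2_hdr_size: 271798272

arcstats.size should match what arc_summary reports as the current ARC
size, and the two header fields show what the ARC and L2ARC bookkeeping
itself costs.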

Anyway, for example, on my 8GB machine with vfs.zfs.arc_max="6144M" set
in /boot/loader.conf, running 8.2-STABLE dated May 6th, "Wired" has
occasionally reached 6.8GBytes.  How much of that was ZFS?  About
6.4GBytes; the remaining 0.4GBytes belonged to mysqld.

What you're looking for is something lower-level that gives a complete
kernel-level breakdown of memory, showing exactly where everything is
going.  I believe "vmstat -z" provides that.
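
For example, a rough way to rank UMA zones by how much memory they
currently hold (this assumes the colon/comma-separated layout that
8.x's "vmstat -z" prints; adjust the awk if yours differs):

  # vmstat -z | sed -e 1,2d | \
      awk -F'[:,]' '{ printf "%12.0f  %s\n", $2 * $4, $1 }' | \
      sort -rn | head

That multiplies each zone's SIZE by its USED count, so the biggest
kernel consumers float to the top of the list.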

So basically what I'm trying to say is that you're running top, looking
at Wired, and concluding "all of this is ZFS", when Wired is by no
means exclusively ZFS memory.
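
If you want to do the arithmetic yourself rather than eyeball top,
Wired can also be computed from the VM counters and then compared
against the ARC size (numbers below are illustrative):

  # sysctl vm.stats.vm.v_wire_count vm.stats.vm.v_page_size
  vm.stats.vm.v_wire_count: 7864320
  vm.stats.vm.v_page_size: 4096

wire_count * page_size is Wired in bytes (about 30GB in this made-up
case); subtract kstat.zfs.misc.arcstats.size from that and whatever is
left is wired memory that is not ARC data (other kernel allocations,
the L2ARC headers, and so on).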

If you're worried about swap usage, try limiting the ARC size further
via /boot/loader.conf.  You do not need to adjust vm.kmem_size or
vm.kmem_size_max if the machine is running 8.2-RELEASE or newer.  I
only mention this because most of the online docs you'll find suggest
tuning one or both of those tunables; that advice only applies to older
FreeBSD releases.
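
For example, a single line in /boot/loader.conf is enough (20480M is
only an illustrative figure; pick whatever leaves enough headroom for
everything else on the box, and remember loader.conf tunables only take
effect after a reboot):

  # /boot/loader.conf
  vfs.zfs.arc_max="20480M"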

-- 
| Jeremy Chadwick                                   jdc at parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.               PGP 4BD6C0CB |


