Suggestion for hardware for ZFS fileserver

Rick Macklem rmacklem at uoguelph.ca
Thu Dec 20 23:03:02 UTC 2018


Peter Eriksson wrote:
>I can give you the specs for the servers we use here for our FreeBSD-based fileservers - which have been working really well for us serving home directories
[good stuff snipped]

>NFS (NFSv4 only, Kerberos/GSS authentication)
>  More or less the only thing we’ve tuned for NFS so far is:
>     nfsuserd_flags="-manage-gids -domain OURDOMAIN -usertimeout 10 -usermax 100000 16"
>  As more clients start using NFS I assume we will have to adjust other stuff too... Suggestions are welcome :-)
I am not the best person to suggest values for these tunables because I have never
run an NFS server under heavy load, but at least I can mention possible values.
(I'll assume a 64bit arch with more than a few Gbytes of RAM that can be dedicated
 to serving NFS.)
For NFSv3 and NFSv4.0 clients:
- The DRC (the duplicate request cache, which improves correctness, not performance) is
  enabled for TCP. (Some NFS server vendors only use the DRC for UDP.) This can add
  significant CPU overhead and RPC RTT delay. You have two alternatives (a sketch of
  the corresponding loader.conf lines follows this list):
  1 - Set vfs.nfsd.cachetcp=0 to disable use of the DRC for TCP.
  2 - Increase vfs.nfsd.tcphighwater to something like 100000.
       You can also decrease vfs.nfsd.tcpcachetimeo, but that reduces the
       effectiveness of the DRC for TCP, since the timeout needs to be larger
       than the longest time a client is likely to take to reconnect its TCP
       connection and retry outstanding RPCs after a server crash or network partition.
  For NFSv4.1, you don't need to do the above, because it uses something called
  sessions instead of the DRC. For NFSv4.1 clients you will, however, want to
  increase vfs.nfsd.sessionhashsize to something like 1000.
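  As a rough sketch (the numbers are just the ballpark values above, nothing I have
  benchmarked), the /boot/loader.conf entries for the DRC alternatives and the
  NFSv4.1 session hash might look like:

    # Alternative 1: disable the DRC for TCP entirely
    #vfs.nfsd.cachetcp=0
    # Alternative 2: keep the DRC for TCP, but give it more headroom
    vfs.nfsd.tcphighwater=100000
    # Only needed for NFSv4.1 clients (sessions are used instead of the DRC)
    vfs.nfsd.sessionhashsize=1000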

For NFSv4.0 and NFSv4.1 clients, you will want to increase the state-related tunables
to something like:
vfs.nfsd.fhhashsize=10000
vfs.nfsd.statehashsize=100
vfs.nfsd.clienthashsize=1000 (or 1/10th of the number of client mounts up to
   something like 10000)

As you can see, it depends upon which NFS version your clients are using.
("nfsstat -m" should tell you that on both FreeBSD and Linux clients.)

If your exported file systems are UFS, you might consider increasing your buffer
cache size, but not for ZFS exports.
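For UFS exports, the knob I would look at first is kern.nbuf in /boot/loader.conf
(treat both the choice of tunable and the value below as a guess to experiment with,
not a tested recommendation):

  # UFS exports only: increase the number of buffer cache buffers
  # (placeholder value; the default is already autotuned from the amount of RAM)
  kern.nbuf=200000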

Most/all of these need to be set in your /boot/loader.conf, since they need
to be statically configured. vfs.nfsd.cachetcp can be cleared at any time, I think?
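For example, I believe that one can be flipped on a running server with sysctl, while
the rest belong in loader.conf:

  # At runtime (assuming it really is writable after boot):
  sysctl vfs.nfsd.cachetcp=0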

For your case of mostly non-NFS usage, it is hard to say if/when you want to do
the above, but these changes probably won't hurt when you have 256Gbytes
of RAM.

Good luck with it, rick
[more good stuff snipped]

