[Bug 277197] NFS is much too slow at 10GBase-T
Date: Tue, 20 Feb 2024 16:34:45 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277197

Bug ID: 277197
Summary: NFS is much too slow at 10GBase-T
Product: Base System
Version: 14.0-RELEASE
Hardware: amd64
OS: Any
Status: New
Severity: Affects Only Me
Priority: ---
Component: kern
Assignee: bugs@FreeBSD.org
Reporter: h2+fbsdports@fsfe.org

I have a FreeBSD 14 server and client. Both have Intel X540 10GBase-T adapters and are connected via CAT7 through a Netgear switch with the corresponding 10GBase-T ports.

Via iperf3, I measure 1233 MiB/s (9.87 Gbit/s) throughput. Via nc, I measure 1160 MiB/s. Via NFS, I get around 190-250 MiB/s. I did not expect to get the full 1100 MiB/s with NFS, but I did hope for at least 600-800 MiB/s.

Various guides suggest tinkering with different TCP-related sysctls, but I haven't had any luck improving the performance. And since nc also manages to push more than 1 GiB/s over TCP, this doesn't seem to be the core of the problem. I have also replaced the base system's ix driver with the one from ports, but nothing changed. Again, I don't think the driver or the network stack has an issue per se; it seems to be NFS-related.

I have used default options for the mounts. This is what nfsstat shows for the NFSv3 mount:

```
nfsv3,tcp,resvport,nconnect=1,hard,cto,lockd,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=65536,wsize=65536,readdirsize=65536,readahead=1,wcommitsize=16777216,timeout=120,retrans=2
```

and for the NFSv4 mount:

```
nfsv4,minorversion=2,tcp,resvport,nconnect=1,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=65536,wsize=65536,readdirsize=65536,readahead=1,wcommitsize=16777216,timeout=120,retrans=2147483647
```

Am I missing something? Is this a bug or a configuration problem? I will try to set up a Linux NFS client to see whether the issue is client- or server-related.

Thanks for your help!

P.S.: The server has an NVMe raidz and can sustain throughput above 900 MiB/s while reading and writing hundreds of gigabytes from/to different datasets of the pool, even with encryption and compression enabled. So I don't think the disks are a limiting factor.
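P.P.S.: For reference, this is roughly how I took the throughput numbers above; the hostname, mount point and test file are placeholders, not the exact paths I used:

```
# raw TCP with iperf3 (server side, then client side)
server$ iperf3 -s
client$ iperf3 -c nfs-server

# raw TCP with nc: stream a large file to the server and time it
server$ nc -l 5001 > /dev/null
client$ dd if=/pool/testfile bs=1m | nc nfs-server 5001

# NFS read throughput: large sequential read from the mount
client$ dd if=/mnt/nfs/testfile of=/dev/null bs=1m
```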
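If it would help narrow this down, I can also re-test with explicit mount options instead of the defaults. This is the kind of mount I would try for comparison (server name and paths are again placeholders, and I am assuming the nconnect and readahead options behave as described in mount_nfs(8)):

```
# NFSv4.2 mount with more parallel TCP connections and deeper read-ahead
client# mount -t nfs -o nfsv4,minorversion=2,nconnect=4,readahead=8 nfs-server:/export /mnt/nfs

# (if I read the docs correctly, rsize/wsize above 64k would additionally
#  require raising the vfs.maxbcachebuf loader tunable before boot)
```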