Network loss

Rick Macklem rmacklem at uoguelph.ca
Thu Feb 27 23:13:04 UTC 2014


Markus Gebert wrote:
> 
> On 27.02.2014, at 02:00, Rick Macklem <rmacklem at uoguelph.ca> wrote:
> 
> > John Baldwin wrote:
> >> On Tuesday, February 25, 2014 2:19:01 am Johan Kooijman wrote:
> >>> Hi all,
> >>> 
> >>> I have a weird situation here where I can't get my head around.
> >>> 
> >>> One FreeBSD 9.2-STABLE ZFS/NFS box, multiple Linux clients. Once in
> >>> a while the Linux clients lose their NFS connection:
> >>> 
> >>> Feb 25 06:24:09 hv3 kernel: nfs: server 10.0.24.1 not responding,
> >>> timed out
> >>> 
> >>> Not all boxes, just one out of the cluster. The weird part is that
> >>> when I try to ping a Linux client from the FreeBSD box, I have
> >>> between 10 and 30% packet loss - all day long, no specific
> >>> timeframe. If I ping the Linux clients - no loss. If I ping back
> >>> from the Linux clients to the FBSD box - no loss.
> >>> 
> >>> The error I get when pinging a Linux client is this one:
> >>> ping: sendto: File too large
> 
> We were facing similar problems when upgrading to 9.2 and have stayed
> with 9.1 on affected systems for now. We’ve seen this on HP G8
> blades with 82599EB controllers:
> 
> ix0 at pci0:4:0:0:	class=0x020000 card=0x18d0103c chip=0x10f88086
> rev=0x01 hdr=0x00
>     vendor     = 'Intel Corporation'
>     device     = '82599EB 10 Gigabit Dual Port Backplane Connection'
>     class      = network
>     subclass   = ethernet
> 
> We didn’t find a way to trigger the problem reliably. But when it
> occurs, it usually affects only one interface. Symptoms include:
> 
> - socket functions return the 'File too large' error mentioned by
> Johan
> - socket functions return 'No buffer space available'
> - heavy to full packet loss on the affected interface
> - “stuck” TCP connections, i.e. ESTABLISHED TCP connections that
> should have timed out stick around forever (the socket on the other
> side could have been closed hours ago)
> - userland programs using the corresponding sockets usually get stuck
> too (can’t find kernel traces right now, but always in network-related
> syscalls)
> 
> Network is only lightly loaded on the affected systems (usually 5-20
> Mbit/s, capped at 200 Mbit/s, per server), and netstat never showed
> any indication of resource shortage (like mbufs).
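> (The usual place to look for that on FreeBSD would be "netstat -m",
> in particular the "requests for mbufs denied/delayed" counters.)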
> 
> What made the problem go away temporarily was to ifconfig down/up
> the affected interface.
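> (I.e. roughly "ifconfig ix0 down; ifconfig ix0 up" - the interface
> name here is just an example.)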
> 
> We tested a 9.2 kernel with the 9.1 ixgbe driver, which was not
> really stable. Also, we tested a few revisions between 9.1 and 9.2
> to find out when the problem started. Unfortunately, the ixgbe
> driver turned out to be mostly unstable on our systems between these
> releases, worse than on 9.2. The instability was introduced shortly
> after 9.1 and fixed only very shortly before the 9.2 release. So no
> luck there. We ended up using 9.1 with backports of 9.2 features we
> really need.
> 
> What we can’t tell is whether it’s the 9.2 kernel or the 9.2 ixgbe
> driver or a combination of both that causes these problems.
> Unfortunately we ran out of time (and ideas).
> 
> 
> >> EFBIG is sometimes used by drivers when a packet takes too many
> >> scatter/gather entries.  Since you mentioned NFS, one thing you can
> >> try is to disable TSO on the interface you are using for NFS to see
> >> if that "fixes" it.
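> >>
> >> (On 9.x that would be something like "ifconfig ix0 -tso" to turn
> >> TSO off, and "ifconfig ix0 tso" to turn it back on - adjust the
> >> interface name to whatever carries the NFS traffic.)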
> >> 
> > And if you try it, please email and let us know whether it helps.
> > 
> > I think I've figured out how 64K NFS read replies can do this,
> > but I'll admit "ping" is a mystery? (Doesn't it just send a single
> > packet that would be in a single mbuf?)
> > 
> > I think the EFBIG is returned by bus_dmamap_load_mbuf_sg(), but I
> > don't know if it can happen for an mbuf chain with < 32 entries?
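> >
> > (For illustration only - this is not the actual ixgbe code, just the
> > usual shape of a driver's transmit mapping path, with made-up names:
> >
> > static int
> > example_load_xmit_mbuf(bus_dma_tag_t tag, bus_dmamap_t map,
> >     struct mbuf **mp)
> > {
> >         bus_dma_segment_t segs[32];     /* the driver's S/G limit */
> >         int nsegs, error;
> >
> >         error = bus_dmamap_load_mbuf_sg(tag, map, *mp, segs, &nsegs,
> >             BUS_DMA_NOWAIT);
> >         if (error == EFBIG) {
> >                 /* Too many segments; compact the chain, retry once. */
> >                 struct mbuf *m_new = m_defrag(*mp, M_NOWAIT);
> >                 if (m_new == NULL)
> >                         return (ENOBUFS);       /* caller frees *mp */
> >                 *mp = m_new;
> >                 error = bus_dmamap_load_mbuf_sg(tag, map, *mp, segs,
> >                     &nsegs, BUS_DMA_NOWAIT);
> >         }
> >         return (error);
> > }
> >
> > If the chain still needs too many segments after m_defrag(), that
> > EFBIG is what eventually shows up at the socket as "File too large".)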
> 
> We don’t use the nfs server on our systems, but they’re
> (new)nfs clients. So I don’t think our problem is NFS-related, unless
> the default rsize/wsize for client mounts is not 8K, which I thought
> it was. Can you confirm this, Rick?
> 
Well, if you don't specify any mount options, it will be
min(64K, what-the-server-specifies).

"nfsstat -m" should show you what it actually is using, for 9.2 or
later.

8K would be used if you specified "udp".

For the client, it would be write requests that could be 64K.
You could try "wsize=32768,rsize=32768" (it is actually the
wsize that matters for this case, but you might as well set
rsize at the same time). With these options specified, you
know what the maximum value is (it will still be reduced for
udp or if the server wants it smaller).
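
For example (the server address and export path below are just
placeholders), a client mount along the lines of:

  mount -t nfs -o rsize=32768,wsize=32768 10.0.24.1:/export /mnt

caps the transfer size at 32K, and on a FreeBSD 9.2 or later client
"nfsstat -m" will show the rsize/wsize actually negotiated with the
server.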

rick

> IIRC, disabling TSO did not make any difference in our case.
> 
> 
> Markus
> 

