9.2 ixgbe tx queue hang
Rick Macklem
rmacklem at uoguelph.ca
Fri Mar 21 23:44:55 UTC 2014
Christopher Forgeron wrote:
>
> Hello all,
>
> I ran Jack's ixgbe MJUM9BYTES removal patch and let iometer hammer
> away at the NFS store overnight - but the problem is still there.
>
>
> From what I read, I think the MJUM9BYTES removal is probably good
> cleanup (as long as it doesn't trade performance on a lightly
> memory-loaded system for performance on a heavily memory-loaded
> system). If I can stabilize my system, I may attempt those benchmarks.
>
>
> I think the fix will be obvious at boot for me - my 9.2 has a 'clean'
> netstat. Until I can boot and see a 'netstat -m' that looks similar to
> that, I'm going to have this problem.
>
>
> Markus: Do your systems show denied mbufs at boot like mine does?
>
>
> Turning off TSO works for me, but at a performance cost.
>
> I'll compile Rick's patch (and extra debugging) this morning and let
> you know soon.
>
> On Thu, Mar 20, 2014 at 11:47 PM, Christopher Forgeron <csforgeron at gmail.com> wrote:
>
> BTW - I think this will end up being a TSO issue, not the patch that
> Jack applied.
>
> When I boot with Jack's patch (MJUM9BYTES removal), this is what
> netstat -m shows:
>
> 21489/2886/24375 mbufs in use (current/cache/total)
> 4080/626/4706/6127254 mbuf clusters in use (current/cache/total/max)
> 4080/587 mbuf+clusters out of packet secondary zone in use (current/cache)
> 16384/50/16434/3063627 4k (page size) jumbo clusters in use (current/cache/total/max)
> 0/0/0/907741 9k jumbo clusters in use (current/cache/total/max)
> 0/0/0/510604 16k jumbo clusters in use (current/cache/total/max)
> 79068K/2173K/81241K bytes allocated to network (current/cache/total)
> 18831/545/4542 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
> 15626/0/0 requests for jumbo clusters denied (4k/9k/16k)
> 0 requests for sfbufs denied
> 0 requests for sfbufs delayed
> 0 requests for I/O initiated by sendfile
>
> Here is an un-patched boot:
>
> 21550/7400/28950 mbufs in use (current/cache/total)
> 4080/3760/7840/6127254 mbuf clusters in use (current/cache/total/max)
> 4080/2769 mbuf+clusters out of packet secondary zone in use (current/cache)
> 0/42/42/3063627 4k (page size) jumbo clusters in use (current/cache/total/max)
> 16439/129/16568/907741 9k jumbo clusters in use (current/cache/total/max)
> 0/0/0/510604 16k jumbo clusters in use (current/cache/total/max)
> 161498K/10699K/172197K bytes allocated to network (current/cache/total)
> 18345/155/4099 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
> 3/3723/0 requests for jumbo clusters denied (4k/9k/16k)
> 0 requests for sfbufs denied
> 0 requests for sfbufs delayed
> 0 requests for I/O initiated by sendfile
>
> See how removing the MJUM9BYTES is just pushing the problem from the
> 9k jumbo clusters into the 4k jumbo clusters?
>
> Compare this to my FreeBSD 9.2-STABLE machine from ~Dec 2013: exact
> same hardware, revisions, zpool size, etc. It's just running an older
> FreeBSD.
>
> # uname -a
> FreeBSD SAN1.XXXXX 9.2-STABLE FreeBSD 9.2-STABLE #0: Wed Dec 25 15:12:14 AST 2013 aatech at FreeBSD-Update Server:/usr/obj/usr/src/sys/GENERIC amd64
>
> root at SAN1:/san1 # uptime
> 7:44AM up 58 days, 38 mins, 4 users, load averages: 0.42, 0.80, 0.91
>
> root at SAN1:/san1 # netstat -m
> 37930/15755/53685 mbufs in use (current/cache/total)
> 4080/10996/15076/524288 mbuf clusters in use (current/cache/total/max)
> 4080/5775 mbuf+clusters out of packet secondary zone in use (current/cache)
> 0/692/692/262144 4k (page size) jumbo clusters in use (current/cache/total/max)
> 32773/4257/37030/96000 9k jumbo clusters in use (current/cache/total/max)
> 0/0/0/508538 16k jumbo clusters in use (current/cache/total/max)
> 312599K/67011K/379611K bytes allocated to network (current/cache/total)
> 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
> 0/0/0 requests for jumbo clusters denied (4k/9k/16k)
> 0/0/0 sfbufs in use (current/peak/max)
> 0 requests for sfbufs denied
> 0 requests for sfbufs delayed
> 0 requests for I/O initiated by sendfile
> 0 calls to protocol drain routines
>
> Lastly, please note this link:
>
> http://lists.freebsd.org/pipermail/freebsd-net/2012-October/033660.html
>
Hmm, this mentions the ethernet header being in the TSO segment. I think
I already mentioned my TCP/IP is rusty and I know diddly about TSO.
However, at a glance it does appear the driver uses ether_output() for
TSO segments and, as such, I think an ethernet header is prepended to the
TSO segment. (This makes sense, since how else would the hardware know
what ethernet header to use for the TCP segments it generates?)
I think prepending the ethernet header could push the total length over
64K, given a default if_hw_tsomax == IP_MAXPACKET: 65535 bytes of TSO
segment plus a 14-byte ethernet header is 65549 bytes, which won't fit
in 32 * 2K (MCLBYTES) = 65536 bytes of clusters.
Anyhow, I think the attached patch reduces if_hw_tsomax so that the
result should fit in 32 clusters and avoid EFBIG for this case, so it
might be worth a try.
(I still can't think of why the CSUM_TSO bit isn't set for the printf()
case, but it seems TSO segments could generate EFBIG errors.)
Maybe worth a try, rick
> It's so old that I assume the TSO leak that he speaks of has been
> patched, but perhaps not. More things to look into tomorrow.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ixgbe.patch
Type: text/x-patch
Size: 473 bytes
Desc: not available
URL: <http://lists.freebsd.org/pipermail/freebsd-net/attachments/20140321/41c44241/attachment.bin>
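
For context, Rick's 473-byte ixgbe.patch attachment is not reproduced in
the archive. Below is a minimal, illustrative sketch of the kind of
if_hw_tsomax cap he describes; the helper name ixgbe_set_tsomax() and its
placement in the driver's interface setup are assumptions, not the
contents of the actual patch, and it presumes a FreeBSD 9/10-era struct
ifnet with the if_hw_tsomax member referenced in the email.

/*
 * Sketch only (hypothetical, not the attached patch): cap the TSO
 * payload so that payload plus the prepended ethernet header still fits
 * in the 32 * MCLBYTES (32 * 2048 = 65536) bytes the driver can map,
 * rather than the default IP_MAXPACKET (65535).
 */
#include <sys/param.h>          /* MCLBYTES */
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_var.h>         /* struct ifnet, if_hw_tsomax */
#include <net/ethernet.h>       /* ETHER_HDR_LEN, ETHER_VLAN_ENCAP_LEN */
#include <netinet/in.h>
#include <netinet/in_systm.h>
#include <netinet/ip.h>         /* IP_MAXPACKET */

static void
ixgbe_set_tsomax(struct ifnet *ifp)
{
	u_int cap;

	/* Leave room for an ethernet + VLAN header within 32 clusters. */
	cap = 32 * MCLBYTES - (ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN);
	if (cap > IP_MAXPACKET)
		cap = IP_MAXPACKET;
	ifp->if_hw_tsomax = cap;
}

With MCLBYTES = 2048 this yields 65536 - 18 = 65518 bytes, so even a
VLAN-tagged TSO frame stays within the 32-cluster limit discussed above.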