igb and jumbo frames
Tom Judge
tom at tomjudge.com
Fri Dec 3 19:02:06 UTC 2010
Hi,
So I have been playing around with some new hosts I have been deploying
(Dell R710s). The systems each have a single dual-port card in them:
igb0@pci0:5:0:0: class=0x020000 card=0xa04c8086 chip=0x10c98086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    class      = network
    subclass   = ethernet
    cap 01[40] = powerspec 3 supports D0 D3 current D0
    cap 05[50] = MSI supports 1 message, 64 bit, vector masks
    cap 11[70] = MSI-X supports 10 messages in map 0x1c enabled
    cap 10[a0] = PCI-Express 2 endpoint max data 256(512) link x4(x4)
igb1@pci0:5:0:1: class=0x020000 card=0xa04c8086 chip=0x10c98086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    class      = network
    subclass   = ethernet
    cap 01[40] = powerspec 3 supports D0 D3 current D0
    cap 05[50] = MSI supports 1 message, 64 bit, vector masks
    cap 11[70] = MSI-X supports 10 messages in map 0x1c enabled
    cap 10[a0] = PCI-Express 2 endpoint max data 256(512) link x4(x4)
Running 8.1, these cards panic the system at boot while initializing the
jumbo MTU, so to work around this I backported the stable/8 driver to 8.1
and booted with that kernel. So far so good.
However, when configuring the interfaces with an MTU of 8192, the system
is unable to allocate the required mbufs for the receive queue.
I believe the message was from here:
http://fxr.watson.org/fxr/source/dev/e1000/if_igb.c#L1209
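For anyone following along, here is a stripped-down sketch of what I
understand the RX setup to be doing (my own simplification, not the driver
source verbatim; names and thresholds are approximate). Once the MTU pushes
the frame size past a page-sized cluster, every descriptor in every receive
ring wants a 9k cluster, and the pre-fill loop fails with ENOBUFS as soon
as the nmbjumbo9 zone runs dry:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/mbuf.h>

/* Pick the RX cluster size from the maximum frame size. */
static int
rx_cluster_size(int max_frame_size)
{
        if (max_frame_size <= MCLBYTES)
                return (MCLBYTES);        /* standard 2k cluster */
        else if (max_frame_size <= MJUMPAGESIZE)
                return (MJUMPAGESIZE);    /* page-sized (4k) cluster */
        else
                return (MJUM9BYTES);      /* 9k jumbo cluster */
}

/* Pre-populate one receive ring; one cluster is pinned per descriptor. */
static int
fill_rx_ring(struct mbuf **ring, int ndesc, int max_frame_size)
{
        int cluster_size, i;

        cluster_size = rx_cluster_size(max_frame_size);
        for (i = 0; i < ndesc; i++) {
                ring[i] = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, cluster_size);
                if (ring[i] == NULL)
                        return (ENOBUFS); /* 9k jumbo zone exhausted */
        }
        return (0);
}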
After a little digging and experimenting with just one interface, I
discovered that the default tuning of kern.ipc.nmbjumbo9 was insufficient
to run even a single interface with jumbo frames; the TX queue alone
appeared to consume 90% of the available 9k jumbo clusters.
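To put rough numbers on it (illustrative figures only, not measured on this
box): if the driver brings up one queue per core and pre-populates each
receive ring with, say, 1024 descriptors' worth of clusters, an 8-core box
pins 8 x 1024 = 8192 9k clusters per port before any traffic flows, and the
two ports together would want 16384, which, if those guesses are in the
right ballpark, is well beyond the autotuned kern.ipc.nmbjumbo9 default.
(netstat -m shows the 9k jumbo cluster counters if anyone wants to check
their own numbers.) The obvious workaround is to raise kern.ipc.nmbjumbo9
(and kern.ipc.nmbclusters) by hand as loader tunables before the interfaces
come up.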
So my question is (well, two questions really):
1) Should igb auto-tune kern.ipc.nmbjumbo9 and kern.ipc.nmbclusters up to
suit its needs?
2) Should this be documented in igb(4)?
Tom
--
TJU13-ARIN