em0 performance subpar

Adam Stylinski kungfujesus06 at gmail.com
Fri Apr 29 15:12:44 UTC 2011


On Thu, Apr 28, 2011 at 09:41:01PM -0700, Jack Vogel wrote:
> We rarely test 32-bit any more; the only time we would is because of a
> problem, so 99% of testing is amd64. So that's not the problem.
> 
> Running netperf without any special arguments, using the TCP_STREAM and
> TCP_MAERTS tests, what numbers are you seeing?
> 
> Jack
> 
> 
> > On Thu, Apr 28, 2011 at 5:34 PM, Adam Stylinski <kungfujesus06 at gmail.com> wrote:
> 
> > On Thu, Apr 28, 2011 at 6:47 PM, Adam Stylinski <kungfujesus06 at gmail.com> wrote:
> >
> >> On Thu, Apr 28, 2011 at 02:22:29PM -0700, Jack Vogel wrote:
> >> > My validation engineer set things up on an 8.2 REL system, testing the
> >> > equivalent of HEAD, and he reports performance is fine. This is without
> >> > any tweaks from what's checked in.
> >> >
> >> > Increasing the descriptors to 4K is way overkill and might actually
> >> > cause problems; go back to the default.
> >> >
> >> > He has a Linux test client, what are you transmitting to?
> >> >
> >> > Jack
> >> >
> >> >
> >> > On Thu, Apr 28, 2011 at 11:00 AM, Adam Stylinski <kungfujesus06 at gmail.com> wrote:
> >> >
> >> > > On Thu, Apr 28, 2011 at 09:52:14AM -0700, Jack Vogel wrote:
> >> > > > Adam,
> >> > > >
> >> > > > The TX ring for the legacy driver is small right now compared to em.
> >> > > > Try this experiment: edit if_lem.c, search for "lem_txd", and change
> >> > > > EM_DEFAULT_TXD to 1024; see what that does, then 2048.
> >> > > >
> >> > > > My real strategy with the legacy code was that it should be stable,
> >> > > > meaning not getting a lot of changes... that really hasn't worked out
> >> > > > over time. I suppose I'll have to try and give it some tweaks and let
> >> > > > you try it. The problem with this code is that it technically supports
> >> > > > a huge range of old stuff we don't test any more, so things I do might
> >> > > > cause other regressions :(
> >> > > >
> >> > > > Oh well, let me know if increasing the TX descriptors helps.
> >> > > >
> >> > > > Jack
> >> > > Jack,
> >> > >
> >> > > Is this the same thing as adjusting these values?
> >> > >
> >> > > hw.em.rxd=4096
> >> > > hw.em.txd=4096
> >> > >
> >> > > If so, I've maxed these out and it's not helping.  I'll give it a shot
> >> > > on my 8-STABLE box, as it has a kernel I can play with.
> >> > >
> >> > > Setting the MTU to 1500 gave lower throughput.
> >> > >
> >> > > --
> >> > > Adam Stylinski
> >> > > PGP Key: http://pohl.ececs.uc.edu/~adam/publickey.pub
> >> > > Blog: http://technicallyliving.blogspot.com
> >> > >
> >>
> >> I am transmitting to a Linux client (kernel 2.6.38, 9000-byte MTU,
> >> PCIe-based card).  My sysctls on the Linux client (apart from the
> >> defaults) look like so:
> >>
> >> net.ipv4.ip_forward = 0
> >> # Enables source route verification
> >> net.ipv4.conf.default.rp_filter = 1
> >> # Enable reverse path filtering
> >> net.ipv4.conf.all.rp_filter = 1
> >> net.core.rmem_max = 16777216
> >> net.core.wmem_max = 16777216
> >> net.ipv4.tcp_rmem = 4096 87380 16777216
> >> net.ipv4.tcp_wmem = 4096 87380 16777216
> >> net.core.wmem_default = 87380
> >> net.core.rmem_default = 87380
> >> net.ipv4.tcp_mem = 98304 131072 196608
> >> net.ipv4.tcp_no_metrics_save = 1
> >> net.ipv4.tcp_window_scaling = 1
> >> dev.rtc.max-user-freq = 1024
> >>
> >> The exact troublesome device (as reported by pciconf):
> >>
> >> em0@pci0:7:5:0: class=0x020000 card=0x13768086 chip=0x107c8086 rev=0x05 hdr=0x00
> >>     vendor     = 'Intel Corporation'
> >>     device     = 'Gigabit Ethernet Controller (Copper) rev 5 (82541PI)'
> >>     class      = network
> >>     subclass   = ethernet
> >>
> >> Apart from bus saturation (which I don't suspect is the problem), I'm
> >> not sure what the issue could be.  What should I try next?
> >>
> >> --
> >> Adam Stylinski
> >> PGP Key: http://pohl.ececs.uc.edu/~adam/publickey.pub
> >> Blog: http://technicallyliving.blogspot.com
> >>
> >
> > One detail that I didn't mention, which may or may not matter, is that
> > the issues I'm having are on amd64 installs.  I have the same card in an
> > x86 system, and iperf with nearly default settings managed to do around
> > 850 Mbit/s.  Did your lab tests use amd64 machines?
> >
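
Before the numbers: my reading of the if_lem.c experiment described above,
as a hedged sketch (the exact source line, paths, and kernel config name may
differ on my 8-STABLE box):

# find where the legacy driver's default descriptor counts are set
grep -n "lem_txd" /usr/src/sys/dev/e1000/if_lem.c

# change the default from EM_DEFAULT_TXD to 1024 (then 2048), e.g.:
#     static int lem_txd = EM_DEFAULT_TXD;   ->   static int lem_txd = 1024;

# rebuild and install the kernel (KERNCONF as appropriate for the box)
cd /usr/src
make buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC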

TCP_STREAM:
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.121 (192.168.0.121) port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  87380  87380    10.00     578.77  

TCP_MAERTS:

TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.121 (192.168.0.121) port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  87380  87380    10.00     836.91

Interesting, so TCP_STREAM specifically has the issue.  I suppose I could read the documentation on what the tests do, but what does this tell us?
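
For reference, the numbers above came from plain netperf runs along these
lines (a minimal sketch; 192.168.0.121 is the Linux box running netserver).
If I understand the tests right, TCP_STREAM pushes data from this host to
the remote and TCP_MAERTS (STREAM reversed) pulls it back, so it looks like
my transmit path is the slow direction:

# on the Linux receiver
netserver

# on the FreeBSD box with em0 (default 10-second runs)
netperf -H 192.168.0.121 -t TCP_STREAM
netperf -H 192.168.0.121 -t TCP_MAERTS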

-- 
Adam Stylinski
PGP Key: http://pohl.ececs.uc.edu/~adam/publickey.pub
Blog: http://technicallyliving.blogspot.com