Gigabit ethernet questions?
David Christensen
davidch at broadcom.com
Fri Aug 11 18:04:36 UTC 2006
> Greetings, colleagues. I've got two DL-360 (PCI-X bus) servers
> with BCM5704 NetXtreme dual gigabit adapters (bge), running
> FreeBSD 6.1-RELEASE-p3. The bge interfaces of the two servers are
> connected to each other with a Cat 6 patch cord.
> Here are my settings:
> kernel config:
> options DEVICE_POLLING
> options HZ=1000
>
> sysctl.conf:
> kern.polling.enable=1
> net.inet.ip.intr_queue_maxlen=5000
> kern.ipc.maxsockbuf=8388608
> net.inet.tcp.sendspace=3217968
> net.inet.tcp.recvspace=3217968
> net.inet.tcp.rfc1323=1
>
> bge1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 9000
> options=5b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,POLLING>
> inet 192.168.0.1 netmask 0xffffff00 broadcast 192.168.0.255
> ether 00:17:a4:3a:e1:81
> media: Ethernet autoselect (1000baseTX <full-duplex>)
> status: active
> (note mtu 9000)
>
> and here are the test results:
>
> netperf:
>
> TCP STREAM TEST to 192.168.0.1
> Recv Send Send
> Socket Socket Message Elapsed
> Size Size Size Time Throughput
> bytes bytes bytes secs. 10^6bits/sec
>
> 6217968 6217968 6217968 10.22 320.04
>
> UDP UNIDIRECTIONAL SEND TEST to 192.168.0.1
> Socket Message Elapsed Messages
> Size Size Time Okay Errors Throughput
> bytes bytes secs # # 10^6bits/sec
>
> 9216 9216 10.00 118851 1724281 876.20
> 41600          10.00           0              0.00
>
>
>
> iperf:
> gate2# iperf -s -N
> ------------------------------------------------------------
> Server listening on TCP port 5001
> TCP window size: 3.07 MByte (default)
> ------------------------------------------------------------
> [ 4] local 192.168.0.2 port 5001 connected with 192.168.0.1
> port 52597
> [ 4] 0.0-10.1 sec 384 MBytes 319 Mbits/sec
>
> I should also add that I've managed to reach about 500 Mbit/s
> by tuning the TCP window with iperf's -w flag.
>
> How can we explain such low TCP performance? What else should
> be tuned? Has anybody achieved gigabit speeds with TCP on
> FreeBSD?
Your test is non-optimal for the 5704 since the ports are linked
together back-to-back. In a dual port configuration such as the 5704
each port must arbitrate for access to the PCI bus. Due to an erratum
in the 5704, the BGE_PCIDMAWCTL_ONEDMA_ATONCE bit is set, which allows
only one port access to the PCI bus at a time for the duration of the
DMA transaction, rather than allowing the two ports to interleave DMAs.
Your test configuration is a worst case scenario since both ports are
active in both directions at the same time. If you can change your
test to use a second system you should see the TCP performance rise
substantially.
Dave
More information about the freebsd-net mailing list