How to find the source of low performance?
Pyun YongHyeon
pyunyh at gmail.com
Fri Oct 29 18:19:25 UTC 2010
On Fri, Oct 29, 2010 at 10:20:10AM +0300, the original poster wrote:
> Hi, Freebsd-net.
>
> serv1# ifconfig nfe0
> nfe0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
> options=10b<RXCSUM,TXCSUM,VLAN_MTU,TSO4>
> ether 00:13:d4:ce:82:16
> inet 10.11.8.17 netmask 0xfffffc00 broadcast 10.11.11.255
> inet 10.11.8.15 netmask 0xfffffc00 broadcast 10.11.11.255
> media: Ethernet autoselect (1000baseTX <full-duplex>)
> status: active
> serv1# ifconfig igb0
> igb0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
> options=19b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4>
> ether 00:1b:21:45:da:b8
> media: Ethernet autoselect (1000baseTX <full-duplex>)
> status: active
> serv1# ifconfig vlan7
> vlan7: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
> options=3<RXCSUM,TXCSUM>
> ether 00:1b:21:45:da:b8
> inet 10.11.15.15 netmask 0xffffff00 broadcast 10.11.15.255
> inet 10.11.7.1 netmask 0xffffff00 broadcast 10.11.7.255
> media: Ethernet autoselect (1000baseTX <full-duplex>)
> status: active
> vlan: 7 parent interface: igb0
>
> Doing a bandwidth test with iperf shows low performance on nfe0.
>
> # iperf -c 10.11.8.17
> ------------------------------------------------------------
> Client connecting to 10.11.8.17, TCP port 5001
> TCP window size: 32.5 KByte (default)
> ------------------------------------------------------------
> [ 3] local 10.11.8.16 port 63911 connected with 10.11.8.17 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.5 sec 124 MBytes 98.8 Mbits/sec
> # iperf -c 10.11.7.1
> ------------------------------------------------------------
> Client connecting to 10.11.7.1, TCP port 5001
> TCP window size: 32.5 KByte (default)
> ------------------------------------------------------------
> [ 3] local 10.11.7.2 port 61422 connected with 10.11.7.1 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.3 sec 800 MBytes 653 Mbits/sec
>
> Even though it is an integrated NIC, I would expect about
> 300-400 Mbit/s of throughput. Is nfe0 really such a poor NIC?
nfe(4) controllers are not among the best controllers targeted at
server environments, but they are generally not bad for desktop use.
I mean, you should be able to saturate the link with bulk TCP/UDP
transfers.
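For example, a plain bulk-transfer run between your two hosts could
look like the following (serv2 stands in for your sending host, and
the -w/-t/-b values are only illustrative; note that the 32.5 KByte
default window shown in your output can by itself cap TCP
throughput):

serv1# iperf -s
serv2# iperf -c 10.11.8.17 -w 256k -t 30

and for a UDP run at a fixed offered rate:

serv1# iperf -s -u
serv2# iperf -c 10.11.8.17 -u -b 900m -t 30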
The last time I tried iperf it was not reliable. Did you disable
iperf's threading? Also note that the sender and receiver copies of
iperf should be built with the same configuration options.
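If the iperf numbers look suspect, the stock tools can show where
the time goes while a transfer is running; for instance:

serv1# netstat -w 1 -I nfe0    (per-second packet/error/drop counters)
serv1# vmstat -i               (interrupt rates for nfe0/igb0)
serv1# top -SH                 (CPU use of kernel/interrupt threads)

And since your nfe0 has TSO4 and checksum offloading enabled,
temporarily turning those off can help rule out an offloading bug:

serv1# ifconfig nfe0 -tso -txcsum -rxcsum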