tuning routing using cxgbe and T580-CR cards?
Olivier Cochard-Labbé
olivier at cochard.me
Sat Jul 12 12:17:54 UTC 2014
On Fri, Jul 11, 2014 at 8:03 PM, Bjoern A. Zeeb <
bzeeb-lists at lists.zabbadoz.net> wrote:
> On 11 Jul 2014, at 17:28 , John Jasem <jjasen at gmail.com> wrote:
>
> > c) the defaults for the cxgbe driver appear to be 8 rx queues, and N tx
> > queues, with N being the number of CPUs detected. For a system running
> > multiple cards, routing or firewalling, does this make sense, or would
> > balancing tx and rx be more ideal? And would reducing queues per card
> > based on NUMBER-CPUS and NUM-CHELSIO-PORTS make sense at all?
> > ...
> > g) Are there other settings I should be looking at, that may squeeze out
> > a few more packets?
>
> If you are primarily forwarding packets (you say "routing" multiple times)
> the first thing you should do is turn off LRO and TSO on all ports.
>
Hi Bjoern,
I was not aware that LRO and TSO should be disabled when forwarding
packets.
If I read the Wikipedia page on LRO[1] correctly, disabling LRO is not a
performance concern but only a matter of not breaking the end-to-end
principle, right?
Regarding TSO[2]: it should improve performance only between the TCP and
IP layers. But forwarded packets never cross the TCP<->IP boundary, so
disabling TSO should not impact their performance either, right?
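For the benchmark below I simply toggled both offloads with ifconfig on
the forwarding box, roughly like this (my lab uses ix(4) interfaces, so
adjust the names for your cxgbe ports):

  # disable TSO and LRO on both forwarding ports (re-enable with "tso lro")
  ifconfig ix0 -tso -lro
  ifconfig ix1 -tso -lro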
I tried to benchmark the difference in my lab:
- Hardware: quad core (Intel Xeon L5630 2.13GHz, hyper-threading disabled)
with a dual-port Intel 10-Gigabit X540-AT2
- Traffic: multiple flows (different UDP ports) of small packets (60B) at
about 10Mpps (pkt-gen -f tx -i ix0 -n 1000000000 -l 60 -d
9.3.3.1:2000-9.3.3.1:4000 -D a0:36:9f:1e:28:14 -s 8.3.3.1 -w 4)
- Results collected on the receiver side, in packets per second.
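(On the receiver I just ran pkt-gen in rx mode and read the pps rate it
prints; the exact command below is from memory, so take the interface
name as an example:)

  # receive side: count incoming packets and report the rate in pps
  pkt-gen -f rx -i ix0 -w 4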
ministat -w 74 tso.lro.enabled tso.lro.disabled
x tso.lro.enabled
+ tso.lro.disabled
+--------------------------------------------------------------------------+
| + + x+ * x+ x x|
||____________M_|_A________________|________A_M_________________________| |
+--------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x   5       1724046       1860817       1798145       1793343     61865.164
+   5       1702496       1798998       1725396     1734863.2     38178.905
No difference proven at 95.0% confidence
=> There is no measurable difference, so I can disable LRO to respect the
end-to-end principle. But why disable TSO?
Regards,
Olivier
[1] http://en.wikipedia.org/wiki/Large_receive_offload
[2] http://en.wikipedia.org/wiki/Large_segment_offload