Bridging Benchmarks
Luigi Rizzo
rizzo at icir.org
Tue Sep 16 15:12:24 PDT 2003
On Tue, Sep 16, 2003 at 04:45:36PM -0500, David J Duchscher wrote:
> We have been benchmarking FreeBSD configured as a bridge and I thought
> I would share the data that we have been collecting. It's a work in
> progress, so more data will show up as we try some more Ethernet cards
> and machine configurations. Everything is 100Mbps at the moment. We
> would be very interested in any thoughts, insights or observations
> people might have.
>
> http://wolf.tamu.edu/~daved/bench-100/
Interesting results, thanks for sharing them.
I would like to add a few comments and suggestions:
* As the results with the Gbit card show, the system per se
  is able to work at wire speed at 100Mbit/s, but some cards and/or
  drivers have bugs which prevent full-speed operation.
  Among these, I ran extensive experiments on the Intel PRO/100,
  and depending on how you program the card, the maximum transmit
  speed ranges from ~100kpps (with the default driver) to ~120kpps,
  no matter how fast the CPU is -- well short of wire speed for
  minimum-size frames (see the back-of-the-envelope numbers after
  this list). I definitely blame the hardware here.
* I have had very good results with cards supported by the 'dc'
  driver (Intel 21143 chipset and various clones) -- wire speed even
  at 64-byte frames. Possibly the 'sis' chips might do the same.
  I know the 'dc' cards are hard to find these days, but I would
  definitely try one of them if possible.
  I would also love to see numbers with the 'rl' cards (RealTek 8139,
  most of the cards you find in stores), which are probably among
  the slowest ones we have.
* The "latency" curves for some of the cards are quite strange
  (making me suspect bugs in the drivers or the like).
  How do you define the 'latency', how do you measure it, and do
  you know if it is affected by changing "options HZ=..." in your
  kernel config file (the default is 100; I usually recommend 1000)?
  A sample config snippet follows after this list.
* Especially under heavy load (e.g. when using bridge_ipfw=1 and
  largish rulesets), you might want to build a kernel with
  "options DEVICE_POLLING" and do a 'sysctl kern.polling.enable=1'
  (see "man polling" for other options you should use); a sketch of
  such a setup is below.
  It would be great to have the graphs with and without polling,
  and also with/without bridge_ipfw (even with a simple one-line
  firewall config) to get an idea of the overhead.
  The use of polling should prevent the throughput dip, visible in
  some of the 'Frame loss' graphs, once the box reaches its
  throughput limit.
  Polling support is available for a number of cards including
  'dc', 'em', 'sis', 'fxp' and possibly a few others.
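
For reference on the numbers in the first point: with standard
Ethernet framing, a minimum-size frame takes 64 bytes plus an 8-byte
preamble and a 12-byte inter-frame gap, i.e. 84 bytes = 672 bits on
the wire, so at 100Mbit/s

    100,000,000 bit/s / 672 bit/frame ~= 148,800 frames/s

which is why the ~100-120kpps figures for the PRO/100 are well short
of wire speed for small frames.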
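
On the HZ question, this is all I mean (the config file name is just
an example, use whatever you build your kernels from):

    # in your kernel config file, e.g. sys/i386/conf/BRIDGE
    options         HZ=1000

then rebuild and install the kernel as usual. A larger HZ gives finer
timer granularity, which can matter for latency-type measurements and
is anyway recommended when using polling (see below).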
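
And here is a rough sketch of the polling + bridge_ipfw setup I have
in mind (kernel config name and rule number are arbitrary, and I am
quoting the sysctl names from memory, so double-check them against
bridge(4), polling(4) and ipfw(8) on your release):

    # kernel config, together with the HZ line above
    options         DEVICE_POLLING
    options         HZ=1000

    # at runtime (assumes bridging and ipfw support are already in
    # the kernel, as in your current setup)
    sysctl kern.polling.enable=1
    sysctl net.link.ether.bridge=1
    sysctl net.link.ether.bridge_ipfw=1

    # a trivial one-rule firewall, just to measure the ipfw overhead
    ipfw add 100 allow ip from any to any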
cheers
luigi