svn commit: r215368 - in stable/7/sys: arm/at91 arm/xscale/ixp425 contrib/dev/oltr dev/ae dev/an dev/ar dev/arl dev/ath dev/awi dev/ce dev/cm dev/cnw dev/cp dev/cs dev/ctau dev/cx dev/cxgb dev/ed d...
Bruce Evans
brde at optusnet.com.au
Wed Nov 17 18:54:19 UTC 2010
On Wed, 17 Nov 2010, Maxim Sobolev wrote:
> On 11/16/2010 8:12 AM, Bruce Evans wrote:
>> This was quite low for yesterday's uses (starting in about 1995), but today
>> it is little missed since only yesterday's low-end hardware uses it. Most
>> of today's interfaces are 1 Gbps, and for this it is almost essential for
>> the hardware to have a ring buffer with > 50 entries, so most of today's
>> drivers ignore ifqmaxlen and set the queue length to the almost equally
>> bogus value of the ring buffer size (-1). I set it to about 10000 instead
>> in bge and em (10000 is too large, but fixes streaming under certain loads
>> when hz is small).
>
> One of those interfaces is if_rl, which is still quite popular these days and
> supports speeds of up to 1 Gbps (which I believe triggered this change).
It is the only one on the list that I used. Maybe it should be handled
specially. Just bump up its queue lengths to maybe 128 for 100 Mbps and
512 for 1 Gbps in all cases, or tune this depending on the amount of memory?
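For concreteness, here is a minimal sketch of what that sizing could look like
at the end of a driver's attach path on stable/7; the XX_TX_RING_SIZE constant,
the xx_ prefix, and the 128/512 thresholds are placeholders of mine, not code
from any actual driver:

#include <sys/param.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_var.h>

#define XX_TX_RING_SIZE	64	/* placeholder hardware TX ring size */

static void
xx_set_snd_qlen(struct ifnet *ifp, int is_gigabit)
{
	int qlen;

	/*
	 * Traditional sizing ties the software queue to the ring:
	 *	qlen = XX_TX_RING_SIZE - 1;
	 * Instead, scale it with the link speed and never go below
	 * the system-wide ifqmaxlen default.
	 */
	qlen = is_gigabit ? 512 : 128;
	if (qlen < ifqmaxlen)
		qlen = ifqmaxlen;

	IFQ_SET_MAXLEN(&ifp->if_snd, qlen);
	ifp->if_snd.ifq_drv_maxlen = qlen;
	IFQ_SET_READY(&ifp->if_snd);
}

The point is only that the software send queue is sized from the link speed
rather than from the hardware ring.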
> But in general I agree; unfortunately the FreeBSD network subsystem is tuned
> for yesterday's speeds. We are seeing a lot of lockups and other issues under
> high PPS. I wish somebody would stand up and pick up the task of cleaning it
> up and re-tuning it for 2010. We could probably even sponsor such work in
> part (anyone?).
I haven't seen any lockups, just the maximum pps on fixed hardware
decreasing with every increase in the FreeBSD version number (about 30%
since FreeBSD-5). My hardware's CPU and bus are saturated by low-end em
1 Gbps and medium-end bge 1 Gbps, so bloat in the stack translates into
lower pps. I tuned bge a lot to make it fast under the version of
FreeBSD-5 that I usually run, but barely touched upper layers.
> Apart from interface tuning for Gbps speeds, another area that needs more
> work is splitting the memory pool for IPC off from the memory pool for the
> rest of the networking. Today's software is highly distributed, and rock-solid
> IPC is a must for FreeBSD to be a solid server application platform. It is OK
> to drop some packets under load, but it is not OK when extreme network
> activity can bring down communications between an application and a database
> system within the host itself. And that's exactly what can happen in FreeBSD.
Does flow control help here? I think it should prevent most dropped packets,
but be actively harmful if it stops the flow when IPC packets are queued
behind non-IPC ones. Large queue lengths are also bad for latency.
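Back-of-the-envelope numbers for the latency point (my own arithmetic,
assuming full-size 1500-byte frames, not anything measured):

#include <stdio.h>

/*
 * Worst-case drain time of a full software send queue, assuming
 * full-size 1500-byte (12000-bit) frames.  Illustrates why a
 * 10000-entry queue is far too deep for latency even at 1 Gbps.
 */
int
main(void)
{
	const double frame_bits = 1500.0 * 8.0;
	const double qlens[] = { 50.0, 512.0, 10000.0 };
	const double rates[] = { 100e6, 1e9 };	/* bits per second */

	for (int i = 0; i < 2; i++)
		for (int j = 0; j < 3; j++)
			printf("%5.0f entries at %4.0f Mbps: %8.1f ms\n",
			    qlens[j], rates[i] / 1e6,
			    qlens[j] * frame_bits / rates[i] * 1e3);
	return (0);
}

A full 10000-entry queue is about 120 ms of queueing delay at 1 Gbps and
1.2 seconds at 100 Mbps, while 512 entries at 1 Gbps is about 6 ms.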
Bruce