Latency issues with buf_ring

Andre Oppermann oppermann at networx.ch
Tue Dec 4 20:02:32 UTC 2012


On 04.12.2012 20:34, Adrian Chadd wrote:
> .. and it's important to note that buf_ring itself doesn't have the
> race condition; it's the general driver implementation that's racy.
>
> I have the same races in ath(4) with the watchdog programming. Exactly
> the same issue.

Our IF_* stack/driver boundary handoff isn't up to the task anymore.

Also the interactions are either poorly defined or poorly understood in
many places.  I've had a few chats with yongari@ and am experimenting
with a modernized interface in my branch.

The reason I stumbled across it is that I'm extending the hardware
offload feature set and found that the stack and the drivers (and the
drivers among themselves) are not really in sync with regard to behavior.

For most if not all ethernet drivers at 100 Mbit/s and above, the TX DMA
rings are so large that buffering at the IFQ level no longer makes sense
and only adds latency.  The stack could simply put everything directly
onto the TX DMA ring and not even try to soft-queue.  If the TX DMA ring
is full, ENOBUFS is returned instead of filling yet another queue.
However, there are ALTQ interactions and other mechanisms that have to
be considered too, which makes it a bit more involved.
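To make the idea concrete, here is a minimal, hedged sketch of what such
a direct-dispatch if_transmit handler might look like.  It is not code
from my branch; foo_softc, foo_tx_ring_avail() and foo_encap() are
hypothetical driver-local names standing in for whatever the real driver
uses to check descriptor space and post an mbuf on the ring.

/*
 * Sketch: direct dispatch with no soft queue.  If the TX DMA ring has
 * no free descriptors, push ENOBUFS back to the stack instead of
 * buffering the packet in yet another queue.
 */
#include <sys/param.h>
#include <sys/errno.h>
#include <sys/mbuf.h>
#include <net/if.h>
#include <net/if_var.h>

struct foo_softc;					/* driver-private state (hypothetical) */
int	foo_tx_ring_avail(struct foo_softc *);		/* free TX descriptors (hypothetical) */
int	foo_encap(struct foo_softc *, struct mbuf *);	/* map and post on TX ring (hypothetical) */

static int
foo_transmit(struct ifnet *ifp, struct mbuf *m)
{
	struct foo_softc *sc = ifp->if_softc;

	/* Interface not up and running: drop the packet. */
	if ((ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) {
		m_freem(m);
		return (ENETDOWN);
	}

	/*
	 * No IFQ/soft-queue stage: if the TX DMA ring is full, report
	 * the backpressure to the caller rather than queueing another
	 * copy of the packet and adding latency.
	 */
	if (foo_tx_ring_avail(sc) == 0) {
		m_freem(m);
		return (ENOBUFS);
	}

	/* Map the mbuf chain and hand it to the hardware TX ring. */
	return (foo_encap(sc, m));
}

Whether the handler frees the mbuf on error or leaves that to the caller
is a per-driver convention; the sketch frees it for simplicity.  The
ALTQ case mentioned above would still need a queueing path in front of
this, which is part of what makes the real interface more involved.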

I'm coming up with a draft and some benchmark results for an updated
stack/driver boundary in the next weeks before xmas.

-- 
Andre
