Latency issues with buf_ring
Robert Watson
rwatson at FreeBSD.org
Thu Dec 6 09:39:48 UTC 2012
On Tue, 4 Dec 2012, Andre Oppermann wrote:
> For most if not all Ethernet drivers at 100Mbit/s and above, the TX DMA
> rings are so large that buffering at the IFQ level no longer makes sense and
> only adds latency. The stack could simply put everything directly onto the
> TX DMA ring and not even try to soft-queue. If the TX DMA ring is full,
> ENOBUFS is returned instead of filling yet another queue. However, there are
> ALTQ interactions and other mechanisms which have to be considered too,
> making it a bit more involved.
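(As a concrete sketch of what Andre describes, not a complete driver: the
foo_* names, txq layout, and descriptor accounting below are all made up.)

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/if_var.h>

    /*
     * "No soft queue" model: the mbuf goes straight onto the TX DMA ring,
     * and ENOBUFS is returned when the ring is full.
     */
    static int
    foo_transmit(struct ifnet *ifp, struct mbuf *m)
    {
            struct foo_softc *sc = ifp->if_softc;
            struct foo_txq *txq = &sc->txq[0];
            int error;

            FOO_TXQ_LOCK(txq);
            if (txq->free_descs < FOO_MAX_TX_SEGS) {
                    /* No room on the hardware ring: don't queue elsewhere. */
                    FOO_TXQ_UNLOCK(txq);
                    m_freem(m);
                    return (ENOBUFS);
            }
            /* foo_encap() DMA-maps the mbuf, writes descriptors, and frees
             * the mbuf itself on failure (hypothetical contract). */
            error = foo_encap(txq, m);
            if (error == 0)
                    foo_ring_doorbell(txq); /* tell the NIC about new work */
            FOO_TXQ_UNLOCK(txq);
            return (error);
    }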
I asserted for many years that software-side queueing would be subsumed by
increasingly large DMA descriptor rings for the majority of devices and
configurations. However, this turns out not to have happened in a number of
scenarios, and so I've revised my conclusions there. I think we will continue
to need to support transmit-side buffering, ideally in the form of a set of
"libraries" that device drivers can use to avoid code replication and
integrate queue management features fairly transparently.
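To illustrate what I mean by a transmit-side buffering "library": roughly
this soft-queue shape, built on the drbr/buf_ring wrappers, is what each
driver ends up writing for itself today (again, the foo_* names and the
descriptor accounting are made up; real drivers hash on the flowid to pick a
queue and defer to a taskqueue when the lock is contended):

    /* Move packets from the soft queue (buf_ring) onto the hardware ring. */
    static void
    foo_txq_drain(struct foo_txq *txq)
    {
            struct mbuf *m;
            int enq = 0;

            while (txq->free_descs >= FOO_MAX_TX_SEGS &&
                (m = drbr_dequeue(txq->ifp, txq->br)) != NULL) {
                    if (foo_encap(txq, m) != 0) {
                            m_freem(m);     /* mapping failure: drop */
                            break;
                    }
                    enq++;
            }
            if (enq > 0)
                    foo_ring_doorbell(txq);
    }

    static int
    foo_mq_start(struct ifnet *ifp, struct mbuf *m)
    {
            struct foo_softc *sc = ifp->if_softc;
            struct foo_txq *txq = &sc->txq[0];
            int error;

            /* Soft-queue; drbr_enqueue() frees the mbuf on failure. */
            error = drbr_enqueue(ifp, txq->br, m);
            if (error != 0)
                    return (error);

            if (FOO_TXQ_TRYLOCK(txq)) {
                    foo_txq_drain(txq);
                    FOO_TXQ_UNLOCK(txq);
            }
            /* else: the TX completion path (or a taskqueue) drains later. */
            return (0);
    }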
I'm a bit worried by the level of copy-and-paste between 10gbps device drivers
right now -- for 10/100/1000 drivers, the network stack contains the majority
of the code, and the responsibility of the device driver is to advertise
hardware features and manage interactions with rings, interrupts, etc. On the
10gbps side, we see lots of code replication, especially in queue management,
and it suggests to me (as discussed for several years in a row at BSDCan and
elsewhere) that it's time to revisit ifnet a bit, pull more code back into
the central stack and out of device drivers, etc. That doesn't necessarily
mean changing notions of ownership of event models; rather, it means
centralising code in libraries instead of scattering it all over the place.
This is something to be done with some care, of course.
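As a strawman only (none of these names exist today, and this isn't a
proposal for a specific API): if the stack owned the soft queue and its
locking, a driver might only have to supply the device-specific pieces,
something like:

    /* Driver-supplied operations for a stack-owned transmit queue. */
    struct ifsoftq_ops {
            /* Place one packet on the hardware ring; ENOBUFS if full. */
            int     (*sq_encap)(void *drv_ctx, struct mbuf *m);
            /* Notify the hardware that new descriptors are available. */
            void    (*sq_doorbell)(void *drv_ctx);
    };

    struct ifsoftq *ifsoftq_attach(struct ifnet *ifp, int nqueues,
                        const struct ifsoftq_ops *ops, void *drv_ctx);
    int             ifsoftq_transmit(struct ifnet *ifp, struct mbuf *m);
    void            ifsoftq_completions(struct ifsoftq *sq, int queue);
    void            ifsoftq_detach(struct ifsoftq *sq);

The buf_ring logic, ALTQ hooks, and other queue-management features would
then live in one place behind ifsoftq_transmit() rather than being cloned
per driver.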
Robert