TX Multiqueue?
Muhammad Shafiq
Muhammad.Shafiq at neterion.com
Fri Sep 28 09:26:14 PDT 2007
-----Original Message-----
From: owner-freebsd-current at freebsd.org
[mailto:owner-freebsd-current at freebsd.org] On Behalf Of Jack Vogel
Sent: Monday, September 24, 2007 10:28 AM
To: Kip Macy
Cc: Darren Reed; freebsd-net at freebsd.org; FreeBSD Current
Subject: Re: TX Multiqueue?
On 9/23/07, Kip Macy <kip.macy at gmail.com> wrote:
> On 9/23/07, Darren Reed <darrenr at freebsd.org> wrote:
> > Kip Macy wrote:
> > > My ethng branch supports multiple rx and tx queues.
> > >
> > > -Kip
> > >
> >
> > What are your plans for how we use/manage/interact with the multiple
> > rx/tx queues?
>
> The rx hardware queue is determined by the hardware. Different
> hardware allows for different policies. I just use the stock rss_hash
> of a crc32 of the 4-tuple in cxgb. I've added a field to the pkthdr;
> cxgb uses its least significant bits to determine which outbound
> queue to use. It's up to the upper layers to determine how to set
> those bits. One of the changes required to take advantage of this is
> moving the queues into the driver. I've added a new if_start function
> to ifnet to take advantage of this, and I also have a normal if_start
> function for backward compatibility.
Yes, the queues in Oplin and Zoar are also a hardware feature, not
just some software infrastructure. There are a number of different
ways that it can be configured. I don't have a settled notion of how
things should be managed, but I would rather it not be something done
under the covers in the driver; a configurable stack option seems
better to me. This needs to be done right or it will just be a hack.
I don't know who the right parties are, but it should not be a
one-person decision; those with a stake in this sort of thing should
all be involved.
[Muhammad Shafiq]
Based on our experience with other OSes, hardware TX/RX queues are
quite helpful in reducing CPU utilization on MP machines. For example,
XFRAME-I/II TX/RX queues, originally intended for I/O virtualization,
can be affiliated with specific CPUs; the resulting locality of
reference improves cache hits and reduces lock contention, if any. The
hardware details of this mechanism can be found at the following link:
http://www.neterion.com/support/xframe_developer.html
We would prefer a configurable, generic mechanism to exploit the
multiple TX/RX queues now supported by NIC vendors. If possible, the
mechanism should be extensible to fit the specific capabilities and
requirements of hardware vendors.
Cheers,
Jack
_______________________________________________
freebsd-current at freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current