cvs commit: src/sys/dev/bge if_bge.c
Scott Long
scottl at samsco.org
Sat Dec 23 21:51:31 PST 2006
Robert Watson wrote:
>
> On Sat, 23 Dec 2006, John Polstra wrote:
>
>>> That said, dropping and regrabbing the driver lock in the rxeof
>>> routine of any driver is bad. It may be safe to do, but it incurs
>>> horrible performance penalties. It essentially allows the
>>> time-critical, high priority RX path to be constantly preempted by
>>> the lower priority if_start or if_ioctl paths. Even without this
>>> preemption and priority inversion, you're doing an excessive number
>>> of expensive lock ops in the fast path.
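A minimal sketch of the per-packet unlock/relock pattern being criticized here, assuming a generic driver softc; the xx_ names and xx_rx_dequeue() are hypothetical stand-ins, not code quoted from if_bge.c:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/mbuf.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_var.h>

/* Hypothetical softc; a real driver carries much more state. */
struct xx_softc {
        struct ifnet    *xx_ifp;
        struct mtx       xx_mtx;        /* the per-driver lock */
};

/* Hypothetical helper: pull the next completed packet off the RX ring. */
struct mbuf     *xx_rx_dequeue(struct xx_softc *);

static void
xx_rxeof_per_packet(struct xx_softc *sc)
{
        struct mbuf *m;

        mtx_assert(&sc->xx_mtx, MA_OWNED);
        while ((m = xx_rx_dequeue(sc)) != NULL) {
                /*
                 * Two lock operations per packet, and every unlock
                 * window lets if_start/if_ioctl preempt the RX path.
                 */
                mtx_unlock(&sc->xx_mtx);
                (*sc->xx_ifp->if_input)(sc->xx_ifp, m);
                mtx_lock(&sc->xx_mtx);
        }
}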
>>
>> We currently make this a lot worse than it needs to be by handing off
>> the received packets one at a time, unlocking and relocking for every
>> packet. It would be better if the driver's receive interrupt handler
>> would harvest all of the incoming packets and queue them locally.
>> Then, at the end, hand off the linked list of packets to the network
>> stack wholesale, unlocking and relocking only once. (Actually, the
>> list could probably be handed off at the very end of the interrupt
>> service routine, after the driver has already dropped its lock.) We
>> wouldn't even need a new primitive, if ether_input() and the other
>> if_input() functions were enhanced to deal with a possible list of
>> packets instead of just a single one.
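A sketch of the batching described here, reusing the hypothetical xx_ softc and xx_rx_dequeue() from the previous sketch: packets are collected on an m_nextpkt-linked list while the lock is held, then handed up after a single unlock. With if_input as it stands they still go up one at a time; the suggestion above is that ether_input() and the other if_input routines could accept the whole list instead.

static void
xx_rxeof_batched(struct xx_softc *sc)
{
        struct mbuf *m, *head, **tailp;

        mtx_assert(&sc->xx_mtx, MA_OWNED);

        /* Harvest everything the NIC has completed, lock still held. */
        head = NULL;
        tailp = &head;
        while ((m = xx_rx_dequeue(sc)) != NULL) {
                m->m_nextpkt = NULL;
                *tailp = m;
                tailp = &m->m_nextpkt;
        }

        /* One unlock/relock for the whole batch, not one per packet. */
        mtx_unlock(&sc->xx_mtx);
        while ((m = head) != NULL) {
                head = m->m_nextpkt;
                m->m_nextpkt = NULL;
                (*sc->xx_ifp->if_input)(sc->xx_ifp, m);
        }
        mtx_lock(&sc->xx_mtx);
}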
>
> I try this experiment every few years, and generally don't measure much
> improvement. I'll try it again with 10gbps early next year once back in
> the office again. The more interesting transition is between the link
> layer and the network layer, which is high on my list of topics to look
> into in the next few weeks. In particular, reworking the ifqueue
> handoff. The tricky bit is balancing latency, overhead, and concurrency...
>
> FYI, there are several sets of patches floating around to modify if_em
> to hand off queues of packets to the link layer, etc. They probably
> need updating, of course, since if_em has changed quite a bit in the
> last year. In my implementation, I add a new input routine that accepts
> mbuf packet queues.
>
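For reference, the interface of a queue-aware input routine like the one mentioned above might look something like the sketch below. ether_input_chain() is hypothetical, not the routine from those patches; it only shows the m_nextpkt-chain convention, and a real implementation would presumably amortize the link-layer and ifqueue work across the chain rather than just looping over the existing per-packet path.

static void
ether_input_chain(struct ifnet *ifp, struct mbuf *head)
{
        struct mbuf *m;

        while ((m = head) != NULL) {
                head = m->m_nextpkt;
                m->m_nextpkt = NULL;
                /* Existing single-packet entry point. */
                (*ifp->if_input)(ifp, m);
        }
}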
Have you tested this with more than just your simple netblast and
netperf tests? Have you measured CPU usage during your tests? With
10Gb coming, pipelined processing of RX packets is becoming an
interesting topic for a number of companies, across all OSes. I understand
your feeling about the bottleneck being higher up than at just if_input.
We'll see how this holds up.
Scott