two NICs on a 2-core system (scheduling problem)

Alexander Motin mav at FreeBSD.org
Wed Oct 29 03:36:47 PDT 2008


Bartosz Giza wrote:
> On Tuesday, 28 October 2008 at 19:10:43, Alexander Motin wrote:
>> Bartosz Giza wrote:
>>>> The CPU time you see there includes much more than just the card
>>>> handling itself. It also includes the CPU time of most parts of the
>>>> network stack used to process a received packet. So if you have NAT,
>>>> a big firewall, netgraph or any other CPU-hungry actions applied to
>>>> packets incoming via em0, you will see such results.
>>>> Even more interesting, if the bge0 or fxp0 cards require much CPU time
>>>> to send a packet, that time will also be accounted to the em0 process.
> 
> I have checked this and you are right. When I turned off ipfw, the taskq
> process started to use less CPU. But it is still strange: why is processing
> from other cards counted in the em0 taskq?

What do you mean by "processing from other cards"? The em0 taskq counts
all processing caused by packets incoming via em0, up to and including
the processing of their transmission by the bge/fxp drivers. The same
applies to bge/fxp. If the bge/fxp/em drivers had separate transmission
processes, you would see them, but they don't, so their CPU time is
accounted to the caller.
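
If it helps to picture it, here is a minimal userland analogy -- not the
kernel code, and the thread names and workload are made up: CPU time is
charged to the thread that actually executes the work, not to the one
that originated it. The "receiver" thread only queues buffers, the
"worker" thread does the expensive processing, and all of that time
shows up on the worker -- just like firewall and transmit work for
packets received via em0 shows up on the em0 taskq. It assumes POSIX
threads and CLOCK_THREAD_CPUTIME_ID; build with something like
"cc demo.c -o demo -lpthread".

/*
 * Userland analogy only: the thread that runs the code gets the CPU time.
 * The receiver queues "packets"; the worker burns CPU on each of them.
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NPACKETS 100

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
static int pending;                /* "packets" queued by the receiver */
static volatile unsigned sink;     /* keeps the busy loop alive */

static double
thread_cpu_seconds(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
    return (ts.tv_sec + ts.tv_nsec / 1e9);
}

static void *
worker(void *arg)
{
    (void)arg;
    for (int handled = 0; handled < NPACKETS; handled++) {
        pthread_mutex_lock(&lock);
        while (pending == 0)
            pthread_cond_wait(&cv, &lock);
        pending--;
        pthread_mutex_unlock(&lock);

        /* Stand-in for firewall/NAT/routing/transmit work. */
        for (unsigned i = 0; i < 2000000; i++)
            sink += i;
    }
    printf("worker CPU:   %.3f s\n", thread_cpu_seconds());
    return (NULL);
}

int
main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, worker, NULL);
    for (int i = 0; i < NPACKETS; i++) {
        /* The receiver only queues work, so it stays cheap. */
        pthread_mutex_lock(&lock);
        pending++;
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&lock);
    }
    pthread_join(tid, NULL);
    printf("receiver CPU: %.3f s\n", thread_cpu_seconds());
    return (0);
}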

> This is quite strange, and in that way the em0 taskq process is using more
> CPU on one of the cores. So I think the best would be to have only em NICs,
> because processing of the packets would be split across those taskq
> processes. Is that right?

em0 processes packets in a separate process named taskq, while bge does
it directly in its interrupt handler process. There is no fundamental
difference for you, I think.

> OK, good to know. But how is the firewall overhead counted when I have
> only bge cards? They don't use a taskq, so I assume I would see this as
> system usage, correct?

You would see a lot of interrupt time in this case.
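
For example, you can read the system-wide time split
(user/nice/system/interrupt/idle) straight from the kern.cp_time
sysctl -- the same counters top(1) uses. A rough sketch (it prints
cumulative shares since boot; sample it twice and diff the ticks if you
want a current rate):

#include <sys/types.h>
#include <sys/resource.h>   /* CP_USER, CP_SYS, CP_INTR, CPUSTATES */
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
    long cp_time[CPUSTATES];
    size_t len = sizeof(cp_time);
    long total = 0;

    /* Cumulative per-class ticks since boot
       (kern.cp_times, if present, has them per-CPU). */
    if (sysctlbyname("kern.cp_time", cp_time, &len, NULL, 0) == -1) {
        perror("sysctlbyname(kern.cp_time)");
        return (1);
    }
    for (int i = 0; i < CPUSTATES; i++)
        total += cp_time[i];
    printf("user %.1f%%  system %.1f%%  interrupt %.1f%%  idle %.1f%%\n",
        100.0 * cp_time[CP_USER] / total,
        100.0 * cp_time[CP_SYS]  / total,
        100.0 * cp_time[CP_INTR] / total,
        100.0 * cp_time[CP_IDLE] / total);
    return (0);
}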

-- 
Alexander Motin

