netisr process eats 100% cpu
Dmitry Sivachenko
trtrmitya at gmail.com
Sat Sep 12 08:47:53 UTC 2015
> On Sep 11, 2015, at 20:19, hiren panchasara <hiren at strugglingcoder.info> wrote:
>
> On 09/11/15 at 12:46P, Dmitry Sivachenko wrote:
>>
>>> hiren panchasara <hiren at strugglingcoder.info> wrote:
>>>
>>> Unsure at the moment if loopback is causing the trouble for you or not.
>>> See:
>>
>> (please keep me CC'ed, I am not subscribed to -net)
>>
>>
>>>
>>> https://lists.freebsd.org/pipermail/freebsd-net/2015-February/041239.html
>>>
>>>
>>
>> Yes, this thread looks similar.
>>
>>
>>> You may want to try:
>>> 1) pmcstat and see if you can catch something
>>
>> What in particular should I look for? Here are the first lines of pmcstat -T -S instructions -w 1:
>> PMC: [INSTR_RETIRED_ANY] Samples: 157198 (100.0%) , 0 unresolved
>>
>> %SAMP IMAGE FUNCTION CALLERS
>> 13.2 kernel cpu_search_highest cpu_search_highest:12.0 sched_idletd:1.2
>> 8.3 kernel ipfw_chk ipfw_check_packet
>> 3.1 myprogram memsetAVX _ZN12TLz4Compress7DoWriteEPKv
>> 2.3 kernel tcp_output tcp_usr_send:1.0 tcp_do_segment:0.9
>>
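For context, a way to see which call paths feed the hot functions above is pmcstat's callgraph mode rather than top mode; a minimal sketch (event name, duration, and file paths are illustrative):

```
# Sample instructions retired for 10 seconds into a log file,
# then post-process the log into a callers/callees graph.
pmcstat -S INSTR_RETIRED_ANY -O /tmp/sample.pmc sleep 10
pmcstat -R /tmp/sample.pmc -G /tmp/callgraph.txt
```

The callgraph output makes it easier to tell whether cpu_search_highest is being driven by the scheduler idle loop or by netisr wakeups.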
>>
>>> 2) disable checksum on localhost
>>
>>
>> I tried that, but nothing changed.
>>
>>
>>> 3) look at netisr settings. sysctl net.isr o/p and how it looks under
>>> netstat -Q. I am not sure if adding more threads to netisr via
>>
>>
>> What should I look for?
>>
>>
>>> net.isr.numthreads would help. (Note it's a loader.conf variable)
>>
>>
>> This netisr load looks parasitic to me (as I noted, moving haproxy to a separate machine does not burn CPU cycles on netisr; why is localhost special?)
>>
>> Even if adding more threads to netisr boosted network utilization, wouldn't the CPU cycles spent on netisr just be wasted energy? I have other tasks for these CPUs.
>>
>
> I am not sure what keeps the CPU busy with netisr when localhost is involved.
>
> You may want to post o/p of
> # sysctl net.isr
net.isr.numthreads: 1
net.isr.maxprot: 16
net.isr.defaultqlimit: 256
net.isr.maxqlimit: 10240
net.isr.bindthreads: 0
net.isr.maxthreads: 1
net.isr.dispatch: direct
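With maxthreads capped at 1, numthreads cannot be raised at runtime; the netisr thread count is fixed at boot. If you wanted to experiment, the relevant /boot/loader.conf knobs would look like this (values purely illustrative):

```
# /boot/loader.conf -- netisr thread count is set at boot
net.isr.maxthreads=4     # upper bound on netisr worker threads
net.isr.numthreads=4     # threads actually started
net.isr.bindthreads=1    # optionally pin each thread to a CPU
```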
> # netstat -Q
Configuration:
Setting Current Limit
Thread count 1 1
Default queue limit 256 10240
Dispatch policy direct n/a
Threads bound to CPUs disabled n/a
Protocols:
Name Proto QLimit Policy Dispatch Flags
ip 1 4096 flow default ---
igmp 2 256 source default ---
rtsock 3 256 source default ---
arp 7 256 source default ---
ether 9 256 source direct ---
ip6 10 256 flow default ---
Workstreams:
WSID CPU Name Len WMark Disp'd HDisp'd QDrops Queued Handled
0 0 ip 8 165 9862463 0 0 126594714 136424969
0 0 igmp 0 0 0 0 0 0 0
0 0 rtsock 0 1 0 0 0 10 10
0 0 arp 0 0 15640 0 0 0 15640
0 0 ether 0 0 9878107 0 0 0 9878107
0 0 ip6 0 2 4 0 0 12 16
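Note the ip workstream above: with the dispatch policy set to direct, most protocols show only Disp'd counts, yet ip has ~126M packets in Queued — loopback traffic is still deferred to the single netisr thread, which is consistent with that thread staying busy. The policy can be inspected and flipped at runtime to compare behavior (a sketch, not a recommendation):

```
# Inspect and change the netisr dispatch policy at runtime
sysctl net.isr.dispatch            # current policy: direct
sysctl net.isr.dispatch=deferred   # queue all packets to netisr threads
sysctl net.isr.dispatch=hybrid     # direct where safe, queue otherwise
```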
> # sysctl net.inet | grep queue
net.inet.ip.intr_queue_maxlen: 4096
net.inet.ip.intr_queue_drops: 0
net.inet.ip.dummynet.queue_count: 0
>
> A suggestion I see at https://calomel.org/freebsd_network_tuning.html is
> to increase localhost n/w buffers. Not sure if this'll help your case.
> net.local.stream.sendspace=164240 # (default 8192)
> net.local.stream.recvspace=164240 # (default 8192)
I already had 65536 there, and increased it to 164240, but nothing changed.
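For reference, to make such a change persist across reboots the values go in /etc/sysctl.conf (a sketch using the values from the suggestion above):

```
# /etc/sysctl.conf -- larger buffers for local (UNIX-domain) stream sockets
net.local.stream.sendspace=164240   # default 8192
net.local.stream.recvspace=164240   # default 8192
```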
>
> Now I'll let someone else with more ideas/clues comment.
>
More information about the freebsd-net mailing list