amd64/165783: rtadvd eats 100% cpu
Yury
hawk256 at yandex.ru
Tue Mar 6 17:30:12 UTC 2012
>Number: 165783
>Category: amd64
>Synopsis: rtadvd eats 100% cpu
>Confidential: no
>Severity: serious
>Priority: high
>Responsible: freebsd-amd64
>State: open
>Quarter:
>Keywords:
>Date-Required:
>Class: sw-bug
>Submitter-Id: current-users
>Arrival-Date: Tue Mar 06 17:30:12 UTC 2012
>Closed-Date:
>Last-Modified:
>Originator: Yury
>Release: 9.0-STABLE amd64
>Organization:
ArtTelecom
>Environment:
FreeBSD gw02.local 9.0-STABLE FreeBSD 9.0-STABLE #0: Sun Mar 4 17:52:22 MSK 2012 hawk at gw02.local:/usr/obj/usr/src/sys/Hawk amd64
>Description:
I am using this server as a NAS for PPTP and PPPoE connections from my users. For this I use mpd 5.6, dummynet, RADIUS, ipfw, and quagga.
Now I want to add IPv6. For this I use ipv6cp in mpd and rtadvd from FreeBSD. An rtadvd instance is started to announce a prefix on each connection (ngX): a separate config file and rtadvd process for every ngX that negotiates IPv6 (i.e. the host accepts ipv6cp). All of this works, but the rtadvd processes drive CPU usage to 100%, which is very bad for this server. Why does rtadvd do this?
I use cpuset so that the rtadvd processes run only on CPU 3 (a sketch of how each instance is launched follows the rtadvd config below).
Server: Intel SR1630GP, Xeon X3470, 4 GB RAM, two Intel igb(4) NICs (82576).
Configs:
gw02# cat /tmp/rtadvd.ng53
ng53:\
        :addr="2001:67c:3c4:1::":prefixlen#64:\
        :rdnss="2001:67c:3c4::1":dnssl="art-telecom.ru":
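
The link-up.sh referenced in mpd.conf below is not part of this report; a minimal sketch of the relevant part, assuming mpd passes the interface name as $1 and the protocol as $2 and using the per-interface config naming above, would be:

#!/bin/sh
# sketch only -- the real link-up.sh is not included in this report
iface=$1
proto=$2
# mpd reports IPv6CP coming up with proto "inet6" (check the mpd docs
# for your version)
if [ "$proto" = "inet6" ]; then
        # a per-interface config like /tmp/rtadvd.ng53 above is assumed
        # to already exist; pin the new rtadvd to CPU 3 as described
        cpuset -l 3 rtadvd -c /tmp/rtadvd.${iface} ${iface}
fi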
gw02# cat /boot/loader.conf
autoboot_delay="3"
alias_ftp_load="YES"
net.isr.maxthreads=4
net.isr.bindthreads=1
hw.igb.rx_process_limit=4096
net.graph.maxalloc=65536
net.graph.maxdata=32768
vm.kmem_size=1G
hw.igb.rxd=4096
hw.igb.txd=4096
hw.igb.max_interrupt_rate=32000
gw02# cat /etc/sysctl.conf
net.inet.icmp.icmplim=20000
kern.threads.max_threads_per_proc=5000
kern.ipc.nmbclusters=500000
kern.ipc.maxsockbuf=83886080
kern.maxfiles=2048000
kern.maxfilesperproc=200000
net.inet.tcp.sendspace=3217968
net.inet.tcp.recvspace=3217968
kern.ipc.somaxconn=32768
net.graph.maxdgram=8388608
net.graph.recvspace=8388608
net.route.netisr_maxqlen=4096
net.isr.defaultqlimit=4096
net.inet.ip.dummynet.hash_size=1024
net.inet.ip.fw.dyn_buckets=1024
net.inet.ip.dummynet.pipe_slot_limit=2048
net.inet.ip.fastforwarding=0
net.inet.ip.dummynet.io_fast=0
net.isr.direct=0
gw02# cat /usr/local/etc/mpd5/mpd.conf
startup:
        # configure the console
        set console self 127.0.0.1 5005
        set user hawk 128 admin
        set console open
        # configure the web server
        set web self 10.11.0.12 5006
        set user hawk 128 admin
        set web open

default:
        load conf_bundle
        load pptpd
        load pppoed

# netflow:
# set netflow peer 10.11.0.2 12345
# set netflow timeouts 300 1800

radius:
        set radius server 10.11.0.2 128 1812 1813
        set radius retries 3
        set radius timeout 3
        set radius me 10.11.0.12
        set auth acct-update 300
        set auth enable radius-auth
        set auth enable radius-acct
        set radius enable message-authentic

conf_bundle:
        create bundle template B_ppp
        set iface disable on-demand
        set iface enable proxy-arp
        set iface enable tcpmssfix
        # set iface enable netflow-in
        # set iface enable netflow-out
        set iface up-script /usr/local/etc/mpd5/link-up.sh
        set iface down-script /usr/local/etc/mpd5/link-down.sh
        set ipcp ranges 178.217.96.12/32 0.0.0.0/0
        set ipcp dns 178.217.96.1
        set bundle enable ipv6cp

pptpd:
        create link template L_pptp pptp
        set link max-children 5000
        set auth disable internal
        load radius
        # load netflow
        set link action bundle B_ppp
        set link no pap
        set link enable chap
        set link enable chap-md5
        set link enable chap-msv1
        set link accept chap-msv1
        set link enable chap-msv2
        set link disable eap
        set link keep-alive 60 160
        set link max-redial -1
        set pptp self 0.0.0.0
        set pptp disable windowing
        set pptp enable always-ack
        set link enable incoming

pppoed:
        create link template L_pppoe pppoe
        set link max-children 5000
        set auth disable internal
        load radius
        # load netflow
        set link action bundle B_ppp
        set link no pap
        set link enable chap
        set link enable chap-md5
        set link enable chap-msv1
        set link accept chap-msv1
        set link enable chap-msv2
        set link disable eap
        set link keep-alive 60 160
        set link max-redial -1
        create link template vlan198 L_pppoe
        set pppoe iface vlan198
        set pppoe service art
        set link enable incoming
        create link template vlan101 L_pppoe
        set pppoe iface vlan101
        set pppoe service art
        set link enable incoming
        create link template vlan102 L_pppoe
        set pppoe iface vlan102
        set pppoe service art
        set link enable incoming
..
last pid: 12961; load averages: 165.24, 163.98, 156.74 up 1+00:32:29 21:18:08
365 processes: 171 running, 163 sleeping, 31 waiting
CPU 0: 0.0% user, 0.0% nice, 13.4% system, 7.5% interrupt, 79.1% idle
CPU 1: 0.4% user, 0.0% nice, 7.5% system, 9.8% interrupt, 82.3% idle
CPU 2: 0.0% user, 0.0% nice, 3.5% system, 15.7% interrupt, 80.7% idle
CPU 3: 28.3% user, 0.0% nice, 64.6% system, 7.1% interrupt, 0.0% idle
Mem: 151M Active, 561M Inact, 917M Wired, 64K Cache, 618M Buf, 4231M Free
Swap: 4096M Total, 4096M Free
PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND
11 root 155 ki31 0K 64K CPU0 0 20.4H 89.79% idle{idle: cpu0}
11 root 155 ki31 0K 64K RUN 1 20.7H 86.96% idle{idle: cpu1}
11 root 155 ki31 0K 64K CPU2 2 20.3H 81.30% idle{idle: cpu2}
12 root -92 - 0K 496K WAIT 2 58:18 8.40% intr{irq269: igb1:que}
12 root -92 - 0K 496K WAIT 3 46:52 7.86% intr{irq267: igb0:que}
12 root -92 - 0K 496K WAIT 0 53:35 7.18% intr{irq264: igb0:que}
12 root -92 - 0K 496K WAIT 2 38:42 6.40% intr{irq266: igb0:que}
12 root -92 - 0K 496K WAIT 1 37:01 5.37% intr{irq265: igb0:que}
13 root -16 - 0K 64K sleep 2 36:58 4.59% ng_queue{ng_queue3}
13 root -16 - 0K 64K sleep 2 37:07 4.39% ng_queue{ng_queue1}
13 root -16 - 0K 64K sleep 1 36:57 4.39% ng_queue{ng_queue0}
13 root -16 - 0K 64K sleep 0 36:59 4.20% ng_queue{ng_queue2}
92241 root 72 0 12180K 2060K RUN 3 0:58 0.29% rtadvd
6206 root 20 0 75716K 21052K select 1 9:41 0.20% mpd5{mpd5}
55018 root 72 0 12180K 2276K RUN 3 1:45 0.20% rtadvd
94795 root 72 0 12180K 2288K RUN 3 1:19 0.20% rtadvd
96217 root 72 0 12180K 2292K RUN 3 1:17 0.20% rtadvd
96753 root 72 0 12180K 2276K RUN 3 1:16 0.20% rtadvd
99202 root 72 0 12180K 2040K RUN 3 0:55 0.20% rtadvd
28135 root 72 0 12180K 2044K RUN 3 0:42 0.20% rtadvd
51803 root 72 0 12180K 2032K RUN 3 0:33 0.20% rtadvd
45074 root 72 0 12180K 2020K RUN 3 0:24 0.20% rtadvd
46505 root 72 0 12180K 2020K RUN 3 0:23 0.20% rtadvd
68796 root 72 0 12180K 2024K RUN 3 0:17 0.20% rtadvd
82492 root 72 0 12180K 2008K RUN 3 0:14 0.20% rtadvd
20190 root 72 0 12180K 1996K RUN 3 0:06 0.20% rtadvd
4733 root 72 0 12180K 1996K RUN 3 0:02 0.20% rtadvd
0 root -92 0 0K 336K - 1 18:53 0.10% kernel{igb0 que}
46545 root 72 0 12180K 2264K RUN 3 1:52 0.10% rtadvd
47535 root 72 0 12180K 2276K RUN 3 1:52 0.10% rtadvd
51586 root 72 0 12180K 2252K RUN 3 1:49 0.10% rtadvd
53974 root 72 0 12180K 2256K RUN 3 1:48 0.10% rtadvd
54423 root 72 0 12180K 2256K RUN 3 1:46 0.10% rtadvd
63216 root 72 0 12180K 2260K RUN 3 1:39 0.10% rtadvd
70566 root 72 0 12180K 2276K RUN 3 1:35 0.10% rtadvd
71568 root 72 0 12180K 2268K RUN 3 1:32 0.10% rtadvd
78424 root 72 0 12180K 2264K RUN 3 1:30 0.10% rtadvd
78288 root 72 0 12180K 2268K RUN 3 1:30 0.10% rtadvd
80315 root 72 0 12180K 2268K RUN 3 1:27 0.10% rtadvd
81627 root 72 0 12180K 2284K RUN 3 1:25 0.10% rtadvd
88424 root 72 0 12180K 2272K RUN 3 1:22 0.10% rtadvd
94847 root 72 0 12180K 2272K RUN 3 1:19 0.10% rtadvd
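
The rtadvd processes above sit in state RUN on CPU 3. A spinning instance can be inspected with standard tools (PID taken from the list above; a suggested check, output not included here):

gw02# procstat -kk 46545        # kernel stack: shows what it blocks or spins in
gw02# truss -p 46545            # attach and watch the syscall loop, if any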
>How-To-Repeat:
Run rtadvd on the ngX interfaces created by mpd. The problem shows up already with the first rtadvd instance.
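A minimal sequence, assuming an mpd-created ng0 with IPv6CP negotiated (the prefix below is the documentation prefix; names are illustrative):

gw02# cat > /tmp/rtadvd.ng0 <<'EOF'
ng0:\
        :addr="2001:db8::":prefixlen#64:
EOF
gw02# cpuset -l 3 rtadvd -c /tmp/rtadvd.ng0 ng0
gw02# top -P        # the rtadvd process soon shows state RUN on CPU 3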
>Fix:
>Release-Note:
>Audit-Trail:
>Unformatted: