dummynet dropping too many packets
rihad
rihad at mail.ru
Wed Oct 7 11:23:39 UTC 2009
rihad wrote:
> Oleg Bulyzhin wrote:
>> On Wed, Oct 07, 2009 at 03:16:27PM +0500, rihad wrote:
>>> Oleg Bulyzhin wrote:
>>>> On Wed, Oct 07, 2009 at 02:23:47PM +0500, rihad wrote:
>>>>
>>>> A few questions:
>>>> 1) Why are you not using fastforwarding?
>>>> 2) The search_steps/searches ratio is not that good; are you using the
>>>> 'buckets' keyword in your pipe configuration?
>>>> 3) You have net.inet.ip.fw.one_pass = 0; is that intended?
>>>>
>>> 1) and 3): the box does traffic accounting and shaping, so I need
>>> one_pass=0 to do both ngtee and pipes.
>> I still can't see any reason not to use fastforwarding, and usually an
>> ipfw ruleset can be rearranged to use dummynet & netgraph with
>> one_pass=1.
>
> You probably have some special sources of documentation ;-) According to
> man ipfw, both "netgraph/ngtee" and "pipe" decide the fate of the packet
> unless one_pass=0. Or do you mean sprinkling smart skiptos here and
> there? ;-)
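
(For the archives: enabling fastforwarding itself is just a sysctl; whether
the ruleset tolerates it is the separate question being debated above:)

    # enable the fast IP forwarding path (this box forwards bce0 <-> bce1)
    sysctl net.inet.ip.fastforwarding=1
    # to keep it across reboots, add to /etc/sysctl.conf:
    # net.inet.ip.fastforwarding=1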
>
>> Could you show your 'ipfw show' output? (Hide IP addresses if you
>> wish, but keep the counters, please.)
>>
> Here it is, in all its glory:
>
> 00100 10434423 1484891105 allow ip from any to any via lo0
> 00200 2 14 deny ip from any to 127.0.0.0/8
> 00300 1 4 deny ip from 127.0.0.0/8 to any
> 01000 3300039938 327603104711 allow ip from any to any in
> 01010 26214900 421138433 allow ip from me to any out
> 01020 5453857 46806278 allow icmp from any to any out
> 01030 3268289053 327224694165 ngtee 1 ip from any to any out
> 01040 18681181 1089636054 skipto 1100 ip from table(127) to any out recv bce0 xmit bce1
> 01060 777488848 76743392754 pipe tablearg ip from any to table(0) out recv bce0 xmit bce1
> 01070 776831109 76682499457 allow ip from any to table(0) out recv bce0 xmit bce1
> 01100 13102697 808411842 pipe tablearg ip from any to table(2) out
> 65535 662648946 66711487830 allow ip from any to any
>
> table(127) is static in nature and has under 100 entries.
> table(0) and table(2) contain the same client IP addresses but map them
> to different pipe IDs.
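
(For anyone reading along: with 'pipe tablearg' the table value is used as
the pipe number, so the per-client setup looks roughly like this; the
addresses and numbers are made up for illustration:)

    # the table value (last column) becomes the pipe number via tablearg
    ipfw table 0 add 192.0.2.10/32 100
    ipfw table 2 add 192.0.2.10/32 200
    ipfw pipe 100 config bw 512Kbit/s
    ipfw pipe 200 config bw 1Mbit/s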
>
>>> 2) Hm, I'm not using "buckets", but rather
>>> net.inet.ip.dummynet.hash_size. It's at the default, 64. I tried
>>> setting net.inet.ip.dummynet.hash_size=65536 in sysctl.conf, but
>>> somehow it was still 64 after reboot, so I left it at 64. Should I
>>> make it 128? 256? Does it matter that much? The load is approx.
>>> 70-120 consumers per pipe, so I thought a bucket size of 64 was enough.
>> It depends on the traffic pattern; try increasing it and watch the
>> search_steps/searches ratio (~1.001 is good enough).
>>
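(For reference, a pipe reconfigured along the lines Oleg suggests would look
something like this; the bandwidth and mask here are illustrative, not my
real config:)

    # per-destination dynamic queues, hashed into 256 buckets
    ipfw pipe 100 config bw 512Kbit/s mask dst-ip 0xffffffff buckets 256
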
After reconfiguring all pipes with hash_size=256 (four times the previous
value), the ratio has started decreasing, slowly. I ran the following every
5-100 seconds; its successive outputs are listed beneath the command:
[rihad at billing ~]$ echo "$(sysctl -n net.inet.ip.dummynet.search_steps)/$(sysctl -n net.inet.ip.dummynet.searches)" | bc -l
1.10639566354978963640
1.10638988711274017516
1.10637649664889937145
1.10636898392044547569
1.10634798328730542254
1.10608591323771604268
1.10600110020578292697
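
Note that searches and search_steps are cumulative since boot, so the ratio
above can only creep toward its new steady-state value: the totals are still
dominated by the old hash size. Deltas over an interval show the current
behaviour better; a quick sketch (the 60-second interval is arbitrary):

    #!/bin/sh
    # search_steps/searches ratio over the last interval, not since boot
    s1=$(sysctl -n net.inet.ip.dummynet.search_steps)
    n1=$(sysctl -n net.inet.ip.dummynet.searches)
    sleep 60
    s2=$(sysctl -n net.inet.ip.dummynet.search_steps)
    n2=$(sysctl -n net.inet.ip.dummynet.searches)
    echo "($s2 - $s1) / ($n2 - $n1)" | bc -l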
But packets are still being dropped. Sampled once a minute:
Wed Oct 7 11:00:44 UTC 2009 34630 output packets dropped due to no bufs, etc.
Wed Oct 7 11:01:44 UTC 2009 34630 output packets dropped due to no bufs, etc.
Wed Oct 7 11:02:44 UTC 2009 34729 output packets dropped due to no bufs, etc.
Wed Oct 7 11:03:44 UTC 2009 34729 output packets dropped due to no bufs, etc.
Wed Oct 7 11:04:44 UTC 2009 34861 output packets dropped due to no bufs, etc.
Wed Oct 7 11:05:44 UTC 2009 34932 output packets dropped due to no bufs, etc.
Wed Oct 7 11:06:44 UTC 2009 35499 output packets dropped due to no bufs, etc.
Wed Oct 7 11:07:45 UTC 2009 35780 output packets dropped due to no bufs, etc.
Wed Oct 7 11:08:45 UTC 2009 35841 output packets dropped due to no bufs, etc.
Wed Oct 7 11:09:45 UTC 2009 36348 output packets dropped due to no bufs, etc.
Wed Oct 7 11:10:45 UTC 2009 36568 output packets dropped due to no bufs, etc.
Wed Oct 7 11:11:45 UTC 2009 36673 output packets dropped due to no bufs, etc.
Wed Oct 7 11:12:45 UTC 2009 36673 output packets dropped due to no bufs, etc.
Wed Oct 7 11:13:46 UTC 2009 36673 output packets dropped due to no bufs, etc.
Wed Oct 7 11:14:46 UTC 2009 36673 output packets dropped due to no bufs, etc.
Wed Oct 7 11:15:46 UTC 2009 36673 output packets dropped due to no bufs, etc.
Wed Oct 7 11:16:46 UTC 2009 36849 output packets dropped due to no bufs, etc.
Wed Oct 7 11:17:46 UTC 2009 37234 output packets dropped due to no bufs, etc.
Wed Oct 7 11:18:46 UTC 2009 37949 output packets dropped due to no bufs, etc.
Wed Oct 7 11:19:47 UTC 2009 38043 output packets dropped due to no bufs, etc.
Wed Oct 7 11:20:47 UTC 2009 38549 output packets dropped due to no bufs, etc.
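
(These figures are the cumulative "netstat -s -p ip" counter: it rose from
34630 to 38549 over twenty minutes, i.e. roughly 200 drops per minute, or
about 3 per second. A sampling loop along these lines, give or take the
exact formatting, produces such a log:)

    #!/bin/sh
    # log the cumulative IP output-drop counter once a minute
    while :; do
        echo "$(date -u) $(netstat -s -p ip | grep 'output packets dropped')"
        sleep 60
    done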
2200-2350 users are online (as loaded into the ipfw tables). I'll wait and
see whether the drop rate approaches 500-1000 per second as the number of
online users gets close to 3-4K.
net.isr.direct=0