Similar entries in source tracking table

Max maximos at als.nnov.ru
Tue May 2 15:56:20 UTC 2017


It's OK to have more than one source record in this case. I suspect they
belong to different instances of your ruleset, and expired entries are
removed eventually.

However, it might be a problem if they have a big timeout.
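
For what it's worth, a minimal sketch of two ways to keep stale entries from
lingering. In pf.conf, the source-tracking timeout can be shortened (60s here
is only an example value):

set timeout src.track 60

Or the current source tracking entries can be flushed by hand:

# pfctl -F Sources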


02.05.2017 15:51, Babak Farrokhi wrote:
> Hello,
>
> After setting src.track to 0 I could still reproduce the situation, but this time
> both entries disappeared after a short while (as per the “interval” timer setting).
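>
> For reference, the purge interval that governs this can be read back with
> pfctl; the timer listing quoted below shows it as 30s:
>
> # pfctl -st | grep interval
> interval                     30s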
>
> Babak
>
> On 2 May 2017, at 16:26, Max wrote:
>
>> Could you set "src.track" to zero and check if the issue persists?
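>>
>> A minimal sketch of that change, assuming the usual /etc/pf.conf path: add
>>
>> set timeout src.track 0
>>
>> to pf.conf and reload the ruleset:
>>
>> # pfctl -f /etc/pf.conf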
>>
>>
>> 02.05.2017 10:01, Babak Farrokhi wrote:
>>> Hello,
>>>
>>> Here it is:
>>>
>>> # pfctl -vvsS
>>> No ALTQ support in kernel
>>> ALTQ related functions disabled
>>> 192.168.232.1 -> 192.168.0.104 ( states 0, connections 0, rate 0.0/0s )
>>>      age 00:00:53, expires in 00:59:52, 6 pkts, 504 bytes, nat rule 0
>>> 192.168.232.1 -> 192.168.0.104 ( states 0, connections 0, rate 0.0/0s )
>>>      age 00:01:21, expires in 00:59:20, 16 pkts, 1344 bytes, nat rule 0
>>>
>>> # pfctl -vvss
>>> No ALTQ support in kernel
>>> ALTQ related functions disabled
>>>
>>> I am running 11-STABLE r317643. Please note that this is only reproducible when you reload your pf configuration and tables.
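>>>
>>> A minimal sketch of the reproduction steps described above (198.51.100.1 is
>>> only an example destination; any traffic matching the nat rule should do):
>>>
>>> # pfctl -f /etc/pf.conf        (reload ruleset and tables)
>>> # ping -c 3 198.51.100.1       (run from a host behind the NAT)
>>> # pfctl -vsS                   (a duplicate source entry appears)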
>>>
>>> Babak
>>>
>>>
>>> On 2 May 2017, at 10:41, Max wrote:
>>>
>>>> Hello,
>>>> Can you show "pfctl -vsS" output? And what version of FreeBSD are you running?
>>>>
>>>>
>>>> 01.05.2017 17:59, Babak Farrokhi пишет:
>>>>> Hello,
>>>>>
>>>>> I was running an experiment with pf in which I encountered an unusual case.
>>>>>
>>>>> In a NAT setup, is it okay to have multiple similar entries in the source tracking table?
>>>>>
>>>>> # pfctl -sS
>>>>> 192.168.232.1 -> 192.168.0.104 ( states 0, connections 0, rate 0.0/0s )
>>>>> 192.168.232.1 -> 192.168.0.104 ( states 0, connections 0, rate 0.0/0s )
>>>>> 192.168.232.1 -> 192.168.0.104 ( states 0, connections 0, rate 0.0/0s )
>>>>>
>>>>> There are actually three similar bindings stuck in the source tracking table.
>>>>> vmstat output also confirms separate memory allocations for the three entries
>>>>> in the source tracking table:
>>>>>
>>>>> #  vmstat -z | egrep 'ITEM|^pf'
>>>>> ITEM                   SIZE  LIMIT     USED     FREE      REQ FAIL SLEEP
>>>>> pf mtags:                48,      0,       0,       0,       0,   0,   0
>>>>> pf states:              296, 8000005,       0,    1313,    2279,   0,   0
>>>>> pf state keys:           88,      0,       0,    2655,    4558,   0,   0
>>>>> pf source nodes:        136, 1500025,       3,     142,       7,   0,   0
>>>>> pf table entries:       160, 800000,       4,     121,      47,   0,   0
>>>>> pf table counters:       64,      0,       0,       0,       0,   0,   0
>>>>> pf frags:               112,      0,       0,       0,       0,   0,   0
>>>>> pf frag entries:         40, 100000,       0,       0,       0,   0,   0
>>>>> pf state scrubs:         40,      0,       0,       0,       0,   0,   0
>>>>>
>>>>>
>>>>> I can reproduce this behavior by reloading pf.conf and running traffic through
>>>>> the box, which adds a new entry to the source tracking table.
>>>>>
>>>>> Here is the nat rule:
>>>>>
>>>>> # pfctl -vsn
>>>>> nat on em0 inet from <internal-net> to any -> <external-net> round-robin sticky-address
>>>>>      [ Evaluations: 368       Packets: 50        Bytes: 2084        States: 0     ]
>>>>>      [ Inserted: uid 0 pid 6418 State Creations: 28    ]
>>>>>
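>>>>> For context, a hypothetical sketch of the pf.conf behind that rule (the
>>>>> table contents are invented examples, not taken from this setup):
>>>>>
>>>>> table <internal-net> { 192.168.232.0/24 }
>>>>> table <external-net> { 192.168.0.104, 192.168.0.105 }
>>>>> nat on em0 inet from <internal-net> to any -> <external-net> round-robin sticky-address
>>>>>
>>>>> sticky-address works on top of an address pool (round-robin here) and pins
>>>>> each source address to one translation address via the source tracking
>>>>> table, which is why duplicate entries there matter for this rule.
>>>>>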
>>>>> and timers:
>>>>>
>>>>> # pfctl -st
>>>>> tcp.first                    10s
>>>>> tcp.opening                  10s
>>>>> tcp.established            4200s
>>>>> tcp.closing                  10s
>>>>> tcp.finwait                  15s
>>>>> tcp.closed                   10s
>>>>> tcp.tsdiff                   30s
>>>>> udp.first                    60s
>>>>> udp.single                   30s
>>>>> udp.multiple                 60s
>>>>> icmp.first                   20s
>>>>> icmp.error                   10s
>>>>> other.first                  60s
>>>>> other.single                 30s
>>>>> other.multiple               60s
>>>>> frag                         30s
>>>>> interval                     30s
>>>>> adaptive.start                0 states
>>>>> adaptive.end                  0 states
>>>>> src.track                  3600s
>>>>>
>>>>> Any idea whether this behavior is expected?
>>>>>
>>>>> Babak


