ipfw Table Organization

Michael Sierchio kudzu at tenebras.com
Tue Aug 24 22:31:16 UTC 2021


On Tue, Aug 24, 2021 at 2:47 PM Tim Daneliuk via freebsd-questions
<freebsd-questions at freebsd.org> wrote:

> Is there any particular advantage - performance or otherwise - to breaking up
> a large ipfw table into smaller tables?
>
> We have a few firewalls approaching 100,000 rules for blocking addresses
> and CIDR blocks.


Do you really mean 100,000 firewall rules?  100,000 CIDR blocks in a table is
not a problem.  You should probably consolidate CIDR blocks before adding them
to a table, because table lookups use longest-prefix match - for example, if
10.0.0.0/8 is already in the table, more-specific entries inside it add
nothing for simple blocking.
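
If the source feeds contain many adjacent or overlapping prefixes, collapsing
them first keeps the table small.  A minimal sketch of one way to do that -
it assumes python3 is available and an IPv4-only list, and the file names are
made up; ipaddress.collapse_addresses() merges adjacent and overlapping
networks:

# raw-prefixes.txt: one address or CIDR block per line
sort -u raw-prefixes.txt | python3 -c '
import sys, ipaddress
nets = (ipaddress.ip_network(line.strip(), strict=False)
        for line in sys.stdin if line.strip())
# collapse_addresses() returns the smallest covering set of prefixes
for net in ipaddress.collapse_addresses(nets):
    print(net)
' > collapsed.txt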


> The IPs are read from separate text files in a loop in the firewall init
> code, but are all written to a single table.


I have a framework that collects IPs and CIDR blocks from various sources
(for blocking).  Two tables are used for this – so I can atomically replace
the table contents via table swap.  None of this is done in the firewall init
code; it's all done via a cron job.  I use the table arg to store an integer
that says what the source was.  The firewall init script only gets invoked at
startup, or when rules change.
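
In outline, the cron-driven refresh looks something like the following.  This
is only a sketch - the table names, the example entries, and the single rule
are hypothetical, and both tables are assumed to already exist (created at
init) - but the swap step is the atomic table-contents exchange mentioned
above:

# Rules reference the live table once; the stored value identifies the source:
#   ipfw add 100 deny ip from 'table(blocklist)' to any
ipfw -q table blocklist_new flush
ipfw -q table blocklist_new add 192.0.2.0/24 1       # value 1 = source feed 1
ipfw -q table blocklist_new add 198.51.100.0/24 2    # value 2 = source feed 2
ipfw -q table blocklist swap blocklist_new           # atomic cutover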

> This is easy to maintain, but the concern is that we may be clobbering
> runtime performance.

Did you know you can add an entire file to a table, as long as each line
consists of "<CIDR> <table arg>"?
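
The exact invocation isn't shown here, but one common way to do the bulk add -
a sketch, assuming an ipfw whose table add accepts multiple address/value
pairs on one command line (recent versions do), with a made-up file and table
name:

# blocklist.txt contains lines like:
#   192.0.2.0/24 1
#   198.51.100.0/24 2
ipfw -q table blocklist_new add $(cat blocklist.txt)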

Empirically, this works for up to 8192 entries at a time, so I split the file
into chunks of that size, add each chunk, then delete the splits.

My pcengines box has

CPU: AMD GX-412TC SOC (998.15-MHz K8-class CPU)


root@hearst:/usr/src 210# ipfw table reject list | wc -l
   99787

Something with decent power could easily filter 250,000 CIDR blocks.

