Re: Performance issues with vnet jails + epair + bridge

From: Miroslav Lachman <000.fbsd_at_quip.cz>
Date: Thu, 12 Sep 2024 18:00:14 UTC
On 12/09/2024 19:16, Sad Clouds wrote:
> Hi, I'm using FreeBSD-14.1 and on this particular system I only have a
> single physical network interface, so I followed instructions for
> networking vnet jails via epair and bridge, e.g.
> 
> devel
> {
>          vnet;
>          vnet.interface = "e0b_devel";
>          exec.prestart += "/jails/jib addm devel genet0";
>          exec.poststop += "/jails/jib destroy devel";
> }
> 
> The issue is bulk TCP performance throughput between this jail and the
> host is quite poor, with one CPU spinning 100% in kernel and others
> sitting mostly idle.
> 
> It seems there is some lock contention somewhere, but I'm not sure if
> this is around vnet, epair or bridge subsystems. Are there
> other alternatives for vnet jails? Can anyone recommend specific
> deployment scenarios? I've seen references to netgraph which could be
> used with jails. Does it have better performance and scalability and
> could replace epair and bridge combination?


You can try disabling one (or all) of the following on your NIC with 
ifconfig: LRO, TSO, RXCSUM, TXCSUM.

"ifconfig em0 -rxcsum -txcsum -tso -lro" to disable them.
The same options without the dashes "-" to enable them again.

Use your NIC name instead of em0.
If disabling one of them fixes your problem, put it into the ifconfig 
line in your rc.conf.

Or you can try Netgraph Buddy (ngbuddy):

https://github.com/bellhyve/ngbuddy
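
If you want to try plain netgraph by hand before switching to ngbuddy 
(a rough sketch only, not ngbuddy's own setup; it assumes your NIC is 
genet0 and that the ng_ether, ng_bridge and ng_eiface modules are 
loaded), something like:

# attach an ng_bridge to the NIC and name it ngbr0
ngctl mkpeer genet0: bridge lower link0
ngctl name genet0:lower ngbr0
ngctl connect genet0: ngbr0: upper link1
# the NIC has to see all frames for bridging to work
ngctl msg genet0: setpromisc 1
ngctl msg genet0: setautosrc 0
# create a virtual ngeth0 interface hooked to the bridge
ngctl mkpeer ngbr0: eiface link2 ether

The ngeth0 interface created by the last command can then be handed 
to the jail via vnet.interface instead of the epair half.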

Kind regards
Miroslav Lachman