Re: 25/100 G performance on freebsd

From: Benoit Chesneau <benoitc_at_enki-multimedia.eu>
Date: Mon, 22 Aug 2022 21:04:48 UTC
For now I haven't decided how to use them, but I was thinking of using them as separate conduits instead of bonding them, since the connection comes in via a FO 12. The cards are qlnxe or mlxen cards, so I am not sure SR-IOV will work, unfortunately. Are you using the ports of your cards separately?
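For reference, a minimal rc.conf sketch of the two options I am weighing (the mlxen0/mlxen1 names and the addresses are placeholders):

```
# Option A: keep the two 25G ports as separate conduits
ifconfig_mlxen0="inet 192.0.2.10/24 up"     # service network A
ifconfig_mlxen1="inet 198.51.100.10/24 up"  # service network B

# Option B: bond them with lagg/LACP instead
cloned_interfaces="lagg0"
ifconfig_mlxen0="up"
ifconfig_mlxen1="up"
ifconfig_lagg0="laggproto lacp laggport mlxen0 laggport mlxen1 inet 192.0.2.10/24"
```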

Note that I also have HPE-branded Intel X722 cards in these machines, and as you know they have buggy behaviour when using SR-IOV. For now they are not plugged into the network, and I was thinking of dropping them from the machines to reduce power usage. Maybe the latest driver update fixed it, but I'm not sure about that; I would need to try the latest from Intel, but it is not yet ported and my attempt to port it failed :)
Regarding vale: are you connecting the switch to the network using an epair or a vether interface?

Benoît Chesneau, Enki Multimedia
—
t. +33608655490

Sent with [Proton Mail](https://proton.me/) secure email.

------- Original Message -------
On Monday, August 15th, 2022 at 12:52, Santiago Martinez <sm@codenetworks.net> wrote:

> Hi Benoit,
>
> I'm not sure what the environment is. Is this to host VNFs? Will those 2x25G links both be forwarding, or will they be active/standby?
>
> In my case I use:
>
> * Vale for inter-VM traffic inside the same host.
>
> * Vale to connect to the external network (hence a physical interface). In my case, Intel 40G NICs. (See the sketch just after this list.)
>
> * SR-IOV for some specific use cases (for example, BNG stress test tools running on Linux).
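> A minimal sketch of that wiring, assuming valectl from the base system and a placeholder NIC name (ixl0); VM ports then attach to the same vale0 switch:
>
> ```
> valectl -a vale0:ixl0   # attach the physical NIC to software switch vale0
> valectl -h vale0:ixl0   # optionally attach the NIC's host stack side as well
> valectl                 # with no arguments, list existing vale ports
> ```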
>
> For JAILS:
>
> * I tend to use just VNET. I can't get more than 7.2 Gbps (frames >1400 B) from an epair without a bridge in the middle. (A jail.conf sketch follows this list.)
>
> * Right now I'm doing some tests with RSS enabled, but it is not looking good; actually, no traffic is passing at all...
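> A minimal VNET jail sketch along those lines (the path and names are placeholders), assuming the b side of an epair is handed to the jail:
>
> ```
> # /etc/jail.conf
> svc1 {
>     path = "/jails/svc1";
>     vnet;
>     vnet.interface = "epair0b";              # moved into the jail's vnet
>     exec.prestart  = "ifconfig epair0 create up";
>     exec.poststop  = "ifconfig epair0a destroy";
>     exec.start     = "/bin/sh /etc/rc";
>     exec.stop      = "/bin/sh /etc/rc.shutdown";
> }
> ```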
>
> If your NICs start to play nicely with SR-IOV, you can pass a VF to the jail; some NICs also allow creating L2 "high speed" switches on the card (I've never used one).
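> The VF side would look roughly like this (a sketch assuming iovctl(8) and an ixl(4) PF; the device name and counts are placeholders); a resulting VF interface can then be given to a jail as its vnet.interface:
>
> ```
> # /etc/iov/ixl0.conf
> PF {
>     device : "ixl0";
>     num_vfs : 4;
> }
> DEFAULT {
>     passthrough : false;   # VFs stay as host interfaces, usable by VNET jails
> }
> VF-0 {
>     passthrough : true;    # reserve this one for bhyve PCI passthru instead
> }
> ```
>
> Then `iovctl -C -f /etc/iov/ixl0.conf` creates the VFs.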
>
> Regarding L3 (in-kernel) forwarding, the overhead will be bigger than with vale, but then you can leverage multipath, VXLAN termination, IPFW, PF, dummynet, etc.
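> A rough sketch of that path (addresses are placeholders): address the host side of each epair, enable forwarding, and then filter or shape with PF/IPFW/dummynet on top:
>
> ```
> sysctl net.inet.ip.forwarding=1          # or gateway_enable="YES" in rc.conf
> ifconfig epair0a inet 10.0.1.1/30 up     # host side; the jail gets epair0b
> # inside the jail:
> #   ifconfig epair0b inet 10.0.1.2/30 up
> #   route add default 10.0.1.1
> ```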
>
> Hope it makes sense.
>
> Santi
>
> On 8/13/22 11:20, Benoit Chesneau wrote:
>
>> Santiago thanks for the help.
>>
>> I am curious about your vale setup. Do you have only internal bridges? Do you bridge the NIC interface, or are you doing L3?
>>
>> Anyway, I am trying to find the most efficient way to use the 25G interfaces while isolating the services on them. I am very hesitant about the approach and unsure whether FreeBSD these days can fit the bill:
>>
>> * run isolated services over the 2x25G links. Would jails limit the bandwidth?
>> * possibly run bhyve VMs when Linux or something else is needed.
>>
>> Would using only L3 routing solve some of the performance issues?
>>
>> benoit
>>
>> On Wed, Aug 10, 2022 at 23:31, Santiago Martinez <sm@codenetworks.net> wrote:
>>
>>> Hi Benoit, sorry to hear that SR-IOV is still not working on your HW.
>>>
>>> Have you tested the latest patch from Intel?
>>>
>>> Regarding Bhyve, you can use Vale switches (based on netmap).
>>> On my machines, I get around ~33 Gbps between VMs (on the same local machine), sometimes approaching 40 Gbps... (These are basic tests with iperf3 and TSO/LRO enabled.)
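>>> For reference, the relevant part of a bhyve command line with a vale-backed NIC looks roughly like this (slot numbers, the vm1 port name, and paths are placeholders):
>>>
>>> ```
>>> # guest NIC hangs off software switch vale0; "vm1" is just an arbitrary port name
>>> bhyve -c 2 -m 4G -H \
>>>   -s 0,hostbridge -s 31,lpc -l com1,stdio \
>>>   -s 3,virtio-blk,/vm/vm1/disk.img \
>>>   -s 5,virtio-net,vale0:vm1 \
>>>   -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
>>>   vm1
>>> ```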
>>>
>>> @Michael Dexter is working on a document that contains configuration examples and test results for the different network backends available in bhyve.
>>>
>>> If you need help, let me know and we can set up a call.
>>> Take care.
>>> Santi
>>>
>>> On 8/8/22 08:57, Benoit Chesneau wrote:
>>>
>>>> For some reason, I can't use SR-IOV on my FreeBSD machines (HPE DL160 Gen10) with the latest 25G HPE-branded cards. I opened tickets for that, but nothing has happened since.
>>>>
>>>> So I wonder if there is a good setup to use these cards with virtualization. What kind of performance should I expect using if_bridge? And what if I do L3 routing instead, using epair or tap (for bhyve)? Would that work better?
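>>>> For the if_bridge case, the setup in question would be roughly this (interface names are placeholders):
>>>>
>>>> ```
>>>> ifconfig bridge0 create
>>>> ifconfig tap0 create                        # backend for a bhyve guest NIC
>>>> ifconfig bridge0 addm mlxen0 addm tap0 up   # bridge the 25G port with the tap
>>>> ifconfig mlxen0 up
>>>> ```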
>>>>
>>>> Any hint is welcome,
>>>>
>>>> Benoît