Chelsio netmap support? (RELENG_11)
Navdeep Parhar
nparhar at gmail.com
Thu Mar 9 18:01:57 UTC 2017
On Wed, Mar 8, 2017 at 6:28 AM, Mike Tancsa <mike at sentex.net> wrote:
> On 3/7/2017 9:08 PM, Navdeep Parhar wrote:
>> On Tue, Mar 7, 2017 at 5:46 PM, Mike Tancsa <mike at sentex.net> wrote:
>>
>>>
>>> # dmesg | grep netm
>>> netmap: loaded module
>>> vcxl0: netmap queues/slots: TX 2/1023, RX 2/1024
>>> vcxl0: 1 txq, 1 rxq (NIC); 1 txq, 1 rxq (TOE); 2 txq, 2 rxq (netmap)
>>> vcxl1: netmap queues/slots: TX 2/1023, RX 2/1024
>>> vcxl1: 1 txq, 1 rxq (NIC); 1 txq, 1 rxq (TOE); 2 txq, 2 rxq (netmap)
>>> igb0: netmap queues/slots: TX 4/1024, RX 4/1024
>>> igb1: netmap queues/slots: TX 4/1024, RX 4/1024
>>>
>>> It maxes out at about 800Kpps with and without netmap. Is there a way
>>
>> Are you actually using a netmap-based application that acts as a
>> packet router, or is this just the vcxl interface running as a normal
>> ifnet?
>
> The latter, vcxl running as a normal ifnet. I thought there would be a
> benefit to utilizing netmap? Sorry, this is not clear to me.
The kernel's routing code does not utilize netmap even if it's
available. You'll need something like netmap-fwd for netmap-based
routing.
If you're not using netmap, there is no need to create the extra vcxl
interfaces.
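(For reference, netmap-fwd takes the interfaces to forward between on
the command line; the invocation below is a sketch based on its README,
not something from this thread:

    netmap-fwd vcxl0 vcxl1

It puts both ports into netmap mode and does the IPv4 forwarding in
userspace.)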
>
>>
>>> to increase the queues for the Chelsio NIC, like the onboard igb?
>>
>> If you're not running a netmap-based router, get rid of the num_vis=2
>> and simply try with the cxl0/cxl1 interfaces. They should each have 4
>> rxq/4 txq on your system. In case you want to increase the number of
>> queues, use this:
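(The tunable names themselves weren't quoted above. Going by the
cxgbe(4) loader tunables in FreeBSD 11, the usual knobs would be
something like this in /boot/loader.conf, with illustrative values:

    hw.cxgbe.nrxq10g=8   # rx queues per 10G port
    hw.cxgbe.ntxq10g=8   # tx queues per 10G port

These are loader tunables, so they take effect on the next reboot.)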
>
> The tests with the regular cxl also show the box topping out at 0.8Mpps
> for forwarding.
I would have expected multiple streams to do better. There is a lot
of information about forwarding performance on the bsdrp.net website.
Have you tried the tips there? The numbers reported are significantly
better than what you observe, so I suspect your router is CPU-bound.
https://bsdrp.net/documentation/examples/forwarding_performance_lab_of_a_hp_proliant_dl360p_gen8_with_10-gigabit_chelsio_t540-cr
https://bsdrp.net/documentation/examples/forwarding_performance_lab_of_a_superserver_5018a-ftn4_with_10-gigabit_chelsio_t540-cr
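(A quick way to check the CPU-bound theory with stock FreeBSD tools;
these commands are generic suggestions, not taken from the BSDRP pages:

    top -SHP       # kernel threads, per-CPU view; look for pegged cores
    vmstat -i      # interrupt rates per NIC queue
    netstat -w 1   # packets/s in, out, and dropped

If one or two cores sit near 100% while the rest are idle, the load is
not spreading across the queues.)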
There's a projects/routing branch that does much better than the stock
kernel. I'm not sure what work remains to be done before it can be
merged into head.
https://github.com/ocochard/netbenches/blob/master/Xeon_E5-2650-8Cores-Chelsio_T540-CR/forwarding-pf-ipfw/results/fbsd11-routing.r287531/README.md
Regards,
Navdeep
>
>>
>> The "NIC" queues are the normal tx/rx queues, the "netmap" queues are
>> active when the interface is in netmap mode.
>>
>> Does netsend generate a single flow or multiple flows? If it's a
>> single flow, it will use only a single queue.
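(One common way to generate genuinely distinct flows is netmap's
pkt-gen, which accepts address:port ranges so the NIC's RSS hash
spreads the traffic across rx queues. Addresses and the <router_mac>
below are placeholders:

    pkt-gen -i igb0 -f tx -D <router_mac> \
        -s 198.18.0.1:2000-198.18.0.1:2100 -d 198.18.1.1:3000

Varying the source port across 100 values yields roughly 100 flows.)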
>
> I think it's a single flow. However, I was using a separate box to
> generate a second flow as well. It still topped out at about 800Kpps
> before dropping packets.
>
> ---Mike
>
>
>
> --
> -------------------
> Mike Tancsa, tel +1 519 651 3400
> Sentex Communications, mike at sentex.net
> Providing Internet services since 1994 www.sentex.net
> Cambridge, Ontario Canada http://www.tancsa.com/