netmap, netmap-fwd, and how many M packets-per-second?
Jim Thompson
jim at netgate.com
Fri Dec 2 00:54:27 UTC 2016
(I'm not subscribed to -hpc or -performance, so I've trimmed the
recipients.)
You're running iperf3 on an Ivy Bridge Xeon at 2.4GHz.
-N (--no-delay) only applies to TCP; it disables Nagle's algorithm, so it
has no effect with "-u" (--udp).
In any case, iperf3 still attempts to use frames large enough to fill the
bandwidth limit (you've requested '-b10000m' = 10Gbps) with a minimum
number of packets. It does this by writing, in a loop, as much data as
will fit in the socket buffer, backing off when it gets back EWOULDBLOCK:
https://github.com/esnet/iperf/blob/099244ec686b620393e9845478a554b1c7ca5c8b/src/net.c#L251
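If you want to see what iperf3 is actually putting on the wire, something
like the following will show per-frame sizes (the interface name and port
here are only examples; adjust them to your Chelsio ports and iperf3 runs):

  # -e prints the link-level header (including frame length), -nn skips name lookups
  tcpdump -e -nn -c 10 -i cxl0 udp and dst port 5201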
A full-sized Ethernet frame is 1538 bytes on the wire (without any 802.1Q
tags): 1500 bytes of payload plus the Ethernet header, FCS, preamble/SFD,
and the inter-frame gap.
1 Gbps:  1,000,000,000 b/s / (1,538 B/frame * 8 b/B) = 81,274 pps
10 Gbps: 812,743 pps
40 Gbps: 3,250,975 pps
You're quite near this (3.2 Mpps) with the FreeBSD 11.0 setup. Are you
sure you're not running a WITNESS/DEBUG kernel on 12-CURRENT?
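A quick way to check, assuming the kernel was built with INCLUDE_CONFIG_FILE
(GENERIC is):

  # WITNESS and INVARIANTS are enabled by default in -CURRENT's GENERIC kernel
  sysctl -n kern.conftxt | egrep 'WITNESS|INVARIANTS'
  # if WITNESS is compiled in, its sysctls will be present as well
  sysctl debug.witness.watch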
The version of netmap-fwd you have is still single-threaded, so you get
one core. You probably also don't have the hardware checksum offload
support for netmap that we did.
You're seeing about 1.5 Mpps, and at 1538-byte frames (on the wire) that's
around 18.5 Gbps.
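If you want to double-check that arithmetic, a throwaway shell/awk snippet
does it (nothing netmap-specific, just the math above):

  # pps at line rate for 1538-byte on-the-wire frames
  for rate in 1 10 40; do
      awk -v r=$rate 'BEGIN { printf "%2dG: %d pps\n", r, (r * 1e9) / (1538 * 8) }'
  done
  # and the other direction: 1.5 Mpps of 1538-byte frames
  awk 'BEGIN { printf "1.5 Mpps = %.1f Gbit/s\n", (1.5e6 * 1538 * 8) / 1e9 }'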
Try pkt-gen (netmap's packet generator); it will generate the small frames
you seek.
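For example, something along these lines (the interface name, addresses and
destination MAC below are placeholders; -D should be the MAC of the router's
receiving interface so the frames actually get forwarded):

  # transmit minimum-sized frames (-l 60, i.e. 64 bytes once the CRC is added)
  # as fast as the port allows
  pkt-gen -i cxl0 -f tx -l 60 -s 172.16.1.2 -d 172.16.2.2 -D 00:00:00:00:00:01 -w 4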
Cheers,
Jim
On Thu, Dec 1, 2016 at 5:55 PM, Jordan Caraballo <jordancaraballo87 at gmail.com> wrote:
> Feedback and/or tips and tricks more than welcome.
>
> We are trying to push a large volume of small (64-byte) packets through a
> router. So far the results have not been what we expected. We have tested
> FreeBSD 10.3, 11.0, 11.0-STABLE, and 12.0-CURRENT, with and without
> netmap. Based on the netmap documentation we were expecting about 5.0M pps;
> together with the routing improvements from the FreeBSD routing proposal,
> a total of 12.0M pps.
> Server Description:
>
> Dell PowerEdge R530 with 2 Intel(R) Xeon(R) E5-2695 CPUs, 18 cores per
> CPU, equipped with a Chelsio T-580-CR dual-port NIC in a PCIe x8 slot.
>
> BIOS tweaks:
>
> Hyperthreading (or Logical Processors) is turned off.
>
> Current results are shown below. Additional configurations can be given
> upon request.
>
> Test Environment:
> 5 clients and 5 servers - 4 Dell C6100 and 2 Dell R420; each one
> equipped with 10G NICs (4 with Intel 8259x and 6 with Mellanox ConnectX-2).
>
> A script executes the following on each host:
>
> #!/usr/local/bin/bash
> # PORTS, PORT, IP, PKT, TIME and STREAMS are assumed to be set earlier.
> # Iterate through ports and start a background test against each one.
> for ((i=1; i<=PORTS; i++)); do
>     PORT=$((PORT+1))
>     iperf3 -c 172.16.2.$IP -u -b10000m -i 0 -N -l$PKT -t$TIME -P$STREAMS -p$PORT &
>     #iperf3 -c 172.16.2.$IP -i0 -N -l$PKT -t$TIME -P$STREAMS -p$PORT &
> done
>
> # FreeBSD 10.3 - 4 streams to 80 ports from each client (5)
>
>             input        (Total)           output
>    packets  errs idrops      bytes    packets  errs      bytes colls drops
>       1.9M     0   1.3M       194M       540k     0        57M     0     0
>       2.1M     0   1.5M       216M       556k     0        58M     0     0
>       1.8M     0   1.3M       192M       553k     0        58M     0     0
>       1.7M     0   1.1M       174M       542k     0        57M     0     0
>       1.9M     0   1.4M       204M       537k     0        56M     0     0
>       1.6M     0   1.1M       171M       550k     0        58M     0     0
>       1.6M     0   1.1M       173M       546k     0        57M     0     0
>       1.7M     0   1.1M       176M       564k     0        59M     0     0
>       2.0M     0   1.5M       212M       543k     0        57M     0     0
>       2.1M     0   1.5M       219M       557k     0        58M     0     0
>       1.9M     0   1.4M       205M       547k     0        57M     0     0
>       1.7M     0   1.2M       179M       553k     0        58M     0     0
>
> # FreeBSD 11.0 - 4 streams to 80 ports from each client (5)
>
>             input        (Total)           output
>    packets  errs idrops      bytes    packets  errs      bytes colls drops
>       3.1M     0   1.8M       326M       1.3M     0       134M     0     0
>       2.6M     0   1.5M       269M       1.1M     0       116M     0     0
>       2.7M     0   1.5M       285M       1.2M     0       127M     0     0
>       2.4M     0   1.3M       257M       1.1M     0       119M     0     0
>       2.7M     0   1.5M       287M       1.3M     0       134M     0     0
>       2.5M     0   1.3M       262M       1.2M     0       127M     0     0
>       2.1M     0   1.1M       224M       1.0M     0       108M     0     0
>       2.7M     0   1.4M       285M       1.4M     0       143M     0     0
>       2.6M     0   1.3M       272M       1.3M     0       136M     0     0
>       2.5M     0   1.4M       265M       1.1M     0       120M     0     0
>
> # FreeBSD 11.0-STABLE - 4 streams to 80 ports from each client (5)
>
>             input        (Total)           output
>    packets  errs idrops      bytes    packets  errs      bytes colls drops
>       1.9M     0   849k       195M       1.0M     0       107M     0     0
>       1.9M     0   854k       196M       1.0M     0       106M     0     0
>       1.9M     0   851k       196M       1.0M     0       107M     0     0
>       1.9M     0   851k       196M       1.0M     0       107M     0     0
>       1.9M     0   851k       196M       1.0M     0       107M     0     0
>       1.9M     0   852k       196M       1.0M     0       107M     0     0
>       1.9M     0   847k       195M       1.0M     0       107M     0     0
>       1.9M     0   836k       195M       1.0M     0       107M     0     0
>       1.9M     0   843k       195M       1.0M     0       107M     0     0
>
> # FreeBSD 12.0-CURRENT - 4 streams to 80 ports from each client (5)
>
>             input        (Total)           output
>    packets  errs idrops      bytes    packets  errs      bytes colls drops
>       1.1M   259      0       115M       1.1M     0       115M     0     0
>       1.2M   273      0       124M       1.2M     0       124M     0     0
>       1.1M   200      0       112M       1.1M     0       112M     0     0
>       1.2M   290      0       122M       1.2M     0       122M     0     0
>       1.0M   132      0       107M       1.0M     0       107M     0     0
>       1.1M   303      0       118M       1.1M     0       118M     0     0
>       1.1M   278      0       112M       1.1M     0       112M     0     0
>       1.2M   243      0       122M       1.2M     0       122M     0     0
>       1.1M   168      0       112M       1.1M     0       112M     0     0
>       1.1M   161      0       112M       1.1M     0       112M     0     0
>
> # FreeBSD 12.0-CURRENT + Netmap - 4 streams to 80 ports from each client
> (5)
>
>             input        (Total)           output
>    packets  errs idrops      bytes    packets  errs      bytes colls drops
>       1.4M    10      0       144M       1.4M     0       144M     0     0
>       1.5M     6      0       159M       1.5M     0       159M     0     0
>       1.4M     5      0       144M       1.4M     0       144M     0     0
>       1.5M    14      0       158M       1.5M     0       158M     0     0
>       1.4M     5      0       151M       1.4M     0       151M     0     0
>       1.4M    10      0       152M       1.4M     0       152M     0     0
>       1.4M    12      0       148M       1.4M     0       148M     0     0
>       1.5M     9      0       155M       1.5M     0       155M     0     0
>       1.4M    23      0       151M       1.4M     0       151M     0     0
>       1.4M    11      0       151M       1.4M     0       151M     0     0
>
> # FreeBSD 12.0-CURRENT + Netmap + Tuning - 4 streams to 80 ports from
> each client (5)
>
>             input        (Total)           output
>    packets  errs idrops      bytes    packets  errs      bytes colls drops
>       1.4M    15      0       145M       1.4M     0       145M     0     0
>       1.5M    18      0       157M       1.5M     0       157M     0     0
>       1.5M    10      0       156M       1.5M     0       156M     0     0
>       1.5M    15      0       154M       1.5M     0       154M     0     0
>       1.4M    13      0       146M       1.4M     0       146M     0     0
>       1.5M    15      0       156M       1.5M     0       156M     0     0
>       1.5M     9      0       155M       1.5M     0       155M     0     0
>       1.5M    13      0       153M       1.5M     0       153M     0     0
>       1.4M    14      0       145M       1.4M     0       145M     0     0
>       1.4M    17      0       151M       1.4M     0       151M     0     0
>