Re: Performance test for CUBIC in stable/14

From: Cheng Cui <cc_at_freebsd.org>
Date: Wed, 23 Oct 2024 19:14:08 UTC
I see. The `newreno` vs. `cubic` results show only infrequent, non-persistent
packet retransmission. So TCP congestion control has little impact on
improving the performance here.

The performance bottleneck likely lies elsewhere. For example, the sender
CPU shows 97.7% utilization, almost entirely in system time. Would there be
any way to reduce CPU usage?
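Since the sender CPU time is nearly all system time, one place to look is whether the NIC offloads (TSO, checksum) are enabled on the interface. A rough diagnostic sketch, assuming a FreeBSD guest; the interface name `vtnet0` is a guess based on the VM hostname, so substitute your own:

```shell
# Show per-CPU and per-thread system time while a test is running,
# to see which kernel threads are consuming the sender CPU.
top -SHP

# Check whether TCP segmentation offload is enabled globally.
sysctl net.inet.tcp.tso

# Inspect enabled options and supported capabilities on the interface
# (`vtnet0` is an assumed name -- replace with yours).
ifconfig -m vtnet0

# If TSO4/checksum offload are supported but disabled, try enabling them
# (requires driver support):
# doas ifconfig vtnet0 txcsum rxcsum tso4
```

If the offloads are already on, comparing `top -SHP` output during the run may show whether the time goes to the iperf3 thread itself or to interrupt/netisr threads.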

cc

On Wed, Oct 23, 2024 at 11:04 AM void <void@f-m.fm> wrote:

> On Wed, Oct 23, 2024 at 08:28:01AM -0400, Cheng Cui wrote:
> >The latency does not sound like a problem to me. What is the performance of
> >the TCP congestion control algorithm `newreno`?
> >
> >You may need to load `newreno` first:
> >
> >cc@n1:~ % sudo kldload newreno
> >
> >cc@n1:~ % sudo sysctl net.inet.tcp.cc.algorithm=newreno
> >
> >net.inet.tcp.cc.algorithm: cubic -> newreno
> >
> >cc@n1:~ %
> >
> >And let me know the result of `newreno` vs. `cubic`, for example:
> >iperf3 -B ${src} --cport ${tcp_port} -c ${dst} -l 1M -t 20 -i 2 -VC
> newreno
>
> speedtests@vm4-fbsd14s:~ % doas kldload newreno
> speedtests@vm4-fbsd14s:~ % doas sysctl net.inet.tcp.cc.algorithm=newreno
> net.inet.tcp.cc.algorithm: cubic -> newreno
>
> speedtests@vm4-fbsd14s:~ % iperf3 -B 192.168.1.13 --cport 5201 -c 192.168.1.232 -l 1M -t 20 -i 2 -VC newreno
> iperf 3.17.1
> FreeBSD vm4-fbsd14s.home.arpa 14.2-PRERELEASE FreeBSD 14.2-PRERELEASE #0
> stable/14-n269252-e18ba5c5555a-dirty: Mon Oct 21 18:09:22 BST 2024
> root@vm4-fbsd14s.home.arpa:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64
> Control connection MSS 1460
> Time: Wed, 23 Oct 2024 14:41:11 UTC
> Connecting to host 192.168.1.232, port 5201
>        Cookie: tvrlkd2axzx24uui7gglzk4ni66ib7qy4kxa
>        TCP MSS: 1460 (default)
> [  5] local 192.168.1.13 port 5201 connected to 192.168.1.232 port 5201
> Starting Test: protocol: TCP, 1 streams, 1048576 byte blocks, omitting 0 seconds, 20 second test, tos 0
> [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> [  5]   0.00-2.01   sec   137 MBytes   572 Mbits/sec    0    629 KBytes
> [  5]   2.01-4.13   sec   159 MBytes   628 Mbits/sec    0    928 KBytes
> [  5]   4.13-6.12   sec   192 MBytes   809 Mbits/sec    0   1.16 MBytes
> [  5]   6.12-8.08   sec   153 MBytes   656 Mbits/sec    0   1.33 MBytes
> [  5]   8.08-10.08  sec   176 MBytes   737 Mbits/sec    0   1.51 MBytes
> [  5]  10.08-12.13  sec   211 MBytes   864 Mbits/sec    0   1.69 MBytes
> [  5]  12.13-14.04  sec   138 MBytes   606 Mbits/sec   73   1.01 MBytes
> [  5]  14.04-16.02  sec   155 MBytes   657 Mbits/sec    0   1.21 MBytes
> [  5]  16.02-18.10  sec   168 MBytes   678 Mbits/sec    0   1.39 MBytes
> [  5]  18.10-20.13  sec   188 MBytes   777 Mbits/sec    0   1.56 MBytes
>
> - - - - - - - - - - - - - - - - - - - - - - - - -
> Test Complete. Summary Results:
> [ ID] Interval           Transfer     Bitrate         Retr
> [  5]   0.00-20.13  sec  1.64 GBytes   699 Mbits/sec   73 sender
> [  5]   0.00-20.14  sec  1.64 GBytes   698 Mbits/sec receiver
> CPU Utilization: local/sender 97.7% (0.0%u/97.7%s), remote/receiver 19.5% (1.6%u/17.9%s)
> snd_tcp_congestion newreno
> rcv_tcp_congestion newreno
>
> iperf Done.
>
> ======================================
>
> speedtests@vm4-fbsd14s:~ % doas sysctl net.inet.tcp.cc.algorithm=cubic
> net.inet.tcp.cc.algorithm: newreno -> cubic
>
> speedtests@vm4-fbsd14s:~ % iperf3 -B 192.168.1.13 --cport 5201 -c 192.168.1.232 -l 1M -t 20 -i 2 -VC cubic
> iperf 3.17.1
> FreeBSD vm4-fbsd14s.home.arpa 14.2-PRERELEASE FreeBSD 14.2-PRERELEASE #0
> stable/14-n269252-e18ba5c5555a-dirty: Mon Oct 21 18:09:22 BST 2024
> root@vm4-fbsd14s.home.arpa:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64
> Control connection MSS 1460
> Time: Wed, 23 Oct 2024 14:51:30 UTC
> Connecting to host 192.168.1.232, port 5201
>        Cookie: wp5nkovyy5pwzqos4lsdlqv4loccl6iu5kdv
>        TCP MSS: 1460 (default)
> [  5] local 192.168.1.13 port 5201 connected to 192.168.1.232 port 5201
> Starting Test: protocol: TCP, 1 streams, 1048576 byte blocks, omitting 0 seconds, 20 second test, tos 0
> [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> [  5]   0.00-2.03   sec   184 MBytes   762 Mbits/sec    0    752 KBytes
> [  5]   2.03-4.07   sec   198 MBytes   811 Mbits/sec    0   1.05 MBytes
> [  5]   4.07-6.13   sec   193 MBytes   787 Mbits/sec    0   1.28 MBytes
> [  5]   6.13-8.08   sec   203 MBytes   874 Mbits/sec    0   1.48 MBytes
> [  5]   8.08-10.13  sec   192 MBytes   786 Mbits/sec    0   1.65 MBytes
> [  5]  10.13-12.13  sec   156 MBytes   653 Mbits/sec   44   1.40 MBytes
> [  5]  12.13-14.13  sec   167 MBytes   703 Mbits/sec   16   1.04 MBytes
> [  5]  14.13-16.07  sec   167 MBytes   721 Mbits/sec    0   1.25 MBytes
> [  5]  16.07-18.02  sec   114 MBytes   490 Mbits/sec    0   1.37 MBytes
> [  5]  18.02-20.04  sec   173 MBytes   719 Mbits/sec    0   1.53 MBytes
>
> - - - - - - - - - - - - - - - - - - - - - - - - -
> Test Complete. Summary Results:
> [ ID] Interval           Transfer     Bitrate         Retr
> [  5]   0.00-20.04  sec  1.71 GBytes   731 Mbits/sec   60 sender
> [  5]   0.00-20.05  sec  1.71 GBytes   730 Mbits/sec receiver
> CPU Utilization: local/sender 97.6% (0.0%u/97.6%s), remote/receiver 20.5% (1.8%u/18.6%s)
> snd_tcp_congestion cubic
> rcv_tcp_congestion cubic
>
> iperf Done.
> speedtests@vm4-fbsd14s:~ %
>

-- 
Best Regards,
Cheng Cui