TCP RACK performance
hiren panchasara
hiren at strugglingcoder.info
Tue Oct 2 21:35:38 UTC 2018
Unsure whether your questions got answered, but this is a more
appropriate list for such questions.
Interesting results. As far as I know, people working on or testing RACK
don't really use plain NewReno as the congestion control, which might be
why they didn't notice this; that is just my speculation.
Linux uses CUBIC as its default congestion control. See if switching to
CUBIC helps on FreeBSD too; a quick sketch follows.
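
A minimal way to try it (a sketch; assumes cc_cubic.ko is available,
which it should be on a stock install):

# kldload cc_cubic
# sysctl net.inet.tcp.cc.available
# sysctl net.inet.tcp.cc.algorithm=cubic

Note that only new connections pick up the change.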
Cheers,
Hiren
On 09/11/18 at 05:41P, Chenyang Zhong wrote:
> Hi,
>
> I am really excited to see that @rrs from Netflix is adding TCP RACK
> and High Precision Timer System to the kernel, so I built a kernel
> (r338543) and ran some tests.
>
> I used the following kernel config, as suggested in commit rS334804.
>
> makeoptions WITH_EXTRA_TCP_STACKS=1
> options TCPHPTS
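>
> (For anyone reproducing this, a minimal sketch of the rebuild; the
> config name RACK and the amd64 path are assumptions, not from the
> original mail:)
>
> # cd /usr/src
> # make -j8 buildkernel KERNCONF=RACK
> # make installkernel KERNCONF=RACK
> # shutdown -r now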
>
> After booting the new kernel, I loaded the tcp_rack.ko,
> # kldload tcp_rack
>
> and checked the sysctl to make sure rack is there.
> # sysctl net.inet.tcp.functions_available
> net.inet.tcp.functions_available:
> Stack                           D Alias                            PCB count
> freebsd                         * freebsd                          3
> rack                              rack                             0
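>
> (As an aside, to load the stack at boot rather than by hand, the
> usual loader.conf convention should work; a sketch:)
>
> # echo 'tcp_rack_load="YES"' >> /boot/loader.conf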
>
> I ran the first test with the default stack. I was running iperf3 over
> a wireless network where the RTT fluctuated but there was no packet
> loss. Here is a ping result summary; the average and standard
> deviation of the RTT are relatively high.
>
> 39 packets transmitted, 39 packets received, 0.0% packet loss
> round-trip min/avg/max/stddev = 1.920/40.206/124.094/39.093 ms
>
> Here is the iperf3 result of a 30-second benchmark.
>
> [ ID] Interval           Transfer     Bitrate         Retr
> [  5]   0.00-30.00  sec   328 MBytes  91.8 Mbits/sec   62             sender
> [  5]   0.00-30.31  sec   328 MBytes  90.9 Mbits/sec                  receiver
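>
> (The exact invocation isn't shown above; a 30-second client run like
> this one would look roughly as follows, with SERVER as a placeholder:)
>
> # iperf3 -c SERVER -t 30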
>
> Then I switched to the new RACK stack.
> # sysctl net.inet.tcp.functions_default=rack
> net.inet.tcp.functions_default: freebsd -> rack
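>
> (Since the default only applies to new connections, one way to confirm
> the switch took effect is to re-check the PCB counts from earlier:)
>
> # sysctl net.inet.tcp.functions_available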
>
> Running the same iperf3 benchmark showed a 10-15% performance loss,
> and the number of retransmissions increased dramatically.
>
> [ ID] Interval           Transfer     Bitrate         Retr
> [  5]   0.00-30.00  sec   286 MBytes  79.9 Mbits/sec  271             sender
> [  5]   0.00-30.30  sec   286 MBytes  79.0 Mbits/sec                  receiver
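>
> (To see where the extra retransmits are counted, the TCP statistics
> are a reasonable first stop; the grep pattern is just a guess at the
> output wording:)
>
> # netstat -s -p tcp | grep -i retransmit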
>
> I then ran iperf3 on a Linux machine with kernel 4.15, which enables
> RACK by default. I verified that through sysctl:
>
> # sysctl net.ipv4.tcp_recovery
> net.ipv4.tcp_recovery = 1
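>
> (Relatedly, the congestion control in use on the Linux side can be
> checked the same way; 4.15 defaults to cubic:)
>
> # sysctl net.ipv4.tcp_congestion_control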
>
> The iperf3 result showed the same speed as the default FreeBSD
> stack, and the number of retransmissions matched the RACK stack on
> FreeBSD.
>
> [ ID] Interval           Transfer     Bandwidth       Retr
> [  4]   0.00-30.00  sec   330 MBytes  92.3 Mbits/sec  270             sender
> [  4]   0.00-30.00  sec   329 MBytes  92.1 Mbits/sec                  receiver
>
> I am not sure whether the performance issue comes from my
> configuration or from the new RACK implementation on FreeBSD. I am
> happy to provide more information if anyone is interested. Thanks
> again for all the hard work. I cannot wait to see TCP BBR on FreeBSD.
>
> Best,
> Chenyang