Re: FreeBSD TCP (with iperf3) comparison with Linux
- In reply to: Randall Stewart : "Re: FreeBSD TCP (with iperf3) comparison with Linux"
Date: Thu, 29 Jun 2023 14:03:32 UTC
Randall,

Thank you for your response.

I set the following sysctls:

sysctl kern.ipc.maxsockbuf=18874368   (18 MB)
sysctl net.inet.tcp.sendbuf_max=18874368
sysctl net.inet.tcp.recvbuf_max=18874368

Loaded the Cubic CC algorithm:
1. Load the module: kldload cc_cubic
2. Set the CC algorithm: sysctl net.inet.tcp.cc.algorithm=cubic

Below are the commands I used to run iperf3:
server: iperf3 -s -B 192.168.1.8
client: iperf3 -c 192.168.1.8 -B 192.168.2.8 -t 180 -w 16m -C cubic

This should set the socket buffers for the connections to 16 MB and run the tests for 180 seconds.

Hope this is clear.

Regards
Murali

From: Randall Stewart <rrs@netflix.com>
Date: Thursday, 29 June 2023 at 5:16 PM
To: Murali Krishnamurthy <muralik1@vmware.com>
Cc: freebsd-transport@FreeBSD.org <freebsd-transport@FreeBSD.org>
Subject: Re: FreeBSD TCP (with iperf3) comparison with Linux

Greetings Murali:

I am unclear from your "Socket buffer" figure as to what you did. Did you set both the send and receive windows to 16 MB with the SO_SNDBUF and SO_RCVBUF options, or were you just using auto-scaling to have the socket buffers grow?

Thanks
R

On Jun 29, 2023, at 5:51 AM, Murali Krishnamurthy <muralik1@vmware.com> wrote:

Hello FreeBSD Transport experts,

We are evaluating the performance of a FreeBSD 13 VM on an ESX hypervisor over a long-RTT path and compared it with a Linux VM on the same hypervisor. Linux performs substantially better and gets close to the BDP limit, whereas FreeBSD 13 does not fill the pipe. We are trying to understand what could cause such a large difference and suspect we are missing something. Could you please help us find a way to make it perform better?

Setup details:

We have 2 ESX hypervisors, each running one FreeBSD 13 VM and one Ubuntu 23.04 VM (Linux kernel 6.2). We then ran iperf between:
1. FreeBSD 13 <-> FreeBSD 13
2. Ubuntu <-> Ubuntu

Even though the network environment was the same in both cases, Ubuntu performs much better.

Connection parameters:
Socket buffer: 16 MB
TCP CC algorithm: Cubic (chosen because it is suited to long fat networks)
Ping RTT: 100 ms between the two endpoints
All other parameters were left at their defaults on both Linux and FreeBSD.

Throughput ceiling for a 16 MB socket buffer at 100 ms RTT: 16 MB * 8 bits/byte / 0.1 s = 1280 Mbps, or about 1.25 Gbps.

Ubuntu consistently reaches around 1 Gbps, almost hitting that limit. FreeBSD 13 only shows a bitrate in the range of 300-600 Mbps, so roughly half of what Linux achieves. With a smaller socket buffer of 4 MB, FreeBSD and Linux perform the same and both consistently reach the corresponding ceiling of about 300 Mbps. The problem only appears with the larger socket buffer.

Please let us know if there are ways to tune the system parameters to make FreeBSD perform better. Any other suggestions or questions are welcome.

Regards
Murali

------
Randall Stewart
rrs@netflix.com
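
On the SO_SNDBUF/SO_RCVBUF question: iperf3's -w option asks the tool to set the socket buffers explicitly with setsockopt(), which, as I understand it, opts that socket out of the kernel's automatic buffer scaling on both FreeBSD and Linux. A minimal C sketch of that pattern follows; it reuses the 16 MB size and the server address from the commands above purely for illustration and is not code taken from iperf3 itself.

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int bufsz = 16 * 1024 * 1024;            /* 16 MB, matching iperf3 -w 16m */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Request fixed send/receive buffer sizes for this socket; the request
     * can only succeed if kern.ipc.maxsockbuf is large enough (hence the
     * 18 MB setting earlier in the thread). */
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bufsz, sizeof(bufsz)) < 0)
        perror("SO_SNDBUF");
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bufsz, sizeof(bufsz)) < 0)
        perror("SO_RCVBUF");

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(5201);                       /* iperf3's default port */
    inet_pton(AF_INET, "192.168.1.8", &sin.sin_addr); /* server address from the thread */

    if (connect(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
        perror("connect");

    close(fd);
    return 0;
}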
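
As a quick sanity check of the ~1.25 Gbps ceiling quoted in the original mail, a few lines of C reproducing the same arithmetic (window size and RTT are the values from the thread; the 1024-based units follow the mail's own calculation):

#include <stdio.h>

int main(void)
{
    /* Throughput ceiling = window size / RTT. */
    double window_bytes = 16.0 * 1024 * 1024;   /* 16 MB socket buffer (-w 16m) */
    double rtt_s = 0.100;                       /* 100 ms ping RTT */
    double bits_per_sec = window_bytes * 8.0 / rtt_s;

    /* Prints 1280 Mbit/s and ~1.25 Gbit/s in 1024-based units, matching the
     * figures in the original mail; the 300-600 Mbps observed on FreeBSD is
     * well below this ceiling. */
    printf("ceiling = %.0f Mbit/s (~%.2f Gbit/s)\n",
           bits_per_sec / (1024.0 * 1024.0),
           bits_per_sec / (1024.0 * 1024.0 * 1024.0));
    return 0;
}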