Date: Wed, 23 Oct 2024 22:43:21 +0100
From: void <void@f-m.fm>
To: freebsd-net@freebsd.org
Subject: Re: Performance test for CUBIC in stable/14

On Wed, Oct 23, 2024 at 03:14:08PM -0400, Cheng Cui wrote:

>I see. The result of `newreno` vs. `cubic` shows non-constant/infrequent
>packet retransmission. So TCP congestion control has little impact on
>improving the performance.
>
>The performance bottleneck may come from somewhere else. For example, the
>sender CPU shows 97.7% utilization. Would there be any way to reduce CPU
>usage?

There are 11 VMs running on the bhyve server. None of them is very busy,
but the server shows:

% uptime
9:54p.m.  up 8 days, 6:08, 22 users, load averages: 0.82, 1.25, 1.74

The test VM, vm4-fbsd14s:

% uptime
9:55PM  up 2 days, 3:12, 5 users, load averages: 0.35, 0.31, 0.21

It has

% sysctl hw.ncpu
hw.ncpu: 8

and

avail memory = 66843062272 (63746 MB)

so it's not short of resources. A test just now gave these results:

- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-20.04  sec  1.31 GBytes   563 Mbits/sec    0   sender
[  5]   0.00-20.06  sec  1.31 GBytes   563 Mbits/sec        receiver
CPU Utilization: local/sender 94.1% (0.1%u/94.1%s), remote/receiver 15.5% (1.5%u/13.9%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic

iperf Done.

so I'm not sure how the utilization figure was synthesised, unless it's
derived from something like 'top', where 1.00 is 100%. Load while running
the test got to 0.83, as observed in 'top' in another terminal.

Five mins after the test, load in the VM is: 0.32, 0.31, 0.26
and on the bhyve host: 0.39, 0.61, 1.11

Before we began testing, I suspected the speed issue was caused by
something to do with interrupts and/or polling, and/or HZ: something that
Linux handles differently, giving better results on the same bhyve host.
If tweaking sysctls doesn't make much of a difference, maybe rebuilding
the kernel with a different scheduler on both the host and the FreeBSD
VMs will give a better result for FreeBSD.
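
For reference, roughly how the above can be reproduced, plus a quick look
at the scheduler/HZ angle before committing to a rebuild. This is a
sketch rather than a verbatim transcript of my commands; the flags are
standard iperf3, but treat the parameters as approximate:

# on the receiver
% iperf3 -s

# on the sender: -C selects the congestion control algorithm, and
# -V (verbose) prints the "CPU Utilization" and snd/rcv_tcp_congestion
# lines seen above
% iperf3 -c <receiver> -t 20 -C cubic -V

# scheduler and tick rate currently in use:
% sysctl kern.sched.name kern.hz

# swapping ULE for 4BSD means a rebuild with "options SCHED_4BSD" in the
# kernel config; kern.hz can be overridden in /boot/loader.conf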
In terms of real-world bandwidth, I found that the combination of your
modified cc_cubic + rack gave the best overall throughput in a speedtest
context, although it's slower to reach its maximum throughput than cubic
alone. I'm still testing in a webdav/rsync context (cubic against
cubic+rack).

The next round of testing, after changing the scheduler, will be on a KVM
host with various *BSDs as guests. There may be a tradeoff of stability
against speed, I guess.
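
If anyone wants to try the same cubic+rack combination, this is roughly
what it takes on stable/14. It's a sketch from memory, so double-check
the option names against the handbook:

# the kernel needs to be built with the extra TCP stacks, i.e. with
#   makeoptions WITH_EXTRA_TCP_STACKS=1
#   options     TCPHPTS
# in the kernel config, then:
% kldload tcp_rack
% sysctl net.inet.tcp.functions_available     # "rack" should now be listed
% sysctl net.inet.tcp.functions_default=rack  # make it the default stack
% sysctl net.inet.tcp.cc.algorithm=cubic      # keep cubic as the CC

--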