Date: Sat, 28 Aug 2021 13:37:47 +1000
From: Peter Jeremy
To: Johannes Totz
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS on high-latency devices

On 2021-Aug-20 00:16:28 +0100, Johannes Totz wrote:
>Do you have geli included in those perf tests? Any difference if you
>leave it out?

Yes, I mentioned geli (and I also have IPSEC, which I forgot to mention).
I haven't tried taking them out, but dd(1) tests suggest they aren't a
problem.

>What's making the throughput slow? zfs issuing a bunch of small writes
>and then trying to read something (unrelated)? Is there just not enough
>data to be written to saturate the link?

At least from eyeballing gstat, there are basically no reads involved in
the zfs recv.  The problem seems to be that writes aren't evenly spread
across all the vdevs, combined with very long delays associated with
flushing snapshots.  I have considered instrumenting ggate{c,d} to see
if I can identify any issues.
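For concreteness, something along these lines is enough to watch it
happen (the '^ggate' filter and the 5-second interval are only
illustrative):

    # GEOM-level view of the providers backing the pool; -o also shows
    # "other" operations (BIO_FLUSH), -I sets the refresh interval.
    gstat -o -f '^ggate' -I 5s

    # Much the same picture from the ZFS side, broken down per vdev:
    zpool iostat -v 5

The per-vdev write columns are where the uneven spread shows up.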
>Totally random thought: there used to be a vdev cache (not sure if
>that's still around) that would inflate read requests to hopefully drag
>in more data that might be useful soon.

ZFS includes that functionality itself.

>Have you tried hastd?

I haven't but hastd also uses GEOM_GATE so I wouldn't expect
significantly different behaviour.

-- 
Peter Jeremy