To: freebsd-fs@freebsd.org
From: Johannes Totz
Subject: Re: ZFS on high-latency devices
Date: Fri, 20 Aug 2021 00:16:28 +0100

On 19/08/2021 10:37, Peter Jeremy wrote:
> I'm looking at backing up my local ZFS pools to a remote system
> without the remote system having access to the pool content. The
> options for the backup pools seem to be:
> a) Use OpenZFS native encryption, with ZFS running on the remote
>    system, backed up using "zfs send --raw".
> b) Run ZFS over geli locally, over geom_gate[1] to the remote system.
>
> The first approach removes RTT as an issue but requires that the
> local pools first be converted to native encryption - a process that
> seems to be generously described as "very difficult".
>
> The second approach removes the need to encrypt the local pool but
> is far more sensitive to RTT, and I've been unable to get reasonable
> throughput. The main problems I've found are:
> * Even with a quite high write aggregation limit, I still get lots
>   of relatively small writes.
> * Snapshot boundaries appear to wait for all queued writes to be
>   flushed.
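Replying inline. On (a): as far as I know you can't switch encryption
on in place; the usual route is a one-off local send/recv into a
freshly created encrypted dataset. Roughly like this (pool, dataset
and host names made up):

  # one-off: create an encrypted container; children inherit its keys
  zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/enc
  # re-write each existing dataset into the encrypted container
  zfs snapshot tank/data@migrate
  zfs send tank/data@migrate | zfs recv tank/enc/data
  # from then on, raw sends ship ciphertext only; the remote side
  # never needs a key loaded
  zfs send --raw tank/enc/data@migrate | \
      ssh backuphost zfs recv -u backup/data

That's tedious rather than difficult, I'd say; the real cost is
needing enough local space for a second copy of each dataset while it
migrates.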
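And just to make sure I picture (b) correctly, I assume the stacking
is geli on top of the gate device, something like this (device and
host names made up):

  # remote machine: export a disk via geom_gate
  echo "client.example.net RW /dev/da1" > /etc/gg.exports
  ggated
  # local machine: attach it, layer geli over it, pool on the .eli
  ggatec create -o rw -u 0 backuphost /dev/da1   # -> /dev/ggate0
  geli init -s 4096 /dev/ggate0
  geli attach /dev/ggate0
  zpool create backup /dev/ggate0.eli

With that stacking, every leaf vdev write is at least one round trip
to the remote machine, which would explain the RTT sensitivity.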
> I've found https://openzfs.org/wiki/ZFS_on_high_latency_devices but
> I can't get the procedure to work. "zfs send" of a zvol seems to
> bear very little resemblance to a "zfs send" of a "normal"
> filesystem. Sending a zvol, I can't get ndirty _down_ to the
> suggested 70-80%, whereas with (e.g.) my mail spool, I can't get
> ndirty _up_ to the suggested 70-80%. And most of the suggested
> adjustments are system-wide, so they are likely to adversely impact
> local ZFS performance.
>
> Does anyone have any suggestions as to a way forward? Either a
> reliable process to encrypt an existing pool, or a way to improve
> throughput doing "zfs recv" to a pool with a high RTT.

Do you have geli included in those perf tests? Any difference if you
leave it out? What's making the throughput slow? Is ZFS issuing a
bunch of small writes and then trying to read something (unrelated)?
Is there just not enough data to be written to saturate the link?

Totally random thought: there used to be a vdev cache (not sure if
that's still around) that would inflate read requests, in the hope of
dragging in more data that might be useful soon.

Have you tried hastd?
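If not: hastd replicates a local block device to a peer itself, so
the pool sits on a local /dev/hast/* provider and all reads stay
local. A minimal sketch (host, disk and address names made up):

  # /etc/hast.conf, the same file on both machines;
  # "remote" is the address of the other machine
  resource backup0 {
          on alpha {
                  local /dev/ada2
                  remote 192.0.2.20
          }
          on beta {
                  local /dev/da1
                  remote 192.0.2.10
          }
  }

  # on both machines:
  hastctl create backup0
  service hastd onestart
  # on the local machine (creates /dev/hast/backup0):
  hastctl role primary backup0
  # on the remote machine:
  hastctl role secondary backup0

geli and the pool then go on top of /dev/hast/backup0. If I remember
right, the default memsync replication acks a write once the
secondary has it in memory, so you still pay the RTT per write but
not the remote disk latency.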
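P.S. The knobs that wiki page plays with should, if I read it
correctly, map to these sysctls on FreeBSD (values purely
illustrative; check "sysctl -d" on your version). And yes, they are
all system-wide, which is exactly the problem you describe:

  sysctl vfs.zfs.dirty_data_max=4294967296         # dirty-data window
  sysctl vfs.zfs.vdev.aggregation_limit=16777216   # max merged write
  sysctl vfs.zfs.vdev.async_write_max_active=64    # writes in flight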