ZFS on high-latency devices

From: Peter Jeremy <peter@rulingia.com>
Date: Thu, 19 Aug 2021 09:37:39 UTC
I'm looking at backing up my local ZFS pools to a remote system
without the remote system having access to the pool content.  The
options for the backup pools seem to be:
a) Use OpenZFS native encryption on the local pools, with ZFS running on
   the remote system and the backups transferred using "zfs send --raw"
   (sketched below).
b) Run ZFS locally on top of geli, layered over geom_gate[1] to the
   remote system.
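
For concreteness, (a) would look something like the following (pool,
dataset and host names are made up, and it assumes the local datasets
are already encrypted):

    zfs snapshot -r tank/data@backup
    zfs send --raw -R tank/data@backup | \
        ssh backuphost zfs recv -u -d backuppool

Since the stream is raw, the remote system only ever stores ciphertext
and never needs the keys.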

The first approach removes RTT as an issue but requires that the local
pools first be converted to native encryption - a process that seems
to be generously defined as "very difficult".
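
As far as I can tell, the least-bad conversion route is to create a new
encrypted dataset and send/recv each existing dataset into it, which
needs enough free space for a second copy of the data.  Roughly (names
are placeholders, and I haven't actually done this end-to-end):

    zfs snapshot tank/data@migrate
    zfs create -o encryption=on -o keyformat=passphrase tank/enc
    zfs send tank/data@migrate | zfs recv -u tank/enc/data
    # verify, destroy the old dataset, then rename the new one into place

Datasets received below an encrypted parent inherit its encryption, so
the copy ends up encrypted without touching the originals.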

The second approach removes the need to encrypt the local pool, but it
is far more sensitive to RTT and I've been unable to get reasonable
throughput.  The main problems I've found are:
* Even with quite a high write aggregation limit, I still get lots of
  relatively small writes.
* Snapshot boundaries appear to wait for all queued writes to be flushed.
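
For reference, the layering I have in mind for (b) is roughly the
following (addresses, hostnames and device names are placeholders):

    # remote system: export a raw disk via geom_gate
    echo "10.0.0.2 RW /dev/ada1" >> /etc/gg.exports  # 10.0.0.2 = local box
    ggated

    # local system: attach it, layer geli on top, build the pool on that
    ggatec create -o rw -u 0 backuphost /dev/ada1    # -> /dev/ggate0
    geli init -s 4096 /dev/ggate0
    geli attach /dev/ggate0                          # -> /dev/ggate0.eli
    zpool create backuppool /dev/ggate0.eli

All the ZFS and geli work stays local and the remote system only ever
sees encrypted blocks, but every write has to cross the WAN RTT.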

I've found https://openzfs.org/wiki/ZFS_on_high_latency_devices but I
can't get the procedure to work.  "zfs send" of a zvol seems to bear
very little resemblance to a "zfs send" of a "normal" filesystem.
Sending a zvol, I can't get ndirty _down_ to the suggested 70-80%,
whereas with (e.g.) my mail spool, I can't get ndirty _up_ to the
suggested 70-80%.  And most of the suggested adjustments are
system-wide, so they are likely to adversely impact local ZFS
performance.
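
For context, the knobs involved are along these lines (FreeBSD sysctl
names from memory, so they may differ slightly between OpenZFS
versions, and the values are purely illustrative), all of them global:

    sysctl vfs.zfs.dirty_data_max=1073741824
    sysctl vfs.zfs.vdev.aggregation_limit=16777216
    sysctl vfs.zfs.vdev.async_write_max_active=10
    sysctl vfs.zfs.vdev.async_write_active_min_dirty_percent=10
    sysctl vfs.zfs.vdev.async_write_active_max_dirty_percent=60

What I'd really like is a way to scope that sort of tuning to just the
backup pool.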

Does anyone have any suggestions for a way forward?  Either a reliable
process for encrypting an existing pool, or a way to improve "zfs recv"
throughput to a pool behind a high-RTT link.

-- 
Peter Jeremy