zfs newbie
David Christensen
dpchrist at holgerdanske.com
Wed Sep 8 01:28:44 UTC 2021
On 9/7/21 3:17 PM, Doug Denault wrote:
>
> Following the default 12.2 zfs install I got one pool (zroot) and a
> dataset for each of the traditional mount points. So zfs list shows:
>
> NAME                 USED  AVAIL  REFER  MOUNTPOINT
> zroot                279G  6.75T    88K  /zroot
> zroot/ROOT          1.74G  6.75T    88K  none
> zroot/ROOT/default  1.74G  6.75T  1.74G  /
> zroot/tmp            176K  6.75T   176K  /tmp
> zroot/usr            277G  6.75T    88K  /usr
> zroot/usr/home       276G  6.75T   276G  /usr/home
> zroot/usr/ports       88K  6.75T    88K  /usr/ports
> zroot/usr/src        670M  6.75T   670M  /usr/src
> zroot/var           47.5M  6.75T    88K  /var
> zroot/var/audit       88K  6.75T    88K  /var/audit
> zroot/var/crash       88K  6.75T    88K  /var/crash
> zroot/var/log        820K  6.75T   820K  /var/log
> zroot/var/mail      46.3M  6.75T  46.3M  /var/mail
> zroot/var/tmp         88K  6.75T    88K  /var/tmp
>
> I had a consultant configure another server for us. He set up the disk
> array with one dataset, so zfs list on this system gives:
>
> NAME    USED  AVAIL  REFER  MOUNTPOINT
> zroot  2.65G  13.2T  2.62G  legacy
>
> From a sysadmin view I rather like the multiple datasets. Are there
> advantages to one over the other?
I have a SOHO LAN with one primary FreeBSD 12.2 server (CVS and Samba)
and various Windows, macOS, iOS, and Debian clients.
As another reader mentioned, you can set ZFS properties differently on
different datasets.
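For instance (a sketch; the dataset names are from your first listing and
the property choices are only illustrations, not recommendations):

    # no executables or setuid binaries under /tmp
    zfs set exec=off zroot/tmp
    zfs set setuid=off zroot/tmp
    # compress the ports tree
    zfs set compression=lz4 zroot/usr/ports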
You can also apply different disaster preparedness/recovery policies to
different datasets -- e.g., snapshots and replication.
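A minimal sketch of what I mean, assuming you care about /usr/home but
not /usr/ports, and assuming a second pool named "backup" (hypothetical):

    # snapshot and replicate only the datasets that matter
    zfs snapshot zroot/usr/home@2021-09-08
    zfs send zroot/usr/home@2021-09-08 | zfs receive backup/home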
However, more datasets means more work and more complexity. Few of the
standard ZFS CLI tools work recursively on nested datasets. For
example, how do you make a tree of 10 nested datasets read-only with
one shell command? Or, make them read-write? Or, replicate them to
another pool? Or, do today's backup replication job when datasets have
been added, removed, and/or renamed since yesterday's? Or, selectively
destroy old snapshots? Performing these use-cases by hand is tedious
and error-prone. Automating them is non-trivial. I would estimate the
system administration complexity of nested ZFS datasets as O(N*log(N)).
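To make the tedium concrete, the read-only case looks roughly like this --
a sketch, assuming a tree rooted at zroot/usr and no per-child overrides
to clean up (your tree and pool names will differ):

    zfs list -r -H -o name zroot/usr | xargs -n 1 zfs set readonly=on

Reversing it is the same pipeline with readonly=off; replication and
snapshot pruning across a changing set of datasets take real scripting.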
But, my primary comment on your ZFS listings is that you put root on a
6.75T pool (!) and your consultant put root on a 13.2T pool (!). It is
my practice to keep my OS instances small enough to fit onto a single
"16 GB" device, and to put my data on RAID in a file server. This
allows me to quickly, easily, and reliably take and restore raw binary
images of the OS devices. How are you going to back up and restore
your OS images?
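By raw images I mean something along these lines, with hypothetical
device and path names (ada0 for the OS disk, /backup for a mount from
the file server):

    # take a raw image of the small OS device
    dd if=/dev/ada0 of=/backup/server-ada0.img bs=1m
    # restore it later, e.g. after booting from a USB stick
    dd if=/backup/server-ada0.img of=/dev/ada0 bs=1m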
David