zfs newbie

Dan Langille dan at langille.org
Tue Sep 14 22:48:12 UTC 2021


— 
Dan Langille
http://langille.org/





> On Sep 14, 2021, at 6:42 PM, DTD <support at safeport.com> wrote:
> 
> On Tue, 14 Sep 2021, Dan Langille wrote:
> 
>> DTD wrote on 9/7/21 5:51 PM:
>>> Following the default 12.2 zfs install I got one pool (zroot) and a dataset for each of the traditional mount points. So zfs list shows:
>>> NAME                 USED  AVAIL  REFER  MOUNTPOINT
>>> zroot                279G  6.75T    88K  /zroot
>>> zroot/ROOT          1.74G  6.75T    88K  none
>>> zroot/ROOT/default  1.74G  6.75T  1.74G  /
>>> zroot/tmp            176K  6.75T   176K  /tmp
>>> zroot/usr            277G  6.75T    88K  /usr
>>> zroot/usr/home       276G  6.75T   276G  /usr/home
>>> zroot/usr/ports       88K  6.75T    88K  /usr/ports
>>> zroot/usr/src        670M  6.75T   670M  /usr/src
>>> zroot/var           47.5M  6.75T    88K  /var
>>> zroot/var/audit       88K  6.75T    88K  /var/audit
>>> zroot/var/crash       88K  6.75T    88K  /var/crash
>>> zroot/var/log        820K  6.75T   820K  /var/log
>>> zroot/var/mail      46.3M  6.75T  46.3M  /var/mail
>>> zroot/var/tmp         88K  6.75T    88K  /var/tmp
>>> I had a consultant configure another server for us. He set up the disk array with one dataset, so zfs list on this system gives:
>>> NAME    USED  AVAIL  REFER  MOUNTPOINT
>>> zroot  2.65G  13.2T  2.62G  legacy
>>> From a sysadmin's view, I rather like the multiple datasets. Are there advantages to one over the other?
>> 
>> I see no advantages to me in the single dataset.
>> 
>> What do you see from zpool status? I'm wondering if this is not running directly on hardware, e.g. a VM under VMware.
>> 
> Nope, no VM. I have three systems that I did a take-all-options zfs install on. I do not remember which one I posted, so here is a matching set:
> 
>> freebsd-version
> 12.1-RELEASE-p8
> 
>> zfs list
> NAME                 USED  AVAIL  REFER  MOUNTPOINT
> zroot               90.6G   801G    88K  /zroot
> zroot/ROOT          89.9G   801G    88K  none
> zroot/ROOT/default  89.9G   801G  89.9G  /
> zroot/tmp            156K   801G   156K  /tmp
> zroot/usr            723M   801G    88K  /usr
> zroot/usr/home      18.7M   801G  18.7M  /usr/home
> zroot/usr/ports       88K   801G    88K  /usr/ports
> zroot/usr/src        704M   801G   704M  /usr/src
> zroot/var           21.7M   801G    88K  /var
> zroot/var/audit       88K   801G    88K  /var/audit
> zroot/var/crash       88K   801G    88K  /var/crash
> zroot/var/log       21.3M   801G  21.3M  /var/log
> zroot/var/mail        88K   801G    88K  /var/mail
> zroot/var/tmp         88K   801G    88K  /var/tmp
> 
>> zpool status
>  pool: zroot
> state: ONLINE
>  scan: none requested
> config:
> 
>        NAME        STATE     READ WRITE CKSUM
>        zroot       ONLINE       0     0     0
>          mirror-0  ONLINE       0     0     0
>            ada0p3  ONLINE       0     0     0
>            ada1p3  ONLINE       0     0     0
> 
> errors: No known data errors


So two drives and all in one filesystem.

That's very odd. I have no idea why anyone would do that. You lose more than a few features doing it that way.
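
For instance, separate datasets let you set properties, quotas, and snapshots per mount point. A rough sketch, using the dataset names from the default layout above:

  zfs set quota=10G zroot/var/log        # keep runaway logs from eating the pool
  zfs set setuid=off zroot/tmp           # tighten /tmp without touching anything else
  zfs snapshot zroot/usr/home@nightly    # snapshot just the home directories

With one big dataset, every property and every snapshot applies to the whole system at once.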

Boot environments, for another.
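
With zroot/ROOT/default as its own dataset, bectl(8) can clone the running root into a new boot environment before an upgrade and roll back if it goes badly, without touching /usr/home or /var. Roughly (the BE name here is just an example):

  bectl create pre-upgrade       # clone the current boot environment
  bectl list                     # show the boot environments on the pool
  bectl activate pre-upgrade     # boot into it on the next reboot, if needed

When the whole system is one dataset mounted as legacy, there is no separate root to clone, so none of that works.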

