Dualboot and ZFS
Trond Endrestøl
Trond.Endrestol at fagskolen.gjovik.no
Tue Jan 16 12:12:02 UTC 2018
On Tue, 16 Jan 2018 18:28+0700, Victor Sudakov wrote:
> Trond Endrestøl wrote:
> >
> > I couldn't resist attempting a proof of concept, so here it is.
>
> Before I follow your steps, two comments:
> >
> > !!! Show the resulting disklabel !!!
> >
> > [script]# gpart show ada0s3
> > =>       0  67108864  ada0s3  BSD  (32G)
> >          0  58720256       1  freebsd-zfs   (28G)
> >   58720256   8388608       2  freebsd-swap  (4.0G)
>
> How funny! I did not even know that fstype in the disklabel can be "ZFS". I
> have only seen "swap" and "4.2BSD" so far.
>
> $ gpart add -t freebsd-zfs md0s1 && disklabel md0s1
> md0s1a added
> # /dev/md0s1:
> 8 partitions:
> #          size     offset    fstype   [fsize bsize bps/cpg]
>   a:       4095          0       ZFS
>   c:       4095          0    unused        0     0   # "raw" part, don't edit
> $
>
> [dd]
>
> >
> > !!! Create our zpool, YMMV !!!
> >
> > !!! Create our initial BE, YMMV !!!
>
> Do you know how to create a beadm-friendly zroot manually (like the one
> created automatically by bsdinstall)?
I have created my own recipe based on the guides published elsewhere,
including those on the FreeBSD wiki, and as usual I have applied
thoughts from my own lurid mind.
Have a look at my files at
https://ximalas.info/~trond/create-zfs/canmount/
I create the disk layout manually; see 00-create-gpart-layout-UEFI.txt
and 00-create-gpart-layout.txt for some ideas, or the sketch below.
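For reference, here's a minimal BIOS/GPT sketch in that spirit; the
disk name (ada0), the labels, and the sizes are assumptions for
illustration, not necessarily what those files use:

  # Assumed disk and sizes; adjust to your hardware.
  gpart create -s gpt ada0
  gpart add -t freebsd-boot -s 512k -l boot0 ada0
  gpart add -t freebsd-swap -s 4g -l swap0 ada0
  gpart add -t freebsd-zfs -l zfs0 ada0
  # Install the protective MBR and the ZFS-aware GPT boot code.
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0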
I use a SysV approach when creating the ZFS filesystem layout and
installing the system, i.e. lots and lots of environment variables.
See 01-create-zfs-layout.sh, 02-temp-mountpoints.sh,
03b-install-stable-9-10-11-or-head.sh, and 04-final-mountpoints.sh.
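In essence, a beadm-friendly zroot only needs an unmountable BE
container dataset with each BE underneath it, plus a bootfs property
pointing at the default BE. A minimal sketch, where the names zroot,
ada0p3, and $DESTDIR are assumptions (the scripts set all of this
through environment variables):

  # Pool and dataset names are assumed for illustration.
  zpool create -o altroot=$DESTDIR -O compress=lz4 -O atime=off \
      -m none zroot ada0p3
  zfs create -o mountpoint=none zroot/ROOT
  zfs create -o mountpoint=/ -o canmount=noauto zroot/ROOT/default
  zfs mount zroot/ROOT/default
  zpool set bootfs=zroot/ROOT/default zroot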
For special cases such as my mail server, I edited
01-create-zfs-layout.sh to suit the two pools: one for the system and
another for user data. All mail-related filesystems ended up in the
data pool.
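For the mail server that amounted to something like this; the pool and
dataset names here are made up:

  # Hypothetical second pool for user data.
  zpool create -m none data ada1p1
  zfs create -o mountpoint=/var/mail data/mail
  zfs create -o mountpoint=/home data/home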
Between steps 3 and 4, I edit various files, set the root password and
the timezone, and make sure sendmail's files in /etc/mail are up to
date, all from within chroot $DESTDIR.
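That step looks roughly like this, assuming $DESTDIR from the earlier
scripts:

  chroot $DESTDIR /bin/sh
  passwd root                    # set the root password
  tzsetup                        # pick the timezone
  cd /etc/mail && make aliases   # rebuild sendmail's aliases database
  exit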
I know some like to use snapshots and clones as a safety belt before
they upgrade their main BE.
I do the opposite: I create a snapshot and a clone, install the new
world and kernel into the clone, merge config files, update the pool's
bootfs property, and reboot into the new clone. To me, this saves time
while still leaving plenty of fallback BEs to boot from should I need
them.
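In outline, and with made-up BE names, the procedure is roughly:

  # Snapshot the running BE and clone it for the new world/kernel.
  zfs snapshot zroot/ROOT/default@pre-upgrade
  zfs clone zroot/ROOT/default@pre-upgrade zroot/ROOT/new
  mount -t zfs zroot/ROOT/new /mnt
  cd /usr/src
  make installkernel installworld DESTDIR=/mnt
  mergemaster -D /mnt            # merge config files
  zpool set bootfs=zroot/ROOT/new zroot
  shutdown -r now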
Running -CURRENT on some of my VMs has forced me to use my old clones
to recover from clang bugs, etc.
On VMs I restrict the number of snapshots/clones/BEs to three: the
current BE and the two previous ones, plus the snapshots that tie them
all together. Physical systems usually have more than enough storage,
and I clean up the long list of BEs about once a year (zfs promote,
zfs destroy -Rv).
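That is, something along these lines, again with names made up:

  # Make the BE to keep the head of the lineage, then prune old BEs
  # together with their snapshots and dependents.
  zfs promote zroot/ROOT/current
  zfs destroy -Rv zroot/ROOT/old-be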
Here's a good exercise on creating snapshots and clones, and how to
clean them up:
https://ximalas.info/2015/06/23/an-exercise-on-zfs-clones/
--
Trond.