ZFS on root, DR techniques.
Danny Carroll
fbsd at dannysplace.net
Thu Aug 26 10:52:09 UTC 2010
Hiya All,
I'm busy building a machine that uses ZFS only.
It's using GPT partitions (created with gpart) as vdevs, and I've been
able to create a mirrored ZFS root easily thanks to pjd@'s article at
http://blogs.freebsdish.org/pjd/2010/08/06/from-sysinstall-to-zfs-only-configuration/
This machine will eventually go into a data centre, so I am thinking a
little about disaster recovery.
With UFS it's quite trivial to do a dump/restore, and I'd like to have an
equally clear idea of how to do this with ZFS. I am sure smarter people
than I have figured this out, so feel free to chime in at any time and
point me at existing instructions rather than have me reinvent the ZFS DR
wheel (nothing is in the handbook yet).
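(For reference, the UFS routine I have in mind is roughly the following;
the device name and exact flags are just from memory:)
# full, live dump of the root filesystem, compressed:
dump -0Lauf - /dev/ada0p2 | gzip > /backup/root.dump.gz
# restore onto a freshly newfs'ed partition mounted at /mnt:
gzcat /backup/root.dump.gz | (cd /mnt && restore -rf -)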
Here are my initial thoughts - I've not had time to check whether this can
actually be done (I've never used a fixit boot before):
I have a (gzipped) file containing my daily DR snapshot, located at
"backuserver:/client-root-snapshot-daily.gz"
I have a fresh piece of hardware with my mirrored root disks ready to go.
I boot off the dvd1 iso and select fixit.
I use gpart to label the drives as per the original instructions:
dd if=/dev/zero of=/dev/ada0 count=79
gpart create -s GPT ada0
gpart add -b 34 -s 128 -t freebsd-boot ada0
gpart add -s 2g -t freebsd-swap -l swap0 ada0
gpart add -t freebsd-zfs -l system0 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
dd if=/dev/zero of=/dev/ada1 count=79
gpart create -s GPT ada1
gpart add -b 34 -s 128 -t freebsd-boot ada1
gpart add -s 2g -t freebsd-swap -l swap1 ada1
gpart add -t freebsd-zfs -l system1 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
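(Assuming gpart is actually there in the fixit environment, I'd double-check
the layout and labels before going any further:)
gpart show ada0 ada1
gpart show -l ada0 ada1     # -l shows the GPT labels (swap0/1, system0/1)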
I then re-create the swap mirror.
gmirror label -F -h -b round-robin swap /dev/gpt/swap0
gmirror insert -h -p 1 swap /dev/gpt/swap1
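(Presumably followed by a quick check that the mirror comes up:)
gmirror status swap     # both gpt/swap0 and gpt/swap1 should appear as components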
Then I create the ZFS pool as it was:
zpool create -O mountpoint=/mnt -O atime=off -O setuid=off -O canmount=off system /dev/gpt/system0
zpool attach system /dev/gpt/system0 /dev/gpt/system1
zfs create -o mountpoint=legacy -o setuid=on system/root
zpool set bootfs=system/root system
zfs create -o compress=lzjb system/tmp
chmod 1777 /mnt/tmp
zfs create -o canmount=off system/usr
zfs create -o setuid=on system/usr/local
zfs create -o compress=gzip system/usr/src
zfs create -o compress=lzjb system/usr/obj
zfs create -o compress=gzip system/usr/ports
zfs create -o compress=off system/usr/ports/distfiles
zfs create -o canmount=off system/var
zfs create -o compress=gzip system/var/log
zfs create -o compress=lzjb system/var/audit
zfs create -o compress=lzjb system/var/tmp
chmod 1777 /mnt/var/tmp
zfs create -o canmount=off system/usr/home
zfs create system/usr/home/pjd
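(At this point I'd expect the mountable datasets to be sitting under /mnt;
a quick listing should show whether the layout and mountpoints look right:)
zfs list -o name,mountpoint,canmount -r system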
Then I receive the file from the backup server:
ssh backuserver "gzcat client-root-snapshot-daily.gz" | zfs recv -F system
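(Before rebooting I'd probably sanity-check that the whole tree and the
locally-set properties came across:)
zfs list -r system
zfs get -r -s local all system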
Reboot and all is good.
There are a few questions I have....
Does the fixit image contain enough to be able to do this?
- can I enable networking?
- are zfs and gpart utils already there?
- if not, what about the livefs image?
Do I really need to recreate the whole ZFS tree again, or will the
snapshot do that?
Should I be backing up the ZFS properties of each ZFS fs so I know what
needs to be set on the newly created filesystems? Or is the snapshot
file enough?
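(By "backing up the properties" I mean something like the following, with
the output path just an example:)
zfs get -r -H -s local -o name,property,value all system > /backup/system-zfs-props.txt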
Is there a better way to do this?
I am very interested to hear what others think... Eventually I'd like
to submit whatever I figure out to the doc maintainers, perhaps for
inclusion in the handbook.
-D