HAST, ucarp, and ZFS
Freddie Cash
fjwcash at gmail.com
Mon Mar 1 20:02:14 UTC 2010
Perhaps it's just a misunderstanding on my part of the layering involved,
but I'm having an issue with the sample ucarp_up.sh script on the HAST wiki
page.
Here's the test setup that I have:
hast1:
  glabel 4x 2 GB virtual disks (label/disk01 --> label/disk04)
  hast.conf: create 4 resources (disk01 --> disk04, using the glabelled disks)
  zpool create hapool raidz1 hast/disk01 .. hast/disk04

hast2:
  glabel 4x 2 GB virtual disks (label/disk01 --> label/disk04)
  hast.conf: create 4 resources (disk01 --> disk04)
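
For reference, here's roughly what one resource stanza in my hast.conf looks
like (the hostnames and addresses below are placeholders, not the real ones):

  resource disk01 {
          on hast1 {
                  local /dev/label/disk01
                  remote 10.0.0.2
          }
          on hast2 {
                  local /dev/label/disk01
                  remote 10.0.0.1
          }
  }

...and the same again for disk02 through disk04.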
So far so good. On hast1 I have a working ZFS pool; I can create data,
filesystems, etc., and watch the network traffic as it syncs to hast2.
I can manually down hast1, switch hast2 to "primary", and import hapool.
I can create data, filesystems, etc. And I can bring hast1 back online,
set it to secondary, and watch it sync back.
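
Roughly, the manual failover I'm doing looks like this (typed from memory,
so treat it as a sketch rather than the exact commands I ran):

  # on hast1 (stepping down):
  zpool export hapool
  hastctl role secondary all

  # on hast2 (taking over):
  hastctl role primary all
  zpool import -f hapool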
Where I'm stuck is how to modify the ucarp_up.sh script to work with
multiple HAST resources. Do I just edit it to handle each of the 4 HAST
resources in turn, or am I missing something simple, like there only
being a single HAST resource? I'm guessing it's a simple "edit the script
to suit my setup" issue, but wanted to double-check.
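
My current guess is to just loop over the resources in ucarp_up.sh, along
these lines (a rough sketch of my local edit, not the wiki script itself;
the resource and pool names match my test setup above):

  #!/bin/sh
  # Promote every HAST resource, then import the pool.
  resources="disk01 disk02 disk03 disk04"
  pool="hapool"

  for res in ${resources}; do
          hastctl role primary ${res}
  done

  # Wait for the /dev/hast/* providers to appear before importing.
  for res in ${resources}; do
          while [ ! -c /dev/hast/${res} ]; do
                  sleep 1
          done
  done

  zpool import -f ${pool}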
The production server I want to use this with has 24 hard drives in it,
configured into multiple raidz2 vdevs as part of a single ZFS pool, which
will mean 24 separate HAST resources, if I understand things correctly.
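
If that's the case, I'd probably generate the resource list in the script
rather than hard-coding 24 names, e.g. (assuming the same diskNN naming
scheme as in my test setup):

  # Build "disk01 ... disk24" instead of listing the resources by hand.
  resources=$(jot -w disk%02d 24 1)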
--
Freddie Cash
fjwcash at gmail.com