ZFS HAST config preference
Daniel Kalchev
daniel at digsys.bg
Tue Apr 5 15:23:03 UTC 2011
This is more of a proof-of-concept question:
I am building a redundant cluster of blade servers and am toying with the
idea of using HAST and ZFS for the storage.
Blades will work in pairs, and each pair will provide various services,
from SQL databases to hosting virtual machines (jails and otherwise).
Each pair will use CARP for redundancy.
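For the failover address I have in mind the usual carp(4) interface setup; roughly like this (vhid, addresses and password below are just placeholders):

    # on the master blade
    ifconfig carp0 create
    ifconfig carp0 vhid 1 pass blade-secret 10.0.0.100/24

    # on the backup blade, same vhid, higher advskew
    ifconfig carp0 create
    ifconfig carp0 vhid 1 advskew 100 pass blade-secret 10.0.0.100/24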
My original idea was to set up the blades so that they run HAST on pairs of
disks, and run ZFS as a number of mirror vdevs on top of the HAST devices.
The ZFS pool will exist only on the master HAST node. Let's call this setup1.
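Roughly, I picture it like this (hostnames, addresses and device names are only placeholders): one HAST resource per disk, with the pool built from the /dev/hast providers on whichever node is primary:

    # /etc/hast.conf, identical on both blades
    resource disk0 {
            on blade-a {
                    local /dev/da0
                    remote 10.0.0.2
            }
            on blade-b {
                    local /dev/da0
                    remote 10.0.0.1
            }
    }
    resource disk1 {
            on blade-a {
                    local /dev/da1
                    remote 10.0.0.2
            }
            on blade-b {
                    local /dev/da1
                    remote 10.0.0.1
            }
    }

    # on both blades
    hastctl create disk0 && hastctl create disk1
    /etc/rc.d/hastd onestart

    # on the current master only
    hastctl role primary disk0 && hastctl role primary disk1
    zpool create tank mirror /dev/hast/disk0 /dev/hast/disk1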
Or, I could use ZFS volumes and run HAST on top of these. This means that
on each blade I will have a local ZFS pool. Let's call this setup2.
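Something like this is what I mean (pool/volume names and the size are arbitrary):

    # on each blade, a local pool built from that blade's own disks
    zpool create local0 mirror /dev/da0 /dev/da1
    zfs create -V 500G local0/hast0

    # /etc/hast.conf then points at the zvol instead of a raw disk
    resource shared0 {
            on blade-a {
                    local /dev/zvol/local0/hast0
                    remote 10.0.0.2
            }
            on blade-b {
                    local /dev/zvol/local0/hast0
                    remote 10.0.0.1
            }
    }

The primary node would then put a filesystem (or another pool) on /dev/hast/shared0.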
A third idea would be to have the blades completely diskless, boot from a
separate boot/storage server and mount filesystems or iSCSI volumes from it
as needed. HAST might not be necessary here. The ZFS pool will exist on the
storage server only. Let's call this setup3.
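On the storage server I imagine that would amount to carving zvols and filesystems out of one big pool and exporting them over iSCSI (istgt from ports, say) or NFS; names and sizes below are only placeholders:

    # on the storage server
    zpool create storage raidz2 da0 da1 da2 da3 da4 da5
    zfs create -V 200G storage/blade-a-disk0   # to be exported as an iSCSI LUN
    zfs create storage/blade-a-data            # or shared over NFS
    zfs set sharenfs=on storage/blade-a-data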
While setup1 is the most straightforward, it has some drawbacks:
- disks handled by HAST need to be either identical or have matching
partitions created (see the gpart sketch after this list);
- the 'spare' blade would do nothing, as its disk subsystem will be
unusable locally for as long as it is the HAST slave. As the blades are
quite powerful (4x8-core AMD), that would be wasteful, at least in the
beginning.
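For the first point I would probably not give HAST whole disks at all, but create identically sized, labeled partitions on each disk and point HAST at those; something like this (size and label are arbitrary):

    # on each blade, for every disk that will back a HAST resource
    gpart create -s gpt da0
    gpart add -t freebsd-zfs -l hast0 -s 900G da0
    # HAST then uses /dev/gpt/hast0 instead of the raw da0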
With setup2, I can get away with different-sized disks in each blade. All
blades can also be used for whatever additional processing; shared data
will be presented by HAST only to whichever node needs it, for the
"important" services. One drawback here:
- I can't just pull out one of the blades without first
stopping/transferring all of its services.
It seems that at a larger scale, setup3 would be best. I am not there yet,
although close (the storage server is missing).
HAST replication speed should not be an issue; there is a 10Gbit network
between the blade servers.
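Still, I plan to keep an eye on it; as far as I can tell a quick check on the primary is enough (resource name as in the sketches above):

    # 'dirty' staying near zero means the secondary keeps up with the writes
    hastctl status shared0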
Has anyone already set up something similar? What was the experience?
There were recently some bugs that sort of plagued setup1, but these
seem to be resolved now.
Daniel