HAST on ZFS with CARP failover

From: Robert Fitzpatrick <robert_at_webtent.org>
Date: Tue, 12 Mar 2024 21:03:11 UTC
I was looking for suggestions for this scenario. I have the following 
two servers to test this with in the lab:

Lenovo x3550 M5 with an M5210 RAID controller, 16 cores total (dual CPU), 32GB RAM
IBM x3550 M4 with an M5110 RAID controller, 12 cores total (dual CPU), 32GB RAM

I have had some of these M4 servers in production on Linux for years 
now, with mdadm, used only for KVM virtualization; I have needed to 
replace drives maybe 4 or 5 times across 2 of the 3 servers over 3-5 
years. Linux RAID always let me hot-swap by cloning the partition 
structure from another drive onto the replacement and re-inserting it, 
without issue except once, and that turned out to be a server problem. 
However, I have a lot of FreeBSD VMs and db servers using ZFS. I'm a 
huge fan and want to use ZFS more.

I am planning to use these two machines to test both bhyve as an 
alternative VM host and HAST for high-availability storage. The 
hardware RAID would be disabled in favor of JBOD; I believe that will 
be the only option with these controllers. I'll also compare emulated 
NVMe against VirtIO, as well as raw files versus zvols. These are the 
docs I've based my setup on...

https://klarasystems.com/articles/virtualization-showdown-freebsd-bhyve-linux-kvm/
https://forums.freebsd.org/threads/hast-and-zfs-with-carp-failover.29639/
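
For context, this is roughly the layout I have in mind for the HAST/CARP 
side, loosely following the forum thread above. It is only a sketch; the 
resource name, hostnames, addresses, disk device, and interface are 
placeholders, not a tested configuration:

    # /etc/hast.conf (identical on both nodes)
    resource tank0 {
            on hasta {
                    local /dev/da0
                    remote 10.0.0.2
            }
            on hastb {
                    local /dev/da0
                    remote 10.0.0.1
            }
    }

    # /etc/rc.conf on the intended master (the backup gets a higher advskew);
    # carp.ko loaded via carp_load="YES" in /boot/loader.conf
    hastd_enable="YES"
    ifconfig_em0="inet 10.0.0.1/24"
    ifconfig_em0_alias0="inet vhid 1 advskew 0 pass carppass alias 10.0.0.10/32"

    # one-time init on each node, then build the pool on the HAST device
    # (primary only)
    hastctl create tank0
    hastctl role primary tank0
    zpool create tank /dev/hast/tank0

    # on CARP failover, e.g. from a devd-triggered script on the new master
    hastctl role primary tank0
    zpool import -f tank

The idea is that the pool only ever lives on whichever node is currently 
the CARP master and HAST primary.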

So, my questions...

Would there be any reason not to use virtual machines as the HAST hosts, 
instead of using direct ZFS pools?
Would this setup using JBOD disks be OK for production?
If I use ZFS for storage, is it more beneficial to use one pool for each 
VM raw file or one pool for all raw files? I would assume separate pools 
to enable snapshots per VM raw disk, or is a dataset per guest within 
one pool enough? (See the sketch after these questions.)
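
To make that last question concrete, this is the single-pool layout I 
mean: one pool with a child dataset (holding a raw file image) or a zvol 
per guest. Just a sketch; the pool and guest names are placeholders:

    zfs create tank/vm
    zfs create tank/vm/guest1                  # guest1's raw file image lives here
    zfs create -V 40G tank/vm/guest2           # or a 40G zvol per guest
    zfs snapshot tank/vm/guest1@pre-upgrade    # covers only guest1's dataset
    zfs snapshot tank/vm/guest2@pre-upgrade

My uncertainty is whether that gives me the same per-VM snapshot 
granularity as separate pools would, or whether there is some other 
reason to split the pools.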

Thanks for any suggestions, recommendations, or guidance.

-- 
Robert