ZFS l2arc and HAST ? newbie question
Thomas Steen Rasmussen
thomas at gibfest.dk
Tue Jun 15 13:21:47 UTC 2010
Hello list,
I am playing with HAST in order to build some redundant storage
for a mailserver, using ZFS as the filesystem.
I have the following zpool layout before starting the HAST experiments:
        NAME              STATE     READ WRITE CKSUM
        tank              ONLINE       0     0     0
          raidz2          ONLINE       0     0     0
            label/hd4     ONLINE       0     0     0
            label/hd5     ONLINE       0     0     0
            label/hd6     ONLINE       0     0     0
            label/hd7     ONLINE       0     0     0
        logs              ONLINE       0     0     0
          mirror          ONLINE       0     0     0
            label/ssd0s1  ONLINE       0     0     0
            label/ssd1s1  ONLINE       0     0     0
        cache
          label/ssd0s2    ONLINE       0     0     0
          label/ssd1s2    ONLINE       0     0     0
As I understand it, to accomplish this with HAST I will need to make a
HAST resource for each physical disk, like so:
        NAME              STATE     READ WRITE CKSUM
        tank              ONLINE       0     0     0
          raidz2          ONLINE       0     0     0
            hast/hahd4    ONLINE       0     0     0
            hast/hahd5    ONLINE       0     0     0
            hast/hahd6    ONLINE       0     0     0
            hast/hahd7    ONLINE       0     0     0
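If that is the right approach, I imagine the /etc/hast.conf entries would
look roughly like this, with one resource block per disk (the hostnames and
addresses below are made up for illustration, and this is untested):

```
resource hahd4 {
        on filer1 {
                local /dev/label/hd4
                remote 172.16.0.2
        }
        on filer2 {
                local /dev/label/hd4
                remote 172.16.0.1
        }
}
```

and then, once hastd is running and this node is primary, the pool would be
built from /dev/hast/hahd4 and friends instead of the labels directly.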
But what about slog and cache devices, currently on SSDs for
performance reasons? It doesn't really make sense to synchronize
a cache disk over the network, does it?
Could I build the zpool with the SSD disks directly (without
HAST), and would ZFS survive an export/import on the other host,
when the cache disks are suddenly different? I am thinking cache
only here, not slog.
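If local (non-HAST) cache devices turn out to be workable, I imagine the
failover on the other host would involve something like this (using the
same labels as in my pool above; completely untested on my part):

```shell
# On the new primary, after promoting the HAST resources:
zpool import tank

# The old host's cache devices should show up as unavailable,
# since cache vdevs are tracked by GUID, not by name.
# Drop them and add this host's local SSD partitions instead:
zpool remove tank label/ssd0s2 label/ssd1s2
zpool add tank cache label/ssd0s2 label/ssd1s2
```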
Do SSD l2arc / slog devices even make any sense when I am "deliberately"
slowing down the filesystem with network redundancy anyway?
Oh, and are there any problems using labels for HAST devices? My
controller likes to give new device names to disks now and then,
and it has been a blessing to use labels instead of device names,
so I'd like to continue doing that when using HAST.
If needed, any testing on my part will unfortunately have to wait a
couple of days for the MFC of the HAST fix from yesterday, as the SEQ
issue is preventing me from further experiments with HAST for now.
Thank you for any input, and _THANK YOU_ for the work on both ZFS
and HAST, their combined awesomeness is reaching epic proportions.
Best regards
Thomas Steen Rasmussen