ZFS RaidZ2 with 24 drives?
Solon Lutz
solon at pyro.de
Tue Dec 15 23:53:17 UTC 2009
> I deployed using the two configurations you see above. Both machines
> have a pair of Areca 1231ML RAID controllers with super-sized BBWC
> (battery backed write cache). On back01, each controller presents a 12-
> disk RAID-5 array and ZFS concatenates them into the zpool you see
> above. On back02, the RAID controller is configured in JBOD mode and
> disks are pooled as shown.
Why concatenate them into one pool and give up the redundancy?
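To make the distinction concrete, here is a sketch of the two layouts (device names da0/da1 are hypothetical stand-ins for the two controller-presented 12-disk RAID-5 arrays):

```shell
# Concatenation, as on back01: ZFS stripes across the two arrays
# with no ZFS-level redundancy. A checksum error can be detected
# but not repaired by ZFS itself.
zpool create tank da0 da1

# Mirroring the two arrays instead keeps a second copy of every
# block, so ZFS can self-heal, at the cost of half the capacity.
zpool create tank mirror da0 da1
```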
I have the same setup: Areca 24-port RAID6 (24x 500GB)

        NAME        STATE     READ WRITE CKSUM
        temp        ONLINE       0     0    24
          da0       ONLINE       0     0    48
And it very nearly killed itself after 28 months of flawless duty...
All went fine until four drives disconnected themselves from the Areca
due to faulty SATA cables. This crashed the Areca in such a way that I
had to disconnect the battery module from the controller in order to
get it to initialize during boot-up.
Cache gone - ZFS unable to mount the 10TB pool - scrub failed - I/O errors.
This was three months ago, and if I hadn't found an extremely skilled
person who was able to manually find and distinguish between good and
corrupted metadata sets, replicate them in their proper spots, and zero
out corrupt transaction IDs, I would have lost 10TB of data.
(No backups - too expensive.)
Why do you use JBOD? You can configure a pass-through for all drives,
explicitly degrading the Areca to a dumb SATA controller...
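With pass-through in place, the redundancy can live in ZFS rather than in the controller. A minimal sketch, assuming the 24 drives then appear as da0..da23 (hypothetical names):

```shell
# Three 8-disk raidz2 vdevs: each vdev tolerates two drive
# failures, and ZFS can repair checksum errors from parity
# because it sees the individual disks.
zpool create tank \
    raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
    raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
    raidz2 da16 da17 da18 da19 da20 da21 da22 da23
```

Smaller vdevs also keep resilver times shorter than one wide 24-disk group would.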
Best regards,
Solon Lutz
+-----------------------------------------------+
| Pyro.Labs Berlin - Creativity for tomorrow |
| Wasgenstrasse 75/13 - 14129 Berlin, Germany |
| www.pyro.de - phone + 49 - 30 - 48 48 58 58 |
| info at pyro.de - fax + 49 - 30 - 80 94 03 52 |
+-----------------------------------------------+
More information about the freebsd-fs mailing list