ZFS w/failing drives - any equivalent of Solaris FMA?
Freddie Cash
fjwcash at gmail.com
Fri Sep 12 17:12:31 UTC 2008
On September 12, 2008 09:32 am Jeremy Chadwick wrote:
> For home use, sure. Since most home/consumer systems do not include
> hot-swappable drive bays, rebooting is required. Although more and
> more consumer motherboards are offering AHCI -- which is the only
> reliable way you'll get that capability with SATA.
>
> In my case with servers in a co-lo, it's not acceptable. Our systems
> contain SATA backplanes that support hot-swapping, and it works how it
> should (yank the disk, replace with a new one) on Linux -- there is no
> need to do a bunch of hoopla like on FreeBSD. On FreeBSD, with that
> hoopla, you also take the risk of inducing a kernel panic. That risk
> does not sit well with me, but thankfully I've only been in that
> situation (replacing a bad disk + using hot-swapping) once -- and it did work.
Hrm, is this with software RAID or hardware RAID?
With our hardware RAID systems, the process has always been the same,
regardless of which OS (Windows Server 2003, Debian Linux, FreeBSD) is on
the system:
- go into RAID management GUI, remove drive
- pull dead drive from system
- insert new drive into system
- go into RAID management GUI, make sure it picked up new drive and
started the rebuild
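For what it's worth, those GUI steps usually have a command-line
equivalent that can be scripted. A rough sketch, assuming a 3ware
controller driven with tw_cli (controller c0, unit u0, failed drive on
port p5 -- those names are placeholders, and the exact syntax varies by
vendor and firmware):

    # show the controller, its units, and the state of each port
    tw_cli /c0 show

    # remove the failed drive on port 5, then physically pull it
    tw_cli /c0/p5 remove

    # after inserting the new drive, rescan so the controller sees it
    tw_cli /c0 rescan

    # start the rebuild of unit 0 onto the new drive on port 5
    tw_cli /c0/u0 start rebuild disk=5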
We've been lucky so far, and not had to do any drive replacements on our
non-ZFS software RAID systems (md on Debian, gmirror on FreeBSD). I'm
not looking forward to a drive failing, as these systems have
non-hot-pluggable SATA setups.
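When one of them does die, the replacement should look roughly like the
following. A minimal sketch, assuming a gmirror named gm0 on FreeBSD with
the replacement disk at /dev/ad6, and an md array /dev/md0 on Debian with
the failed member at /dev/sdb1 (all device names are placeholders):

    # FreeBSD / gmirror: drop the dead component, rebuild onto the new disk
    gmirror status gm0
    gmirror forget gm0
    gmirror insert gm0 /dev/ad6

    # Debian / md: mark the member failed, remove it, then add the new one
    mdadm /dev/md0 --fail /dev/sdb1
    mdadm /dev/md0 --remove /dev/sdb1
    mdadm /dev/md0 --add /dev/sdb1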
On the ZFS systems, we just "zpool offline" the drive, physically replace
it, and "zpool replace" it. On one system, this was done via a
hot-pluggable SATA backplane; on another, it required a reboot.
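Concretely, that is something like the following. A minimal sketch,
assuming a pool named tank with the failing disk at da2 (placeholder
names; the replacement shows up at the same device node here):

    # take the failing disk out of service
    zpool offline tank da2

    # ... physically swap the disk (hot-swap, or shut down and reboot) ...

    # resilver onto the replacement and watch the progress
    zpool replace tank da2
    zpool status tank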
--
Freddie Cash
fjwcash at gmail.com