ZFS pool faulted (corrupt metadata) but the disk data appears ok...
Ben RUBSON
ben.rubson at gmail.com
Fri Feb 2 22:34:13 UTC 2018
On 02 Feb 2018 21:48, Michelle Sullivan wrote:
> Ben RUBSON wrote:
>
>> So disks died because of the carrier, as I assume the second unscathed
>> server was OK...
>
> Pretty much.
>
>> Heads must have scratched the platters, but they should have been
>> parked, so... Really strange.
>
> You'd have thought... though 2 of the drives look like it was wear and
> tear issues (the 2 not showing red lights), just not picked up by the
> periodic scrub.... Could be that the recovery showed that one up... you
> know - how you can have an array working fine, but one disk dies, then
> others fail during the rebuild because of the extra workload.
Yes... To try to mitigate this, when I add a new vdev to a pool, I spread
some of the new disks among the existing vdevs, and construct the new vdev
from the remaining new disk(s) plus the disks retrieved from the other
vdevs. Thus, when possible, I avoid vdevs whose disks all have the same
runtime.
However, I only use mirrors; applying this with RAID-Z could be a little
trickier...
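A minimal sketch of that shuffling, in Python (disk names and the exact
swap policy are my own illustration, not a prescribed procedure; the
actual on-disk moves would be done with zpool replace / zpool attach /
zpool add):

```python
def mix_new_vdev(existing_vdevs, new_disks):
    """Spread new disks among existing mirror vdevs.

    existing_vdevs: list of lists of disk names (each inner list one mirror).
    new_disks: list of disk names being added to the pool.
    Returns (updated_vdevs, new_vdev): one new disk is swapped into each
    existing vdev (keeping at least one new disk back), and the displaced
    old disks plus the remaining new disk(s) form the new vdev, so no vdev
    ends up with all disks of the same runtime.
    """
    # Work on copies so callers keep their originals.
    vdevs = [list(v) for v in existing_vdevs]
    new_vdev = list(new_disks)
    # Swap into as many existing vdevs as we can while keeping
    # at least one brand-new disk for the new vdev itself.
    swaps = min(len(vdevs), len(new_disks) - 1)
    for i in range(swaps):
        displaced = vdevs[i].pop(0)       # an old disk leaves this vdev
        vdevs[i].append(new_vdev.pop(0))  # a new disk takes its place
        new_vdev.append(displaced)        # displaced disk joins the new vdev
    return vdevs, new_vdev
```

With two existing mirrors and a new two-disk mirror to add, this yields
one mixed old/new pair in place of an all-new vdev.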
Ben
More information about the freebsd-fs mailing list