Re: Does a failed separate ZIL disk mean the entire zpool is lost?

From: Allan Jude <allanjude_at_freebsd.org>
Date: Mon, 09 Sep 2024 17:31:44 UTC
As the last person mentioned, you should be able to import with the -m 
flag, and only lose about 5 seconds' worth of writes.

The pool is already partially imported at boot by other mechanisms; you 
may need to disable that partial import at boot so that you can do the 
manual import.
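For the archives, the recovery steps might look roughly like this sketch. The pool name comes from the output quoted below; the zpool.cache path and the replacement device name are assumptions and should be verified on the actual system first:

```shell
# Sketch only -- verify paths and device names on your own system.

# 1. Prevent the automatic (partial) import at next boot, e.g. by moving
#    the cached pool state aside (path varies: /boot/zfs/zpool.cache on
#    older FreeBSD, /etc/zfs/zpool.cache on newer installs):
mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.bak

# 2. After rebooting, import manually, telling ZFS the log device is
#    missing and may be ignored:
zpool import -m clustor2

# 3. Remove the failed log vdev (by name or GUID, as shown by
#    "zpool status"), then add the replacement SSD as a new log device:
zpool remove clustor2 <failed-log-device-guid>
zpool add clustor2 log /dev/ada9p1   # hypothetical replacement device
```

The -m flag tells zpool import to proceed even though a log device is missing; any transactions that were only in the failed ZIL are discarded, which is where the few seconds of lost writes come from.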

On 2024-09-09 12:20 p.m., infoomatic wrote:
> did you use two mirrored ZIL devices?
>
> You can try "zpool import -m", but you will probably be confronted with
> some errors. You will probably lose the data the ZIL had not yet
> committed, but most of the data in your pool should still be there.
>
>
> On 09.09.24 17:51, andy thomas wrote:
>> A server I look after had a 65TB ZFS RAIDz1 pool with 8 x 8TB hard disks
>> plus one hot spare and separate ZFS intent log (ZIL) and L2ARC cache
>> disks that used a pair of 256GB SSDs. This ran really well for 6 years
>> until 2 weeks ago, when the main cooling system in the data centre where
>> it was installed failed and the backup cooling system failed to start 
>> up.
>>
>> The upshot was the ZIL SSD went short-circuit across its power
>> connector, shorting out the server's PSUs and shutting down the server.
>> After replacing the failed SSD and verifying all the spinning hard disks
>> and the cache SSD are undamaged, attempts to import the pool fail with
>> the following message:
>>
>> NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH   ALTROOT
>> clustor2      -      -      -        -         -      -      -      -  UNAVAIL  -
>>
>> Does this mean the pool's contents are now lost and unrecoverable?
>>
>> Andy
>>
>