ZFS: Suspended Pool due to allegedly uncorrectable I/O error

From: Pamela Ballantyne <boyvalue_at_gmail.com>
Date: Mon, 19 Aug 2024 21:19:50 UTC
Hi,

This is long, so here's the TL;DR: ZFS suspended a pool for presumably good reasons, but on reboot there didn't seem to be any good reason for it.

As background, I'm an early ZFS adopter. I have a remote server that has been running ZFS continuously, 24x7, since late 2010, and I also use ZFS on my home machines. While I don't claim to be a ZFS expert, I've managed to handle the various issues that have come up over the years and haven't had to ask the experts for help.

But now I am completely baffled and would appreciate any help, advice,
pointers, links, whatever.

On Sunday morning, 08/11, I upgraded the server from 12.4-RELEASE-p9 to 13.3-RELEASE-p5. The upgrade went smoothly; there were no problems, and the server worked flawlessly post-upgrade.

On Thursday evening, 8/15, the server became unreachable. It would still respond to pings to its IP address, but that was it. I used to be able to access the server via IPMI, but that ability disappeared several company mergers ago. The current NOC staff sent me a screenshot of the console output, which showed repeated messages saying:

"Solaris: WARNING: Pool 'zroot' has encountered an uncorrectable I/O
failure and has been suspended."

There had been no warnings in the log files, and nothing from the S.M.A.R.T. monitoring either.

It's a simple mirrored setup with just two drives, so I suspected a catastrophic hardware failure. Maybe the HBA had failed (this is on a SuperMicro Blade server), or both drives had managed to die at the same time.

Without any way to log in remotely, I requested a reboot. The server rebooted without errors, and I could ssh into my account and poke around. Everything was normal. There were no log entries related to the crash. I realize that post-crash there would have been no filesystem to write to, but there was still nothing leading up to it - no hardware or disk-related messages of any kind. The only sign of any problem I could find was two checksum errors listed on just one of the drives in the mirror when I ran zpool status.
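
In case it helps, this is roughly how I was inspecting the pool (the pool name is zroot; this is from memory, so treat it as a sketch rather than exact output):

    zpool status -v zroot    # mirror layout, per-device READ/WRITE/CKSUM counters, any files with errors
    zpool history zroot      # sanity check that nothing unusual had been run against the pool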

I ran a scrub, which completed without any problems or errors. About 30 minutes after the scrub, the two checksum errors disappeared on their own, without my clearing them manually. I've also run some drive tests, and both drives pass with flying colors. It's now been a few days, and the system has been performing flawlessly.
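
For completeness, this is roughly what I ran for the scrub and the drive tests (ada0/ada1 below are stand-ins for the actual mirror members):

    zpool scrub zroot             # start a scrub of the pool
    zpool status zroot            # watch scrub progress and see the final result
    smartctl -a /dev/ada0         # dump SMART attributes and the drive's error log
    smartctl -t long /dev/ada0    # kick off a long self-test (repeated for ada1)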

So, I am completely flummoxed. I am trying to understand why the pool was
suspended when it looks like
something ZFS should have easily handled. I've had complete drive failures,
and ZFS just kept on going.
Is there any bug or incompatibility in 13.3-p5?  Is this something that
will recur on each full moon?

So thanks in advance for any advice, shared experiences, or whatever you
can offer.

Best,
Pammy