ASC/ASCQ Review

From: Warner Losh <imp@bsdimp.com>
Date: Thu, 13 Jul 2023 19:14:20 UTC
Greetings,

I've been looking closely at failed drives for $WORK lately. I've noticed
that a lot of errors that sound like they should be fatal have SS_RDEF
(the default 'retry' recovery action) set on them.
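
To make that concrete, here's a paraphrase (not an exact quote) of how the
entries in CAM's sense-code table in sys/cam/scsi/scsi_all.c are laid out.
The ASC/ASCQ values and descriptions are the standard SPC ones, but the
struct layout and flag values below are simplified for illustration:

    /*
     * Simplified sketch of CAM's sense-code table (cf. sys/cam/scsi/scsi_all.c).
     * SS_RDEF means "use the default recovery action", i.e. retry.  The flag
     * values and struct layout here are illustrative, not the kernel's own.
     */
    #include <stdint.h>

    #define SS_RDEF  0x01   /* default action: retry, then error out */
    #define SS_FATAL 0x02   /* fail the command immediately */

    struct asc_table_entry {
            uint8_t     asc;    /* Additional Sense Code */
            uint8_t     ascq;   /* Additional Sense Code Qualifier */
            uint32_t    action; /* recovery action flags */
            const char  *desc;
    };

    static const struct asc_table_entry asc_table[] = {
            /* All of these sound fatal, yet all retry by default. */
            { 0x0C, 0x00, SS_RDEF, "Write error" },
            { 0x11, 0x00, SS_RDEF, "Unrecovered read error" },
            { 0x5D, 0x00, SS_RDEF, "Failure prediction threshold exceeded" },
    };

"Unrecovered read error" and "Failure prediction threshold exceeded" both
sound terminal, yet both get the default retry action.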

What's the process for evaluating whether those error codes are worth
retrying? There are several errors that we seem to be seeing (based on a
preliminary read of the data) before the drive gives up the ghost
altogether. For those cases, I'd like to post more specific lists. Should I
do that here?

Independent of that, I may want a more aggressive 'fail fast' policy for my
workload than is appropriate in general (we have a lot of data that's a
copy of a copy of a copy, so if we lose it, we don't care: we'll just
delete any files we can't read and get on with life, though I know others
will have a more conservative attitude towards data that might be precious
and unique). I can set the number of retries lower (e.g., via the
kern.cam.da.retry_count sysctl), and I can do other hacks that tell the
disk itself to fail faster, but I think part of the solution is going to
have to be failing immediately on some sense-code/ASC/ASCQ tuples that we
wouldn't want to fail on upstream or in the general case. I was thinking of
identifying those and creating a 'global quirk table', applied after the
drive-specific quirk table, that would let $WORK override the defaults
while letting others keep the current behavior (a sketch of the idea is
below). IMHO, it would be better to keep these site overrides separate
rather than putting them in the global data, for ease of tracking
upstream...
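
To illustrate what I mean by a 'global quirk table', here's a hypothetical
sketch. None of these names exist in the tree today, and the real thing
would need to hook into the sense-action lookup in scsi_all.c:

    /*
     * Hypothetical site-local override table, consulted after the
     * drive-specific quirk table.  A match replaces the default action for
     * a sense-key/ASC/ASCQ tuple, e.g. turning "retry" into "fail fast".
     * Every name here is made up for illustration.
     */
    #include <stddef.h>
    #include <stdint.h>

    #define SS_FATAL 0x02   /* illustrative flag, as in the sketch above */

    struct sense_override {
            uint8_t     sense_key;
            uint8_t     asc;
            uint8_t     ascq;
            uint32_t    action; /* action to use instead of the default */
    };

    /* $WORK's fail-fast policy: don't retry unrecovered read errors. */
    static const struct sense_override site_overrides[] = {
            { 0x03 /* MEDIUM ERROR */, 0x11, 0x00, SS_FATAL },
    };

    /* Runs after drive-specific quirks, so it only overrides defaults. */
    static const struct sense_override *
    sense_override_lookup(uint8_t key, uint8_t asc, uint8_t ascq)
    {
            size_t i;

            for (i = 0; i < sizeof(site_overrides) /
                sizeof(site_overrides[0]); i++) {
                    if (site_overrides[i].sense_key == key &&
                        site_overrides[i].asc == asc &&
                        site_overrides[i].ascq == ascq)
                            return (&site_overrides[i]);
            }
            return (NULL);
    }

Keeping that table in its own file would make it trivial to carry as a
local patch while the default table keeps tracking upstream.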

Is that clear, or should I give concrete examples?

Comments?

Warner