add BIO_NORETRY flag, implement support in ata_da, use in ZFS vdev_geom
Scott Long
scottl at samsco.org
Sat Nov 25 10:54:06 UTC 2017
> On Nov 24, 2017, at 10:17 AM, Andriy Gapon <avg at FreeBSD.org> wrote:
>
>
>>> IMO, this is not optimal. I'd rather pass BIO_NORETRY to the first read, get
>>> the error back sooner, and try the other disk sooner. Only if I knew that there
>>> were no other copies to try would I use the normal read with all the retrying.
>>>
>>
>> I agree with Warner that what you are proposing is not correct. It weakens the
>> contract between the disk layer and the upper layers, making it less clear who is
>> responsible for retries and less clear what “EIO” means. That contract is already
>> weak due to poor design decisions in VFS-BIO and GEOM, and Warner and I
>> are working on a plan to fix that.
>
> Well... I do realize now that there is some problem in this area; both you and
> Warner mentioned it. But knowing that it exists is not the same as knowing what
> it is :-)
> I understand that it could be rather complex and not easy to describe in a short
> email…
>
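For concreteness, here is roughly how I read the ZFS side of the proposal. This is
only a sketch: BIO_NORETRY is the flag being proposed (it does not exist in the
tree today, and the value below is purely illustrative), and vdev_has_other_copies()
is a made-up stand-in for ZFS's knowledge of its own redundancy.

#include <sys/param.h>
#include <sys/bio.h>
#include <geom/geom.h>

/* Proposed flag and hypothetical helper; neither exists today. */
#define	BIO_NORETRY	0x0800		/* value purely illustrative */
static int	vdev_has_other_copies(struct bio *bp);

/* Sketch only, not actual vdev_geom code. */
static void
vdev_geom_read_sketch(struct g_consumer *cp, struct bio *bp)
{
	if (vdev_has_other_copies(bp)) {
		/*
		 * Ask the lower layers to fail fast; on EIO, ZFS would
		 * immediately reissue the read to another copy.
		 */
		bp->bio_flags |= BIO_NORETRY;
	}
	/*
	 * If there are no other copies, the bio goes down without the
	 * flag and gets the normal retry behavior.
	 */
	g_io_request(bp, cp);
}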
There are too many questions to ask, so I will do my best to keep the conversation
logical. First, how do you propose to distinguish between an EIO caused by a lengthy
series of timeouts and an EIO returned immediately by the disk hardware?
CAM has an extensive table-driven error recovery protocol whose purpose is to
decide whether or not to do retries based on hardware state information that is
not made available to the upper layers. Do you have a test case that demonstrates
the problem that you're trying to solve? Maybe the error recovery table is wrong
and you're encountering a case that should not be retried. If that's what's going on,
we should fix CAM instead of inventing a new workaround.
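To make that ambiguity concrete: by the time the completion reaches the consumer,
the two situations below are indistinguishable. This is a simplified illustration of
the problem, not actual CAM code; the function and its timed_out argument are
invented.

#include <sys/param.h>
#include <sys/errno.h>
#include <sys/bio.h>

/*
 * Simplified illustration, not actual CAM code: two very different
 * failure modes collapse into the same completion status.
 */
static void
periph_done_sketch(struct bio *bp, int timed_out)
{
	if (timed_out) {
		/* Tens of seconds of timeouts and driver-level retries. */
		biofinish(bp, NULL, EIO);
	} else {
		/* An immediate, definitive error reported by the device. */
		biofinish(bp, NULL, EIO);
	}
	/* Either way, the consumer sees nothing but bio_error == EIO. */
}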
Second, what about disk subsystems that do retries internally, out of the control
of the FreeBSD driver? This would include most hardware RAID controllers.
Should what you are proposing only work for a subset of the kinds of storage
systems that are available and in common use?
Third, let's say that you run out of alternate copies to try; as you stated
originally, that will force you to retry the copies that had already returned EIO.
How will you know when you can retry? How will you know how many times to
retry? How will you know that a retry is even possible? Should it be possible to
cancel the retries?
Why is overloading EIO so bad? brelse() will call bdirty() when a BIO_WRITE
command has failed with EIO, and calling bdirty() has the effect of retrying the I/O.
This disregards the fact that disk drivers return EIO only when they've decided
that the I/O cannot be retried. The mechanism has no termination condition for the
retries and will endlessly retry the I/O in vain; I've seen this quite frequently. It
also disregards the fact that I/O marked as B_PAGING can't be retried in this fashion
and will trigger a panic. Because we pretend that EIO can be retried, we are left with
a system that is very fragile when I/O actually does fail. Instead of adding
more special cases and blurred lines, I want to go back to enforcing strict
contracts between the layers, forcing the core parts of the system to respect
those contracts and handle errors properly instead of just retrying and
hoping for the best.
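The shape of the logic I'm objecting to is roughly the following. This is a heavily
simplified paraphrase, not the real code; see brelse() and bdirty() in
sys/kern/vfs_bio.c for the actual conditions.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bio.h>
#include <sys/buf.h>

/* Heavily simplified paraphrase of brelse()'s failed-write handling. */
static void
brelse_error_path_sketch(struct buf *bp)
{
	if (bp->b_iocmd == BIO_WRITE && (bp->b_ioflags & BIO_ERROR) != 0 &&
	    (bp->b_flags & B_INVAL) == 0) {
		/*
		 * The EIO is treated as transient: clear the error and mark
		 * the buffer dirty again, so the same write will be issued
		 * again later.  Nothing bounds how many times this can
		 * happen, even though the driver has already decided that
		 * the I/O is not retryable.  As noted above, B_PAGING
		 * buffers cannot be redirtied this way and end up in a
		 * panic instead.
		 */
		bp->b_ioflags &= ~BIO_ERROR;
		bdirty(bp);
	}
}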
> But then, this flag is optional: it's off by default, and no one is forced to
> use it. If it's used only by ZFS, then it would not be horrible.
> Unless it makes things very hard for the infrastructure.
> But I am circling back to not knowing what problem(s) you and Warner are
> planning to fix.
>
Saying that a feature is optional means nothing; while consumers of the API
might be able to ignore it, the producers of the API cannot ignore it. It is
these producers who are sick right now and should be fixed, instead of
creating new ways to get even more sick.
Scott