Re: Everchanging bytes at the end of mirror disks

From: Warner Losh <imp_at_bsdimp.com>
Date: Sun, 11 Dec 2022 15:08:55 UTC
On Sun, Dec 11, 2022 at 1:45 AM Artem Kuchin <artemkuchin76@gmail.com>
wrote:

> On 11.12.2022 at 11:22, Warner Losh wrote:
>
>
>
> On Sat, Dec 10, 2022, 11:52 PM Artem Kuchin <artemkuchin76@gmail.com>
> wrote:
>
>> Hello!
>>
>> I am writing a small utility for myself, and part of it compares
>> gmirror disks. After running some tests I realized that some bytes at
>> the very end of the disks are constantly changing.
>>
>
> The last sector has metadata about the mirror and about the mirror
> element.  It's this latter data that differs.
>
>
> Thank you for the reply. Then I have several questions:
>
> 1) The last sector is not always 512 bytes, or is it? Do I need to get the
> sector size from diskinfo and subtract it from the disk size?
>

diskinfo(8) will tell you; the kernel returns it via the DIOCGSECTORSIZE ioctl.

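If it helps, the arithmetic is trivial once you have those two values. A minimal Python sketch, assuming a mediasize and sector size like the ones diskinfo(8) prints (the function name and sample numbers are illustrative, not part of any FreeBSD API):

```python
# Given mediasize (total bytes) and sectorsize (bytes per sector), the
# gmirror metadata lives in the last sector of the provider.

def metadata_offset(mediasize: int, sectorsize: int) -> int:
    """Byte offset of the last sector, where gmirror keeps its metadata."""
    if mediasize % sectorsize != 0:
        raise ValueError("mediasize is not a multiple of sectorsize")
    return mediasize - sectorsize

# e.g. a ~500 GB provider with 512-byte sectors:
print(metadata_offset(500_107_862_016, 512))  # → 500107861504
```

Seek to that offset and read sectorsize bytes to get the metadata sector itself.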

> 2) Why its content  is changing so often? On every write? How often? The
> only place to look for description is the gmirror sources?
>
When a mirror breaks (that is, writes can happen to one side but not the
other), we need to know right away which side is the more current one.
gmirror does this by updating the metadata to record how many writes have
happened to each mirror member (one reason those writes are so expensive).
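So for your comparison utility, the practical consequence is to exclude that last sector from the byte-for-byte comparison. An illustrative sketch (this is not gmirror code; the helper name is made up):

```python
# Compare two mirror-member images while ignoring the final sector,
# which holds the per-member metadata (write counters) and is
# expected to differ between members.

def members_match(a: bytes, b: bytes, sectorsize: int = 512) -> bool:
    """True if two member images are identical outside the metadata sector."""
    assert len(a) == len(b) and len(a) % sectorsize == 0
    return a[:-sectorsize] == b[:-sectorsize]

disk1 = b"\x00" * 4096
disk2 = b"\x00" * 3584 + b"\x11" * 512   # differs only in the last sector
print(members_match(disk1, disk2))        # → True
```

In a real tool you would read the two providers in chunks rather than slurping whole disks into memory, but the masking logic is the same.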

> It does not look good to me, but maybe I am wrong? Also, does it mean no
> go for gmirror on SSD?

No. It's fine. All SSDs in the past 15-20 years have wear leveling (and
nearly all for an additional 10 years before that). It's quite hard to wear
out a device by repeated writing to one sector. You effectively have to
write the same amount of data you would if you were writing to multiple
sectors. SSDs are rated in 'drive writes per day': how many times you can
write to all the sectors of a drive, every day, for the warranty period of
the device. This is between 0.3 and 5 typically (though exceptions exist).
Any extra writes will be several orders of magnitude below this threshold
for all but the most insane write patterns (e.g. write all the odd sectors,
randomly, then write all the even sectors randomly, repeatedly). And if you
are doing an insane amount of writing, you likely wouldn't be using
gmirror.... It at most doubles the traffic to the drive, but if you have a
64k block size to UFS, you'd typically see only a few percent increase. So
unless you are writing your data to the drives at rates approaching the
endurance limit of the drive, this extra write won't be an issue.[*]
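A back-of-envelope check of that claim, with illustrative numbers (a hypothetical 1 TB drive at 0.3 DWPD over a 5-year warranty, 64 KiB UFS writes each adding one 512-byte metadata update):

```python
# Rated endurance vs. gmirror metadata overhead, rough arithmetic only.

TB = 10**12
capacity = 1 * TB
dwpd = 0.3                      # drive writes per day (low end of the range)
warranty_days = 5 * 365

endurance_bytes = capacity * dwpd * warranty_days   # total rated write volume

# Hypothetical workload: 1,000,000 writes/day of 64 KiB each,
# each accompanied by one 512-byte metadata update.
daily_data = 1_000_000 * 64 * 1024
daily_metadata = 1_000_000 * 512
overhead = daily_metadata / daily_data

print(f"rated endurance: {endurance_bytes / TB:.0f} TB written")
print(f"metadata overhead: {overhead:.2%}")
```

The overhead works out to well under one percent of the data written, which, thanks to wear leveling, is spread across the flash like any other write.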

Warner

[*] It would theoretically be helpful, though, if gmirror could add an extra
N sectors to match the underlying physical hardware page sizes, but in the
experiments I've done I have not been able to see a speed increase....