Re: FreeBSD 13.2-STABLE can not boot from damaged mirror AND pool stuck in "resilver" state even without new devices.
Date: Sun, 07 Jan 2024 15:50:19 UTC
On 1/7/2024 10:38, Miroslav Lachman wrote:
> On 07/01/2024 15:49, Lev Serebryakov wrote:
>> On 05.01.2024 18:28, Lev Serebryakov wrote:
>>
>>> After that my server fails to boot; gptzfsboot from the second disk
>>> (ada1) reports several "zio_read error: 5" and
>>>
>>> ZFS: i/o error - all block copies unavailable
>>> ZFS: can't read MOS of pool zroot
>>>
>>> after that.
>>
>> I've re-created the pool from scratch:
>>
>> zpool create znewroot ada0p3 && zfs send zroot | zfs receive znewroot && zpool destroy zroot && zpool attach znewroot ada0p3 ada1p3
>>
>> but gptzfsboot still cannot boot from it, with the same diagnostics :-(
>
> How large are the disks in question?
>
> I was bitten by this not long ago when migrating my 2TB pool via
> zfs send to larger disks (4TB); then I saw the error:
>
> ZFS: i/o error - all block copies unavailable
> ZFS: can't read MOS of pool zroot
>
> .....
>
> It can also be avoided if your machine supports EFI boot, but my HP
> Microserver Gen 8 does not support it.
>
> Kind regards
> Miroslav Lachman

Yes. gptzfsboot (non-EFI) uses BIOS calls to read the pool headers and ultimately the kernel. If the pool you wish to boot from is not contained entirely within the first 2TB of the drive (which can certainly happen on a larger disk if you are not segregating the boot pool from the rest of the space), it can fail to load, because some of the data it has to read lies beyond the 32-bit block offset those calls can address.

If you have (for example) 4TB drives, the way to avoid this is to make the boot pool just large enough to hold the operating system, and put everything else in a second pool (which can be of any size), with the boot pool composed of vdevs whose blocks all lie within the first 2TB.

-- 
Karl Denninger
karl@denninger.net
/The Market Ticker/
/[S/MIME encrypted email preferred]/
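[A partitioning sketch of the layout Karl describes. The device name (ada0), labels, and sizes below are illustrative assumptions, not taken from the thread; adjust them for your hardware. This is a hedged example of one way to keep the boot pool within the first 2TB on a legacy-BIOS machine, not a definitive recipe.]

```shell
# Illustrative layout for a hypothetical 4TB disk ada0 (labels/sizes are
# assumptions for this sketch).
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512k -l boot0 ada0      # legacy boot code slot
gpart add -t freebsd-swap -s 8g   -l swap0 ada0
gpart add -t freebsd-zfs  -s 100g -l zroot0 ada0     # small root pool, well
                                                     # inside the first 2TB
gpart add -t freebsd-zfs          -l zdata0 ada0     # rest of the disk (~3.9TB)
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

# Boot pool entirely within the first 2TB; bulk data in a second pool:
zpool create zroot gpt/zroot0
zpool create zdata gpt/zdata0
```

Mirroring works the same way: repeat the gpart layout on the second disk and create both pools as mirrors, and gptzfsboot will never need to read past the 32-bit block-offset boundary.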