[Bug 263473] ZFS drives fail to mount datasets when rebooting - 13.1-RC4

From: <bugzilla-noreply_at_freebsd.org>
Date: Tue, 03 Jan 2023 05:14:02 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=263473

Xin LI <delphij@FreeBSD.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |delphij@FreeBSD.org

--- Comment #18 from Xin LI <delphij@FreeBSD.org> ---
(In reply to virushuo from comment #17)

Some additional details (I've talked with the reporter over Telegram):

Both the old and the new system have an on-board RAID controller; the old
system's was flashed to IT mode, the new system's was a Dell H730, and the
owner chose not to flash it to avoid bricking it.

On the old system, / was on ZFS (two disks in a mirrored zpool); the new
system uses a mirrored UFS for /.

The disk array showed up as NETAPP DS424IOM6; it was connected to the same HBA
moved from the old system to the new system.

We observed that the ses(4) device for the NetApp disk array appeared quite
*late* at boot, after /etc/rc.d/fsck had already run, and the disks only
showed up after that.  In the current RC order, /etc/rc.d/zfs runs much
earlier, so it died with:

cannot import '<pool0>': no such pool or dataset
        Destroy and re-create the pool from
        a backup source.
cannot import '<pool1>': no such pool or dataset
        Destroy and re-create the pool from
        a backup source.
cachefile import failed, retrying
nvpair_value_nvlist(nvp, &rv) == 0 (0x16 == 0)
ASSERT at
/usr/src/sys/contrib/openzfs/module/nvpair/fnvpair.c:586:fnvpair_value_nvlist()
pid 48 (zpool), jid 0, uid 0: exited on signal 6
Abort trap
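One conceivable mitigation for this kind of late-attaching enclosure is to
retry the cachefile import for a while instead of giving up after a single
failed attempt.  Below is a minimal sketch of such a retry loop; the helper
name, the retry budget, and the example zpool invocation are all assumptions
for illustration, not the actual fix applied to /etc/rc.d/zfs:

```shell
#!/bin/sh
# Hypothetical sketch: keep retrying a command (e.g. the cachefile import)
# until it succeeds or a retry budget is exhausted, giving ses(4)/da(4)
# devices time to attach.
retry_import() {
    _tries=$1; shift
    while [ "$_tries" -gt 0 ]; do
        "$@" && return 0           # command succeeded
        _tries=$((_tries - 1))
        sleep 1                    # wait for late-attaching devices
    done
    return 1                       # gave up; devices never appeared
}

# Example (assumed invocation, matching the usual cachefile import):
#   retry_import 30 zpool import -c /boot/zfs/zpool.cache -a -N
```

Whether a fixed sleep/retry loop is appropriate here is debatable; ordering
the zfs rc script after device enumeration settles would be cleaner, but the
sketch shows the shape of a stopgap.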

-- 
You are receiving this mail because:
You are the assignee for the bug.