Re: Unable to replace drive in raidz1
- In reply to: mike tancsa : "Re: Unable to replace drive in raidz1"
Date: Fri, 06 Sep 2024 15:56:25 UTC
On Fri, Sep 6, 2024 at 8:44 AM mike tancsa <mike@sentex.net> wrote:
> On 9/6/2024 10:24 AM, Chris Ross wrote:
> >   NAME                      STATE     READ WRITE CKSUM
> >   tank                      DEGRADED     0     0     0
> >     raidz1-0                DEGRADED     0     0     0
> >       da3                   FAULTED      0     0     0  external device fault
> >       da1                   ONLINE       0     0     0
> >       da2                   ONLINE       0     0     0
> >     raidz1-1                ONLINE       0     0     0
> >       diskid/DISK-K1GMBN9D  ONLINE       0     0     0
> >       diskid/DISK-K1GMEDMD  ONLINE       0     0     0
> >       diskid/DISK-K1GMAX1D  ONLINE       0     0     0
> >     raidz1-2                ONLINE       0     0     0
> >       diskid/DISK-3WJDHJ2J  ONLINE       0     0     0
> >       diskid/DISK-3WK3G1KJ  ONLINE       0     0     0
> >       diskid/DISK-3WJ7ZMMJ  ONLINE       0     0     0
>
> I would triple-check what the devices are that are part of the pool. I
> wish there was a way to tell ZFS to only display one or the other. So
> list out what diskid/DISK-K1GMBN9D, diskid/DISK-K1GMEDMD, ... through
> diskid/DISK-3WJ7ZMMJ actually are in terms of /dev/da*. I have some
> controllers that will re-order the disks on every reboot. "glabel status"
> and "camcontrol devlist" should help verify.

You can't tell ZFS specifically to use one form of GEOM ID vs another, but you can tell the whole system which GEOM IDs not to use. Add the following to /boot/loader.conf:

kern.geom.label.disk_ident.enable="0"  # Disable the auto-generated disk IDs for disks
kern.geom.label.gptid.enable="0"       # Disable the auto-generated GPT UUIDs for disks
kern.geom.label.ufsid.enable="0"       # Disable the auto-generated UFS UUIDs for filesystems

The first line removes the diskid/DISK-* entries and shows the device nodes (daX) instead. The other two lines remove the GPT and UFS UUIDs as well.

All my ZFS systems have those entries in loader.conf, as I prefer to use GPT partition labels in my pools (gpt/label-name), where the label records which specific JBOD chassis and drive bay the HD is located in. That way it doesn't matter if the device nodes are renumbered, as the labels don't change. That makes it much easier to find the specific drive to be replaced, whether in my home server with 6 drives or my backup servers at work with multiple JBODs and 92 drives.
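For anyone following along, the GPT-label approach sketched above looks roughly like this. This is a hypothetical walkthrough, not commands from the original poster's system: the label name "jbod1-bay05" and the assumption that the replacement disk appears as da3 are made up for illustration, so adjust them to your own hardware before running anything.

```shell
# First, map the diskid/DISK-* entries back to daX nodes, as mike suggested:
glabel status
camcontrol devlist

# Partition the replacement disk and give it a GPT label that describes
# its physical location (hypothetical: JBOD 1, drive bay 5):
gpart create -s gpt da3
gpart add -t freebsd-zfs -l jbod1-bay05 da3

# Replace the faulted vdev member using the label instead of the daX node:
zpool replace tank da3 gpt/jbod1-bay05

# After the resilver, zpool status shows gpt/jbod1-bay05 for that member,
# and the name stays stable even if the controller renumbers the disks.
```

The point of encoding the chassis and bay in the label is that "zpool status" then tells you exactly which physical drive to pull, with no detour through serial numbers or device-node guesswork.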
-- Freddie Cash fjwcash@gmail.com