Re: Having a disk listed twice in a raidz3 pool
Date: Mon, 17 Jul 2023 12:10:02 UTC
On Mon, Jul 17, 2023 at 01:59:32PM +0200, Willem Jan Withagen wrote:
> Hi,
>
> I admit it is on Linux, but I still hope to find the answer here...
>
> When replacing a broken disk I ended up with the same disk listed twice,

How did you replace the broken disk? The correct way is to:

1) zpool offline
2) replace the physical disk
3) gpart backup good_disk | gpart restore -F new_disk
4) zpool replace

Also, don't forget to install the boot code on the EFI partition.

> and thus the raidz is in DEGRADED state:
>
>   pool: zfs-data
>  state: DEGRADED
> status: One or more devices has experienced an unrecoverable error.  An
>         attempt was made to correct the error.  Applications are unaffected.
> action: Determine if the device needs to be replaced, and clear the errors
>         using 'zpool clear' or replace the device with 'zpool replace'.
>    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
>   scan: resilvered 1.09G in 00:01:10 with 0 errors on Mon Jul 17 13:36:10 2023
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         zfs-data      DEGRADED     0     0     0
>           raidz3-0    DEGRADED     0     0     0
>             sdb       ONLINE       0     0     0
>             sdd       ONLINE       0     0     0
>             sde       ONLINE       0     0     0
>             sdf       ONLINE       0     0     0
>             sdg       ONLINE       0     0     0
>             sdh       ONLINE       0     0     0
>             sdi       ONLINE       0     0     0
>             sdj       ONLINE       0     0     0
>             sdk       ONLINE       0     0     0
>             sdl       OFFLINE      0     0     0
>             sdl       ONLINE       0     0     0
>           raidz3-1    ONLINE       0     0     0
>             sdm       ONLINE       0     0     0
>             sdn       ONLINE       0     0     0
>             sdo       ONLINE       0     0     0
>             sdq       ONLINE       0     0     5
>             sdp       ONLINE       0     0     5
>             sdr       ONLINE       0     0     0
>             sds       ONLINE       0     0     0
>             sdt       ONLINE       0     0     0
>             sdu       ONLINE       0     0     0
>             sdv       ONLINE       0     0     0
>             sdw       ONLINE       0     0     0
>             sdx       ONLINE       0     0     0
>
> errors: No known data errors
>
> Any idea how to fix this?
>
> Regards,
> --WjW

-- 
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0

No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
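For concreteness, the replacement procedure described in this thread can be sketched as the shell session below. This is a hedged illustration for a FreeBSD host, not the poster's exact commands: the pool name zfs-data comes from the thread, but the device names (ada3 as the failed member, ada4 as its replacement, ada0 as a known-good member) and the EFI partition index p1 are placeholders you must adapt to your own system.

```shell
#!/bin/sh
# 1) Take the failing disk offline so ZFS stops issuing I/O to it.
zpool offline zfs-data ada3

# 2) Physically swap the disk. (The new device is assumed to appear as ada4.)

# 3) Clone the partition table from a healthy member onto the new disk.
gpart backup ada0 | gpart restore -F ada4

#    Don't forget the boot code: if the disk carries an EFI system
#    partition, copy the loader onto it (p1 and the paths are assumptions;
#    verify the layout first with `gpart show ada4`).
mount_msdosfs /dev/ada4p1 /mnt
cp /boot/loader.efi /mnt/efi/boot/bootx64.efi
umount /mnt

# 4) Tell ZFS to resilver onto the new disk, naming the old device as
#    it appeared in `zpool status`.
zpool replace zfs-data ada3 ada4
```

Skipping step 3 and running `zpool replace` against the raw disk is a common way to end up with mismatched partitioning across members, which is why the gpart backup/restore step comes before the replace.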