Re: cannot remove/detach missing disk from zmirror: no valid replicas
Date: Tue, 03 May 2022 12:43:00 UTC
02.05.2022 12:23, Eugene M. Zheganin wrote:
> Hello,
>
> Any chance I can solve this without replacing? (Yeah, I accidentally
> issued "attach" instead of "replace", being fully confident that I would
> be able to "detach" later. I couldn't have been more wrong.)
>
> [root@replica:~]# zpool status
>   pool: zfsroot
>  state: DEGRADED
> status: One or more devices is currently being resilvered.  The pool will
>         continue to function, possibly in a degraded state.
> action: Wait for the resilver to complete.
>   scan: resilver in progress since Thu Jan  1 03:00:06 1970
>         188G scanned at 122B/s, 21.7G issued at 14B/s, 2.03T total
>         21.8G resilvered, 1.04% done, no estimated completion time
> config:
>
>         NAME                         STATE     READ WRITE CKSUM
>         zfsroot                      DEGRADED     0     0     0
>           mirror-0                   DEGRADED     0     0     0
>             gpt/zfsroot1             ONLINE       0     0     0  (resilvering)
>             gpt/zfsroot0             UNAVAIL      0     0     0  cannot open
>             diskid/DISK-31P58VAASp3  ONLINE       0     0     0  (resilvering)
>
> errors: 6 data errors, use '-v' for a list
>
> [root@replica:~]# zpool detach zfsroot gpt/zfsroot0
> cannot detach gpt/zfsroot0: no valid replicas
>
> [root@replica:~]# uname -a
> FreeBSD replica.scorista.ru 13.1-RC5 FreeBSD 13.1-RC5 releng/13.1-n250141-2e9ad6042be GENERIC amd64
>
> [root@replica:~]# zpool remove zfsroot gpt/zfsroot0
> cannot remove gpt/zfsroot0: operation not supported on this type of pool
>
> [root@replica:~]# zpool split -R /newroot zfsroot newroot diskid/DISK-31P58VAASp3
> Unable to split zfsroot: pool is busy

Reproducing...

# truncate -s 5G file1.img file2.img file3.img
# zpool create ztest mirror $(realpath file1.img) $(realpath file2.img)
# dd if=/dev/urandom bs=1m count=$((4*1024)) of=/ztest/file
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 26.212189 secs (163853817 bytes/sec)
# zpool list ztest
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
ztest  4.50G  4.00G   509M        -         -    64%    88%  1.00x  ONLINE  -
# zpool export ztest
# rm file2.img
# zpool import -d . ztest
# zpool status -v ztest
  pool: ztest
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q
config:

        NAME                       STATE     READ WRITE CKSUM
        ztest                      DEGRADED     0     0     0
          mirror-0                 DEGRADED     0     0     0
            /home/eugen/file1.img  ONLINE       0     0     0
            1055108590663069279    UNAVAIL      0     0     0  was /home/eugen/file2.img
# zpool attach ztest $(realpath file1.img) $(realpath file3.img) && zpool status -v
  pool: ztest
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Tue May  3 12:38:54 2022
        4.00G scanned at 4.00G/s, 16.3M issued at 16.3M/s, 4.00G total
        12.3M resilvered, 0.40% done, 00:04:10 to go
config:

        NAME                       STATE     READ WRITE CKSUM
        ztest                      DEGRADED     0     0     0
          mirror-0                 DEGRADED     0     0     0
            /home/eugen/file1.img  ONLINE       0     0     0
            1055108590663069279    UNAVAIL      0     0     0  was /home/eugen/file2.img
            /home/eugen/file3.img  ONLINE       0     0     0  (resilvering)

errors: No known data errors

So, how did you get into a situation where you have 6 data errors and two mirror members that are both ONLINE and (resilvering) at the same time? I suspect the "6 data errors" are your problem.
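If the recorded errors really are what blocks the detach, one sequence worth trying is: let the resilver finish, deal with the damaged files, clear the error state, and only then detach. This is a sketch only, untested against your pool; the device names are taken from your "zpool status" output, and 'zpool wait' assumes OpenZFS 2.x as shipped with FreeBSD 13:

```shell
# Block until the in-progress resilver completes (OpenZFS 2.x;
# on older versions, poll 'zpool status' instead).
zpool wait -t resilver zfsroot

# 'zpool status -v' lists the six damaged files; restore or delete
# them first, or the error count may never drop to zero.
zpool status -v zfsroot

# Clear the recorded error state and verify with a scrub.
zpool clear zfsroot
zpool scrub zfsroot

# With no outstanding errors, detaching the missing member may
# then succeed.
zpool detach zfsroot gpt/zfsroot0
```

All of this needs root and an imported pool, obviously; nothing here touches data on the surviving members.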