zpool (online|replace|labelclear) issues, -f option also failing
Ronald Klop
ronald-lists at klop.ws
Wed Sep 28 10:57:43 UTC 2016
Hi,
As a start, you can set these tunables in /boot/loader.conf to avoid the
confusion between gptid and disk_ident labels. I disabled gptid on my
computer, but if I understand correctly you would like to disable
disk_ident. For ZFS it should not matter which one you use. These are my
current settings:
$ sysctl kern.geom.label
kern.geom.label.disk_ident.enable: 1
kern.geom.label.gptid.enable: 0
kern.geom.label.gpt.enable: 1
kern.geom.label.ufs.enable: 1
kern.geom.label.ufsid.enable: 1
kern.geom.label.reiserfs.enable: 1
kern.geom.label.ntfs.enable: 1
kern.geom.label.msdosfs.enable: 1
kern.geom.label.iso9660.enable: 1
kern.geom.label.ext2fs.enable: 1
kern.geom.label.debug: 0
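For example, to disable disk_ident labels you could put this line in
/boot/loader.conf (a sketch; these are the loader tunables behind the
sysctls above, and they take effect at the next boot):

kern.geom.label.disk_ident.enable="0"

or, to disable gptid labels instead, as I did:

kern.geom.label.gptid.enable="0"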
Further: does ZFS see 14989197580381994958 and
gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6 as the same disk? 'zpool replace'
also has a mode to replace a disk 'with itself'; just give it a single
device parameter, like this:
# zpool replace tank 14989197580381994958
or
# zpool replace tank gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6
Does that help?
Oh, while reading your mail again: you have 2 GB of swap configured at the
start of the disk, so wiping 2 MB at the start of the disk does not touch
the freebsd-zfs metadata of the da14p2 partition, which only begins about
2 GB in. Try wiping 3 GB from the start and the end of the disk and
repartitioning it.
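If it comes to that, here is a minimal sketch of the wipe with dd(1),
assuming the disk is da14 as in your output below; double-check the device
name first, since this is destructive. diskinfo(8) prints the media size in
bytes as its third field, which gives the seek offset for the last 3 GB:

# dd if=/dev/zero of=/dev/da14 bs=1m count=3072
# dd if=/dev/zero of=/dev/da14 bs=1m \
    seek=$(( $(diskinfo da14 | awk '{print $3}') / 1048576 - 3072 ))

The first command zeroes the first 3 GB. The second seeks to roughly 3 GB
before the end (seek counts 1 MiB blocks when bs=1m) and writes until it
hits the end of the device, so expect dd to exit with an "end of device"
complaint. Afterwards recreate the partitions with gpart(8).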
Good luck.
Ronald.
On Tue, 27 Sep 2016 22:53:27 +0200, Ultima <ultima1252 at gmail.com> wrote:
> Hello,
>
> I am currently trying to replace a disk that was offlined, and I am getting
> the following errors:
>
> # zpool replace tank 14989197580381994958 gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6
> invalid vdev specification
> use '-f' to override the following errors:
> /dev/gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6 is part of active pool 'tank'
>
> # zpool replace -f tank 14989197580381994958 gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6
> invalid vdev specification
> the following errors must be manually repaired:
> /dev/gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6 is part of active pool 'tank'
>
> # zpool status tank
>   pool: tank
>  state: DEGRADED
> status: One or more devices has been taken offline by the administrator.
>         Sufficient replicas exist for the pool to continue functioning in a
>         degraded state.
> action: Online the device using 'zpool online' or replace the device with
>         'zpool replace'.
>   scan: resilvered 1.10T in 9h4m with 0 errors on Tue Sep 20 00:33:32 2016
> config:
>
> NAME                                            STATE     READ WRITE CKSUM
> tank                                            DEGRADED     0     0     0
>   raidz2-0                                      ONLINE       0     0     0
>     gptid/8bdbd180-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8c4df91d-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8ccf21a3-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8d5521cb-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8de13b47-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8e842f92-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>   raidz2-1                                      DEGRADED     0     0     0
>     gptid/8bba4a82-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8c26d491-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8ca3fea6-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     14989197580381994958                        OFFLINE      0     0     0  was /dev/diskid/DISK-********p2
>     gptid/8db26351-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8e4bfa70-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>   raidz2-2                                      ONLINE       0     0     0
>     gptid/8b957b47-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8c0340da-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8c77ddcb-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8cf6b7f1-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8d84b31e-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8e146dad-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>   raidz2-3                                      ONLINE       0     0     0
>     gptid/8ebb39df-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8ef49770-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/2f94035d-7e9f-11e6-abe9-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8f69cf08-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8fa7c0a6-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
>     gptid/8fe7816d-f52a-11e5-90c5-fcaa14edc6a6  ONLINE       0     0     0
> logs
>   gptid/683dc146-f531-11e5-90c5-fcaa14edc6a6    ONLINE       0     0     0
>
> errors: No known data errors
>
> # glabel status | grep da14
> gptid/24a57a9b-84f0-11e6-bbbc-fcaa14edc6a6 N/A da14p1
> gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6 N/A da14p2
> diskid/DISK-******** N/A da14
>
> # gpart show da13 da14
> => 40 7814037088 da13 GPT (3.6T)
> 40 4194304 1 freebsd-swap (2.0G)
> 4194344 7809842784 2 freebsd-zfs (3.6T)
>
> => 40 7814037088 da14 GPT (3.6T)
> 40 4194304 1 freebsd-swap (2.0G)
> 4194344 7809842784 2 freebsd-zfs (3.6T)
>
> # uname -a
> FreeBSD S1 12.0-CURRENT FreeBSD 12.0-CURRENT #4 r306300: Sat Sep 24
> 14:24:23 EDT 2016
> root at S1:/usr/src/head/obj/usr/src/head/src/sys/MYKERNEL-NODEBUG
> amd64
>
> I recently offlined the device, and after onlining it the label changed to
> the geom name. After a few reboots the pool started importing it by diskid.
> Attempts to offline/online it by gptid continued to fail with an error, so
> I decided to try to replace it, and that is also failing with the error
> above. I also wiped the first & last 2 MB of the disk without success. Is
> there a known issue, or am I perhaps missing something obvious? zpool
> labelclear is also giving a similar error, and the -f options are not
> helping.
>
>
> Any ideas what my issue may be? The error suggests the device is currently
> active in the pool, but the offline should have changed that status,
> correct?
>
> Ultima