clear old pool remnants from active vdevs
Andriy Gapon
avg at FreeBSD.org
Thu Apr 26 07:50:28 UTC 2018
On 26/04/2018 10:28, Eugene M. Zheganin wrote:
> Hello,
>
>
> I have some active vdev disk members that used to be in pools that clearly have
> not been destroyed properly, so I'm seeing in the "zpool import" output something
> like
>
>
> # zpool import
> pool: zroot
> id: 14767697319309030904
> state: UNAVAIL
> status: The pool was last accessed by another system.
> action: The pool cannot be imported due to damaged devices or data.
> see: http://illumos.org/msg/ZFS-8000-EY
> config:
>
>         zroot                    UNAVAIL  insufficient replicas
>           mirror-0               UNAVAIL  insufficient replicas
>             5291726022575795110  UNAVAIL  cannot open
>             2933754417879630350  UNAVAIL  cannot open
>
> pool: esx
> id: 8314148521324214892
> state: UNAVAIL
> status: The pool was last accessed by another system.
> action: The pool cannot be imported due to damaged devices or data.
> see: http://illumos.org/msg/ZFS-8000-EY
> config:
>
>         esx                       UNAVAIL  insufficient replicas
>           mirror-0                UNAVAIL  insufficient replicas
>             10170732803757341731  UNAVAIL  cannot open
>             9207269511643803468   UNAVAIL  cannot open
>
>
> is there any _safe_ way to get rid of this? I'm asking because the gptzfsboot
> loader in recent -STABLE stumbles upon this and refuses to boot the system
> (https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=227772). The workaround is to
> use the 11.1 loader, but I'm afraid the new behavior is the intended one.
You can try to use zdb -l to find the stale labels.
And then zpool labelclear to clear them.
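
For example, a minimal sketch (the /dev/ada0 and /dev/ada0p3 paths are only
placeholders; check which device nodes actually carry the stale labels on your
system, and make sure the node you clear is not part of a pool you still use):

# zdb -l /dev/ada0
# zdb -l /dev/ada0p3
# zpool labelclear -f /dev/ada0p3

zdb -l prints any ZFS labels found on the given device, including the pool name
and GUID, so you can match them against the stale zroot/esx entries shown above.
zpool labelclear only wipes the label areas of that device; the -f flag may be
needed here since the labels belong to a pool that was last used by another
system.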
--
Andriy Gapon