Re: ZFS: How may I get rid of a zpool that has no extant devices?

From: Andriy Gapon <avg_at_FreeBSD.org>
Date: Fri, 28 Jan 2022 07:58:59 UTC
On 28/01/2022 05:51, David Wolfskill wrote:
> TL;DR: I had created a "zroot" zpool in an attempt to get a new machine
> booting from ZFS.  I gave up on that (for reasons that aren't important
> for this discussion), sliced and partitioned the first drive (ada0),
> then made a raidz1 pool of the remaining 5 drives; the zpool is called
> "tank" (which is mostly a poudriere scratchpad).
> 
> Now "tank" seems fine, but "zroot" shows up as (allegedly) "importable"
> but UNAVAIL; anything I try to do with it generates some form of
> "no such pool" whine.
> 
> How may I make "zroot" disappear?

There are two possibilities: either that pool is still listed in zpool.cache, or 
it is recorded in stale pool label(s) left on the disks.
Depending on which case it is, the solution is different.
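Something like the following should tell you which case applies (the cache file 
path and the ada0p4 device name below are only examples, adjust them to your 
layout):

  # is the stale pool still recorded in the cache file?
  zdb -U /boot/zfs/zpool.cache -C

  # and/or is there still an old ZFS label on a given partition?
  zdb -l /dev/ada0p4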
For the zpool.cache case, you can simply remove the file and then regenerate it.
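A minimal sketch of that, assuming the cache lives at /boot/zfs/zpool.cache (on 
newer OpenZFS-based FreeBSD it may be /etc/zfs/zpool.cache instead):

  # move the stale cache aside...
  mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.old

  # ...and let the pool that is actually imported write a fresh one
  zpool set cachefile=/boot/zfs/zpool.cache tank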
The other case is more complex.  In the past I dealt with it by careful use 
of dd.
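Roughly what I mean, as a sketch only: ada0p4 below is just a placeholder for 
wherever the stale zroot label actually sits (verify with zdb -l first), and 
obviously only run this against a partition that holds nothing you still need.  
These days "zpool labelclear" can often replace the dd dance:

  # the easy route, if ZFS will let you:
  zpool labelclear -f /dev/ada0p4

  # the dd route: ZFS keeps two 256 KiB labels at the front and two at the
  # back of the vdev, so zeroing the first and last MiB covers all four
  dd if=/dev/zero of=/dev/ada0p4 bs=1m count=1
  sz=$(diskinfo /dev/ada0p4 | awk '{print $3}')   # media size in bytes
  dd if=/dev/zero of=/dev/ada0p4 bs=1m seek=$((sz / 1048576 - 1))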

> root@freetest:/boot # zfs list
> NAME             USED  AVAIL  REFER  MOUNTPOINT
> tank            30.6G  3.57T  12.6G  /tank
> tank/poudriere  17.8G  3.57T  17.8G  /tank/poudriere
> root@freetest:/boot # zpool status
>    pool: tank
>   state: ONLINE
>    scan: none requested
> config:
> 
>          NAME        STATE     READ WRITE CKSUM
>          tank        ONLINE       0     0     0
>            raidz1-0  ONLINE       0     0     0
>              ada1    ONLINE       0     0     0
>              ada2    ONLINE       0     0     0
>              ada3    ONLINE       0     0     0
>              ada4    ONLINE       0     0     0
>              ada5    ONLINE       0     0     0
> 
> errors: No known data errors
> root@freetest:/boot # zpool import
>     pool: zroot
>       id: 16397883415809375312
>    state: UNAVAIL
>   status: One or more devices are missing from the system.
>   action: The pool cannot be imported. Attach the missing
>          devices and try again.
>     see: http://illumos.org/msg/ZFS-8000-3C
>   config:
> 
>          zroot                     UNAVAIL  insufficient replicas
>            raidz1-0                UNAVAIL  insufficient replicas
>              6484790396862720571   UNAVAIL  cannot open
>              14408271149544307738  UNAVAIL  cannot open
>              2973420537959971822   UNAVAIL  cannot open
>              17206168682675537956  UNAVAIL  cannot open
>              16237056652067533889  UNAVAIL  cannot open
> root@freetest:/boot # zpool destroy zroot
> cannot open 'zroot': no such pool
> 
> I am willing to back up tank, destroy the whole mess, and restore it;
> the machine is still in its "shakedown" phase, and is destined to become
> my new build machine (so it should spend most of its time powered off).
> 
> That said, if there's a (sane) way to clean this up without the backup/
> restore, I'd appreciate knowing about it.
> 
> Thanks!
> 
> Peace,
> david


-- 
Andriy Gapon