Re: ZFS: How may I get rid of a zpool that has no extant devices?

From: Freddie Cash <fjwcash_at_gmail.com>
Date: Fri, 28 Jan 2022 17:06:59 UTC
On Thu, Jan 27, 2022 at 7:51 PM David Wolfskill <david@catwhisker.org>
wrote:

> TL;DR: I had created a "zroot" zpool in an attempt to get a new machine
> booting from ZFS.  I gave up on that (for reasons that aren't important
> for this discussion), sliced and partitioned the first drive (ada0),
> then made a raidz1 pool of the remaining 5 drives; the zpool is called
> "tank" (which is mostly a poudriere scratchpad).
>

Did you do a "zpool destroy zroot" before partitioning the devices for use
in the tank pool?  If not, that's why ZFS still thinks the zroot pool is
"available": it sees the old ZFS labels left behind on the devices.
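You can verify that by dumping the on-disk labels with zdb, which is
read-only and safe to run (adjust the device name to match your layout):

    # zdb -l /dev/ada0

Any leftover "zroot" metadata will show up in that output; a device with
no label prints "failed to unpack label" for each of the four label copies.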


> Now "tank" seems fine, but "zroot" shows up as (allegedly) "importable"
> but UNAVAIL; anything I try to do with it generates some form of
> "no such pool" whine.
>
> How may I make "zroot" disappear?
>
> root@freetest:/boot # zfs list
> NAME             USED  AVAIL  REFER  MOUNTPOINT
> tank            30.6G  3.57T  12.6G  /tank
> tank/poudriere  17.8G  3.57T  17.8G  /tank/poudriere
> root@freetest:/boot # zpool status
>   pool: tank
>  state: ONLINE
>   scan: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         tank        ONLINE       0     0     0
>           raidz1-0  ONLINE       0     0     0
>             ada1    ONLINE       0     0     0
>             ada2    ONLINE       0     0     0
>             ada3    ONLINE       0     0     0
>             ada4    ONLINE       0     0     0
>             ada5    ONLINE       0     0     0
>
> errors: No known data errors
> root@freetest:/boot # zpool import
>    pool: zroot
>      id: 16397883415809375312
>   state: UNAVAIL
>  status: One or more devices are missing from the system.
>  action: The pool cannot be imported. Attach the missing
>         devices and try again.
>    see: http://illumos.org/msg/ZFS-8000-3C
>  config:
>
>         zroot                     UNAVAIL  insufficient replicas
>           raidz1-0                UNAVAIL  insufficient replicas
>             6484790396862720571   UNAVAIL  cannot open
>             14408271149544307738  UNAVAIL  cannot open
>             2973420537959971822   UNAVAIL  cannot open
>             17206168682675537956  UNAVAIL  cannot open
>             16237056652067533889  UNAVAIL  cannot open
> root@freetest:/boot # zpool destroy zroot
> cannot open 'zroot': no such pool
>
> I am willing to back up tank, destroy the whole mess, and restore it;
> the machine is still in its "shakedown" phase, and is destined to become
> my new build machine (so it should spend most of its time powered off).
>
> That said, if there's a (sane) way to clean this up without the backup/
> restore, I'd appreciate knowing about it.
>

If nothing else works, the nuclear option is to run "zpool labelclear" on
each of the devices used for the zroot pool.  That zeroes the ZFS label
areas at the beginning and end of each device; run against a whole disk,
it also takes out the GPT partition table and its backup, leaving the
device essentially "unformatted" and ready for reuse.
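If you do go the backup/destroy/restore route you offered, the full
sequence would look something like this (device names taken from your
"zpool status" output; double-check each one before running, since
labelclear is destructive):

    # zpool destroy tank
    # zpool labelclear -f /dev/ada1
    (repeat for ada2 through ada5, and ada0 if needed)
    # zpool create tank raidz1 ada1 ada2 ada3 ada4 ada5

Then restore your poudriere data from backup.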

Since you aren't using ZFS on ada0, you might be able to run "zpool
labelclear" on just that device to make the stale zroot disappear; the
other drives carry valid ZFS labels for tank, so leave them alone.
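If the stale label actually lives inside one of ada0's old partitions
rather than on the raw device, point labelclear at that partition
instead; clearing the raw device would also wipe ada0's current
partition table.  For example (the partition name here is only a guess;
check "gpart show ada0" for the real layout):

    # zpool labelclear -f /dev/ada0p3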

-- 
Freddie Cash
fjwcash@gmail.com