ZFS pools in "trouble"

Willem Jan Withagen wjw at digiware.nl
Wed Feb 26 17:09:53 UTC 2020


Hi,

I'm using my pools in a perhaps rather awkward way, as the underlying 
storage for my Ceph cluster:
	1 disk per pool, with log and cache on SSD
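
For reference, such a pool would have been created more or less like 
this (a sketch only; the device names below are made up):
----
# One whole disk per pool, with SSD partitions attached as the
# ZIL (log) and L2ARC (cache) devices. Names are hypothetical.
zpool create osd_2 da2 \
    log gpt/zil2 \
    cache gpt/l2arc2
----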

For one reason or another, one of the servers has crashed and now does 
not really want to read several of the pools:
----
   pool: osd_2
  state: UNAVAIL
Assertion failed: (reason == ZPOOL_STATUS_OK), file 
/usr/src/cddl/contrib/opensolaris/cmd/zpool/zpool_main.c, line 5098.
Abort (core dumped)
----

The code at line 5098 reads:
----
         default:
                 /*
                  * The remaining errors can't actually be generated, yet.
                  */
                 assert(reason == ZPOOL_STATUS_OK);
----
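
If I read this right, whatever feeds `reason` (zpool_get_status(), I 
assume) is returning a status this switch does not expect, and the 
assert then takes the whole command down. Just as a sketch (untested, 
and only a starting point), the default case could report the value 
instead of aborting, so the rest of the status output survives:
----
         default:
                 /*
                  * Sketch: print the unexpected status instead of
                  * asserting, so the listing can continue.
                  */
                 (void) printf(gettext("status: unexpected pool status "
                     "(%d)\n"), reason);
                 break;
----
That would at least show which ZPOOL_STATUS_* value these pools are 
actually triggering.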
And this has already happened on three disks.
Running:
FreeBSD 12.1-STABLE (GENERIC) #0 r355208M: Fri Nov 29 10:43:47 CET 2019

Now, this is a test cluster, so there is no real harm in terms of data 
loss. And the Ceph cluster can probably rebuild everything as long as I 
do not lose too many disks.

But part of the problem is that not all disks are recognized by the 
kernel, and not all disks end up mounted. So I need to remove a 
pool first to get more disks online.

Is there anything I can do to get them back online?
Or is this a lost cause?

--WjW
