ZFS unable to import pool

Gena Guchin ggulchin at icloud.com
Wed Apr 23 14:10:46 UTC 2014


looks like this is what I did :(

On Apr 23, 2014, at 5:03 AM, Johan Hendriks <joh.hendriks at gmail.com> wrote:

> 
> On 23-04-14 14:00, Hugo Lombard wrote:
>> On Wed, Apr 23, 2014 at 12:18:37PM +0200, Johan Hendriks wrote:
>>> Did you add an extra disk to the pool at some point in the past?
>>> That could explain the whole issue, since the pool is missing an entire vdev.
>>> 
>> I agree that there's a vdev missing...
>> 
>> I was able to "simulate" the current problematic import state (sans
>> failed "disk7", since that doesn't seem to be the stumbling block) by
>> adding 5 disks [1] to get to here:
>> 
>>   # zpool status test
>>     pool: test
>>    state: ONLINE
>>     scan: none requested
>>   config:
>> 
>>           NAME        STATE     READ WRITE CKSUM
>>           test        ONLINE       0     0     0
>>             raidz1-0  ONLINE       0     0     0
>>               md3     ONLINE       0     0     0
>>               md4     ONLINE       0     0     0
>>               md5     ONLINE       0     0     0
>>               md6     ONLINE       0     0     0
>>               md7     ONLINE       0     0     0
>>             raidz1-2  ONLINE       0     0     0
>>               md8     ONLINE       0     0     0
>>               md9     ONLINE       0     0     0
>>               md10    ONLINE       0     0     0
>>               md11    ONLINE       0     0     0
>>               md12    ONLINE       0     0     0
>>           logs
>>             md1s1     ONLINE       0     0     0
>>           cache
>>             md1s2     ONLINE       0     0     0
>> 
>>   errors: No known data errors
>>   #
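>> 
>> For reference, a rough sketch of the commands that produce a layout
>> like this (assuming 1 GB swap-backed md devices and an MBR-sliced md1
>> for the log and cache slices; the exact sizes are immaterial):
>> 
>>   # mdconfig -a -t swap -s 1g -u 1
>>   # for u in $(seq 3 12); do mdconfig -a -t swap -s 1g -u $u; done
>>   # gpart create -s MBR md1
>>   # gpart add -t freebsd -s 512m md1     # becomes md1s1 (log)
>>   # gpart add -t freebsd md1             # becomes md1s2 (cache)
>>   # zpool create test raidz1 md3 md4 md5 md6 md7
>>   # zpool add test raidz1 md8 md9 md10 md11 md12
>>   # zpool add test log md1s1
>>   # zpool add test cache md1s2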
>> 
>> Then I exported it and removed md8-md12, which results in:
>> 
>>   # zpool import
>>      pool: test
>>        id: 8932371712846778254
>>     state: UNAVAIL
>>    status: One or more devices are missing from the system.
>>    action: The pool cannot be imported. Attach the missing
>>            devices and try again.
>>       see: http://illumos.org/msg/ZFS-8000-6X
>>    config:
>> 
>>           test         UNAVAIL  missing device
>>             raidz1-0   ONLINE
>>               md3      ONLINE
>>               md4      ONLINE
>>               md5      ONLINE
>>               md6      ONLINE
>>               md7      ONLINE
>>           cache
>>             md1s2
>>           logs
>>             md1s1      ONLINE
>> 
>>           Additional devices are known to be part of this pool, though their
>>           exact configuration cannot be determined.
>>   #
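>> 
>> The export-and-remove step is just (sketch; the md unit numbers match
>> the device names above):
>> 
>>   # zpool export test
>>   # for u in 8 9 10 11 12; do mdconfig -d -u $u; done
>>   # zpool import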
>> 
>> One more data point:  In the 'zdb -l' output on the log device it shows
>> 
>>   vdev_children: 2
>> 
>> for the pool consisting of raidz1 + log + cache, but it shows
>> 
>>   vdev_children: 3
>> 
>> for the pool with raidz1 + raidz1 + log + cache.  The pool in the
>> problem report also shows 'vdev_children: 3' [2].
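>> 
>> That value is read straight off the vdev labels, e.g. (sketch, reading
>> the test pool's log slice; zdb prints the field once per label copy):
>> 
>>   # zdb -l /dev/md1s1 | grep vdev_children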
>> 
>> 
>> 
>> [1] Trying to add a single device resulted in zpool add complaining
>> with:
>> 
>>   mismatched replication level: pool uses raidz and new vdev is disk
>> 
>> and trying it with three disks said:
>> 
>>   mismatched replication level: pool uses 5-way raidz and new vdev uses 3-way raidz
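>> 
>> In full, the complaint looks roughly like this (sketch), and it is the
>> check that -f bypasses:
>> 
>>   # zpool add test md8
>>   invalid vdev specification
>>   use '-f' to override the following errors:
>>   mismatched replication level: pool uses raidz and new vdev is disk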
>> 
>> 
>> [2] http://lists.freebsd.org/pipermail/freebsd-fs/2014-April/019340.html
>> 
> But you can force it....
> If you force it, ZFS will add a vdev that does not match the existing vdevs, so you end up with a raidz1 vdev plus a single-disk vdev with no parity in the same pool. If that single-disk vdev is then destroyed or lost, you are left with a pool that cannot be repaired, as far as I know.
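> 
> Using the simulated test pool above as an illustration (hypothetical,
> not the commands actually run on the broken pool), a forced add would be:
> 
>   # zpool add -f test md8      # md8 becomes its own top-level vdev
> 
> After that, md8 carries pool data with no redundancy, and since top-level
> vdevs cannot be removed, losing it means losing the pool.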
> 
> regards
> Johan
> 
> 


