Corrupt ZFS pool after export/import
Markus Teichmann
jmt at lf28.net
Sat Dec 15 16:09:26 UTC 2012
I lost a single-device ZFS pool during an export/import.
History:
I had a ZFS root pool and wanted to move it to a mirrored pool, so I
moved the root environment from the single-device pool sys to the new
pool sys0. Then I booted from the new pool sys0 and renamed the old pool
sys to sys1. Next I booted from sys1 and renamed sys0 to sys. That was
the last time the old pool was accessible.
There was one problem when I booted sys1: zpool showed me three pools,
sys, sys0 and sys1, so the renaming from sys to sys1 had partly failed.
I exported the pool sys and then renamed sys0 to sys. The next time I
booted from the new pool, the old one was lost.
All of this happened on 9-RELEASE. I have since installed 10-CURRENT,
but the result is the same.
Meanwhile I have renamed the new mirrored pool sys back to sys0, so I
can work on the old pool without naming conflicts. The old pool still
carries the name sys.
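For completeness, a pool rename is just an export followed by an import
under a new name, so the steps above amounted to roughly the following
(reconstructed from memory; my exact invocations may have differed):

```shell
# From the sys0 boot environment: rename the old root pool sys to sys1.
# ZFS has no rename command; a pool is renamed by re-importing it.
zpool export sys
zpool import sys sys1

# Later, from sys1: rename the new mirrored pool sys0 to sys.
zpool export sys0
zpool import sys0 sys
```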
What I've done so far:
#zpool import
   pool: sys
     id: 874712540419822651
  state: FAULTED
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-5E
 config:

        sys                     FAULTED  corrupted data
          14109631078429946324  FAULTED  corrupted data
#zdb -l /dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 28
    name: 'sys'
    state: 1
    txg: 840285
    pool_guid: 874712540419822651
    hostid: 4266313884
    hostname: 'mcp.lf28.net'
    top_guid: 14109631078429946324
    guid: 14109631078429946324
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 14109631078429946324
        path: '/dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b'
        phys_path: '/dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b'
        whole_disk: 1
        metaslab_array: 30
        metaslab_shift: 34
        ashift: 12
        asize: 1991803142144
        is_log: 0
        create_txg: 4
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 28
    name: 'sys'
    state: 1
    txg: 840285
    pool_guid: 874712540419822651
    hostid: 4266313884
    hostname: 'mcp.lf28.net'
    top_guid: 14109631078429946324
    guid: 14109631078429946324
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 14109631078429946324
        path: '/dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b'
        phys_path: '/dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b'
        whole_disk: 1
        metaslab_array: 30
        metaslab_shift: 34
        ashift: 12
        asize: 1991803142144
        is_log: 0
        create_txg: 4
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 28
    name: 'sys'
    state: 1
    txg: 840285
    pool_guid: 874712540419822651
    hostid: 4266313884
    hostname: 'mcp.lf28.net'
    top_guid: 14109631078429946324
    guid: 14109631078429946324
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 14109631078429946324
        path: '/dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b'
        phys_path: '/dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b'
        whole_disk: 1
        metaslab_array: 30
        metaslab_shift: 34
        ashift: 12
        asize: 1991803142144
        is_log: 0
        create_txg: 4
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 28
    name: 'sys'
    state: 1
    txg: 840285
    pool_guid: 874712540419822651
    hostid: 4266313884
    hostname: 'mcp.lf28.net'
    top_guid: 14109631078429946324
    guid: 14109631078429946324
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 14109631078429946324
        path: '/dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b'
        phys_path: '/dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b'
        whole_disk: 1
        metaslab_array: 30
        metaslab_shift: 34
        ashift: 12
        asize: 1991803142144
        is_log: 0
        create_txg: 4
#zdb -e sys
Configuration for import:
        vdev_children: 1
        version: 28
        pool_guid: 874712540419822651
        name: 'sys'
        txg: 840285
        state: 1
        hostid: 4266313884
        hostname: 'mcp.lf28.net'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 874712540419822651
            children[0]:
                type: 'disk'
                id: 0
                guid: 14109631078429946324
                phys_path: '/dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b'
                whole_disk: 1
                metaslab_array: 30
                metaslab_shift: 34
                ashift: 12
                asize: 1991803142144
                is_log: 0
                create_txg: 4
                path: '/dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b'
zdb: can't open 'sys': Input/output error
#zdb -Fe sys
Configuration for import:
        vdev_children: 1
        version: 28
        pool_guid: 874712540419822651
        name: 'sys'
        txg: 840285
        state: 1
        hostid: 4266313884
        hostname: 'mcp.lf28.net'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 874712540419822651
            children[0]:
                type: 'disk'
                id: 0
                guid: 14109631078429946324
                phys_path: '/dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b'
                whole_disk: 1
                metaslab_array: 30
                metaslab_shift: 34
                ashift: 12
                asize: 1991803142144
                is_log: 0
                create_txg: 4
                path: '/dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b'
zdb: can't open 'sys': Input/output error
#zdb -Xe sys
Configuration for import:
        vdev_children: 1
        version: 28
        pool_guid: 874712540419822651
        name: 'sys'
        txg: 840285
        state: 1
        hostid: 4266313884
        hostname: 'mcp.lf28.net'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 874712540419822651
            children[0]:
                type: 'disk'
                id: 0
                guid: 14109631078429946324
                phys_path: '/dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b'
                whole_disk: 1
                metaslab_array: 30
                metaslab_shift: 34
                ashift: 12
                asize: 1991803142144
                is_log: 0
                create_txg: 4
                path: '/dev/gptid/3094758a-13a1-11e2-a7c0-bc5ff437dd0b'
zdb: can't open 'sys': Too many open files
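One thing I still want to try before giving up on zdb: the -X run died
with "Too many open files", so perhaps raising the kernel and
per-process file descriptor limits and retrying would get it further
(untested; the limit values are guesses):

```shell
# Raise the open-file limits (values are guesses), then retry the
# extreme-rewind open of the exported pool.
sysctl kern.maxfiles=262144
sysctl kern.maxfilesperproc=131072
ulimit -n 131072
zdb -Xe sys
```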
Now I need some help or advice to get the pool working again. It does
not seem to be a hardware issue: the disk is only a few months old and
there are no error messages from the kernel.
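What I have not dared to try yet, for fear of making things worse, is a
forced recovery import. If someone can confirm it is safe, I would
attempt it read-only, roughly like this (untested on this pool):

```shell
# Try a rewind import first, read-only so nothing is written to disk:
#   -f  force the import
#   -F  rewind to the last importable txg, discarding the newest transactions
zpool import -f -F -o readonly=on sys

# If that fails, try the extreme rewind (-X together with -F), which
# searches much older txgs and can take a very long time:
zpool import -f -F -X -o readonly=on sys
```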
Any ideas are welcome.
Markus Teichmann
PS: Sorry for my broken English, I'll try my very best...
More information about the freebsd-fs mailing list