ZFS Kernel Panic on 10.0-RELEASE
Mike Carlson
mike at bayphoto.com
Tue Jun 3 00:37:38 UTC 2014
On 6/2/2014 5:29 PM, Steven Hartland wrote:
>
> ----- Original Message -----
> From: "Mike Carlson" <mike at bayphoto.com>
> To: "Steven Hartland" <killing at multiplay.co.uk>; <freebsd-fs at freebsd.org>
> Sent: Monday, June 02, 2014 11:57 PM
> Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
>
>
>> On 6/2/2014 2:15 PM, Steven Hartland wrote:
>>> ----- Original Message -----
>>> From: "Mike Carlson" <mike at bayphoto.com>
>>>
>>>>> That's the line I gathered it was on, but now I need to know what
>>>>> the value of vd is, so what you need to do is:
>>>>> print vd
>>>>>
>>>>> If that's valid then:
>>>>> print *vd
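>>>>>
>>>>> If the optimiser has hidden it, these can sometimes still recover
>>>>> the value from the same kgdb frame:
>>>>> info args
>>>>> info locals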
>>>>>
>>>> It reports:
>>>>
>>>> (kgdb) print *vd
>>>> No symbol "vd" in current context.
>>>
>>> Damn optimiser :(
>>>
>>>> Should I rebuild the kernel with additional options?
>>>
>>> Likely won't help, as a kernel with zero optimisations tends to
>>> fail to build in my experience :(
>>>
>>> Can you try applying the attached patch to your src, e.g.:
>>> cd /usr/src
>>> patch < zfs-dsize-dva-check.patch
>>>
>>> Then rebuild, install the kernel, and reproduce the issue again.
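>>>
>>> For reference, a typical rebuild sequence looks like this (a sketch
>>> assuming the stock GENERIC kernel config; substitute your own
>>> KERNCONF if you run a custom kernel):
>>>
>>> cd /usr/src
>>> make buildkernel KERNCONF=GENERIC    # rebuild with the patch applied
>>> make installkernel KERNCONF=GENERIC  # install the patched kernel
>>> shutdown -r now                      # reboot into it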
>>>
>>> Hopefully it will provide some more information on the cause, but
>>> I suspect you might be seeing the effects of some corruption.
>>
>> Well, after building the kernel with your patch, installing it, and
>> booting from it, the system no longer panics.
>>
>> It reports this when I mount the filesystem:
>>
>> Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648
>> Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648
>> Solaris: WARNING: dva_get_dsize_sync(): bad DVA 131241:2147483648
>>
>> Here are the results; I can now mount the file system!
>>
>> root@working-1:~ # zfs set canmount=on zroot/data/working
>> root@working-1:~ # zfs mount zroot/data/working
>> root@working-1:~ # df
>> Filesystem                 1K-blocks       Used      Avail Capacity  Mounted on
>> zroot                     2677363378    1207060 2676156318     0%    /
>> devfs                              1          1          0   100%    /dev
>> /dev/mfid10p1              253911544    2827824  230770800     1%    /dump
>> zroot/home                2676156506        188 2676156318     0%    /home
>> zroot/data                2676156389         71 2676156318     0%    /mnt/data
>> zroot/usr/ports/distfiles 2676246609      90291 2676156318     0%    /mnt/usr/ports/distfiles
>> zroot/usr/ports/packages  2676158702       2384 2676156318     0%    /mnt/usr/ports/packages
>> zroot/tmp                 2676156812        493 2676156318     0%    /tmp
>> zroot/usr                 2679746045    3589727 2676156318     0%    /usr
>> zroot/usr/ports           2676986896     830578 2676156318     0%    /usr/ports
>> zroot/usr/src             2676643553     487234 2676156318     0%    /usr/src
>> zroot/var                 2676650671     494353 2676156318     0%    /var
>> zroot/var/crash           2676156388         69 2676156318     0%    /var/crash
>> zroot/var/db              2677521200    1364882 2676156318     0%    /var/db
>> zroot/var/db/pkg          2676198058      41740 2676156318     0%    /var/db/pkg
>> zroot/var/empty           2676156387         68 2676156318     0%    /var/empty
>> zroot/var/log             2676168522      12203 2676156318     0%    /var/log
>> zroot/var/mail            2676157043        725 2676156318     0%    /var/mail
>> zroot/var/run             2676156508        190 2676156318     0%    /var/run
>> zroot/var/tmp             2676156389         71 2676156318     0%    /var/tmp
>> zroot/data/working        7664687468 4988531149 2676156318    65%    /mnt/data/working
>> root@working-1:~ # ls /mnt/data/working/
>> DONE_ORDERS          DP2_CMD              NEW_MULTI_TESTING    PROCESS
>> RECYCLER             XML_NOTIFICATIONS    XML_REPORTS
>
> That does indeed seem to indicate some on-disk corruption: the bad
> DVA names vdev 131241, but your pool has only a single top-level
> vdev, so the block pointer itself looks damaged.
>
> There are a number of cases in the code which have a similar check,
> but I'm afraid I don't know the implications of the corruption you're
> seeing; others may.
>
> The attached updated patch will enforce the safe panic in this case
> unless the sysctl vfs.zfs.recover is set to 1 (which can also now be
> done on the fly).
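>
> For example, once the patched kernel is booted (vfs.zfs.recover is a
> standard ZFS tunable; before this change it could only be set at boot
> via /boot/loader.conf):
>
> sysctl vfs.zfs.recover=1   # warn and continue instead of panicking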
>
> I'd recommend backing up the data off the pool and restoring it
> elsewhere.
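>
> One way to do that is a recursive snapshot streamed with zfs
> send/recv (a sketch; the snapshot name "evac" and the destination
> "backuppool" are placeholders for whatever you have available):
>
> zfs snapshot -r zroot/data/working@evac
> zfs send -R zroot/data/working@evac | zfs recv -F backuppool/working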
>
> It would be interesting to see the output of the following command
> on your pool:
> zdb -uuumdC <pool>
>
> Regards
> Steve
I'm applying that patch and rebuilding the kernel again.
Here is the output from zdb -uuumdC:
zroot:
    version: 28
    name: 'zroot'
    state: 0
    txg: 13
    pool_guid: 9132288035431788388
    hostname: 'amnesia.discdrive.bayphoto.com'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 9132288035431788388
        children[0]:
            type: 'raidz'
            id: 0
            guid: 15520162542638044402
            nparity: 2
            metaslab_array: 31
            metaslab_shift: 36
            ashift: 9
            asize: 9894744555520
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 4289437176706222104
                path: '/dev/gpt/disk0'
                phys_path: '/dev/gpt/disk0'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 5369387862706621015
                path: '/dev/gpt/disk1'
                phys_path: '/dev/gpt/disk1'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 456749962069636782
                path: '/dev/gpt/disk2'
                phys_path: '/dev/gpt/disk2'
                whole_disk: 1
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 3809413300177228462
                path: '/dev/gpt/disk3'
                phys_path: '/dev/gpt/disk3'
                whole_disk: 1
                create_txg: 4
            children[4]:
                type: 'disk'
                id: 4
                guid: 4978694931676882497
                path: '/dev/gpt/disk4'
                phys_path: '/dev/gpt/disk4'
                whole_disk: 1
                create_txg: 4
            children[5]:
                type: 'disk'
                id: 5
                guid: 17831739822150458220
                path: '/dev/gpt/disk5'
                phys_path: '/dev/gpt/disk5'
                whole_disk: 1
                create_txg: 4
            children[6]:
                type: 'disk'
                id: 6
                guid: 1286918567594965543
                path: '/dev/gpt/disk6'
                phys_path: '/dev/gpt/disk6'
                whole_disk: 1
                create_txg: 4
            children[7]:
                type: 'disk'
                id: 7
                guid: 7958718879588658810
                path: '/dev/gpt/disk7'
                phys_path: '/dev/gpt/disk7'
                whole_disk: 1
                create_txg: 4
            children[8]:
                type: 'disk'
                id: 8
                guid: 18392960683862755998
                path: '/dev/gpt/disk8'
                phys_path: '/dev/gpt/disk8'
                whole_disk: 1
                create_txg: 4
            children[9]:
                type: 'disk'
                id: 9
                guid: 13046629036569375198
                path: '/dev/gpt/disk9'
                phys_path: '/dev/gpt/disk9'
                whole_disk: 1
                create_txg: 4
            children[10]:
                type: 'disk'
                id: 10
                guid: 10604061156531251346
                path: '/dev/gpt/disk11'
                phys_path: '/dev/gpt/disk11'
                whole_disk: 1
                create_txg: 4
I find it strange that it says version 28 when the pool was upgraded to
version 5000.
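For what it's worth, the running pool's version can be cross-checked
against what zdb reports (standard commands; a feature-flags pool
normally shows 5000, or '-' from zpool get, rather than 28):

root@working-1:~ # zpool get version zroot
root@working-1:~ # zpool upgrade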