ZFS Kernel Panic on 10.0-RELEASE
Steven Hartland
killing at multiplay.co.uk
Mon Jun 2 20:44:18 UTC 2014
----- Original Message -----
From: "Mike Carlson" <mike at bayphoto.com>
To: "Steven Hartland" <killing at multiplay.co.uk>; <freebsd-fs at freebsd.org>
Sent: Monday, June 02, 2014 9:15 PM
Subject: Re: ZFS Kernel Panic on 10.0-RELEASE
> On 6/2/2014 1:06 PM, Steven Hartland wrote:
>> I don't have a core.0.txt, I only have:
>>
>> ~/p/z/dump> ls -al
>> total 347690
>> drwxr-xr-x 3 mikec wheel 8 Jun 2 03:25 .
>> drwxr-xr-x 4 mikec wheel 5 Jun 2 10:44 ..
>> drwxrwxr-x 2 mikec operator 2 Jun 2 03:07 .snap
>> -rw-r--r-- 1 mikec wheel 2 Jun 2 03:24 bounds
>> -rw------- 1 mikec wheel 446 Jun 2 03:24 info.0
>> lrwxr-xr-x 1 mikec wheel 6 Jun 2 03:25 info.last -> info.0
>> -rw------- 1 mikec wheel 3469885440 Jun 2 03:25 vmcore.0
>> lrwxr-xr-x 1 mikec wheel 8 Jun 2 03:25 vmcore.last -> vmcore.0
>>
>> But here is the kgdb output (with backtrace):
>>
>> ~/p/z/dump> cat ../kgdb_backtrace.txt
>> <118>root@:/ # zfs set canmount=on zroot/data/working
>> <118>root@:/ # zfs mount zroot/data/working
>>
>>
>> Fatal trap 12: page fault while in kernel mode
>> cpuid = 14; apic id = 22
>> fault virtual address = 0x4a0
>> fault code = supervisor read data, page not present
>> instruction pointer = 0x20:0xffffffff8185a39f
>> stack pointer = 0x28:0xfffffe1834608570
>> frame pointer = 0x28:0xfffffe18346085b0
>> code segment = base 0x0, limit 0xfffff, type 0x1b
>> = DPL 0, pres 1, long 1, def32 0, gran 1
>> processor eflags = interrupt enabled, resume, IOPL = 0
>> current process = 2 (txg_thread_enter)
>> trap number = 12
>> panic: page fault
>> cpuid = 14
>> KDB: stack backtrace:
>> #0 0xffffffff808e7ee0 at kdb_backtrace+0x60
>> #1 0xffffffff808af9c5 at panic+0x155
>> #2 0xffffffff80c8e7b2 at trap_fatal+0x3a2
>> #3 0xffffffff80c8ea89 at trap_pfault+0x2c9
>> #4 0xffffffff80c8e216 at trap+0x5e6
>> #5 0xffffffff80c754b2 at calltrap+0x8
>> #6 0xffffffff8182eb5a at dsl_dataset_block_kill+0x3a
>> #7 0xffffffff8182b967 at dnode_sync+0x237
>> #8 0xffffffff81823fcb at dmu_objset_sync_dnodes+0x2b
>> #9 0xffffffff81823e4d at dmu_objset_sync+0x1ed
>> #10 0xffffffff8183829a at dsl_pool_sync+0xca
>> #11 0xffffffff81853a4e at spa_sync+0x52e
>> #12 0xffffffff8185c925 at txg_sync_thread+0x375
>> #13 0xffffffff80881a9a at fork_exit+0x9a
>> #14 0xffffffff80c759ee at fork_trampoline+0xe
>> Uptime: 26m15s
>> Dumping 3309 out of 98234 MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91%
>>
>> Reading symbols from /boot/kernel/zfs.ko.symbols...done.
>> Loaded symbols for /boot/kernel/zfs.ko.symbols
>> Reading symbols from /boot/kernel/opensolaris.ko.symbols...done.
>> Loaded symbols for /boot/kernel/opensolaris.ko.symbols
>> #0 doadump (textdump=<value optimized out>) at pcpu.h:219
>> 219 __asm("movq %%gs:%1,%0" : "=r" (td)
>> (kgdb) backtrace
>> #0 doadump (textdump=<value optimized out>) at pcpu.h:219
>> #1 0xffffffff808af640 in kern_reboot (howto=260)
>> at /usr/src/sys/kern/kern_shutdown.c:447
>> #2 0xffffffff808afa04 in panic (fmt=<value optimized out>)
>> at /usr/src/sys/kern/kern_shutdown.c:754
>> #3 0xffffffff80c8e7b2 in trap_fatal (frame=<value optimized out>, eva=<value optimized out>)
>> at /usr/src/sys/amd64/amd64/trap.c:882
>> #4 0xffffffff80c8ea89 in trap_pfault (frame=0xfffffe18346084c0, usermode=0)
>> at /usr/src/sys/amd64/amd64/trap.c:699
>> #5 0xffffffff80c8e216 in trap (frame=0xfffffe18346084c0)
>> at /usr/src/sys/amd64/amd64/trap.c:463
>> #6 0xffffffff80c754b2 in calltrap ()
>> at /usr/src/sys/amd64/amd64/exception.S:232
>> #7 0xffffffff8185a39f in bp_get_dsize_sync (spa=0xfffff80041835000, bp=0xfffffe001b8a1780)
>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c:1635
>> #8 0xffffffff8182eb5a in dsl_dataset_block_kill (ds=0xfffff800410fec00, bp=0xfffffe001b8a1780, tx=0xfffff8004faa0600, async=0)
>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_dataset.c:129
>> #9 0xffffffff8182b967 in dnode_sync (dn=0xfffff8004fe626c0, tx=0xfffff8004faa0600)
>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dnode_sync.c:128
>> #10 0xffffffff81823fcb in dmu_objset_sync_dnodes (list=0xfffff80041956b10, newlist=<value optimized out>, tx=<value optimized out>)
>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:945
>> #11 0xffffffff81823e4d in dmu_objset_sync (os=0xfffff80041956800, pio=0xfffff800418c43b0, tx=0xfffff8004faa0600)
>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:1062
>> #12 0xffffffff8183829a in dsl_pool_sync (dp=0xfffff8004183c000, txg=<value optimized out>)
>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c:413
>> #13 0xffffffff81853a4e in spa_sync (spa=0xfffff80041835000, txg=3373534)
>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:6410
>> #14 0xffffffff8185c925 in txg_sync_thread (arg=0xfffff8004183c000)
>> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:515
>> #15 0xffffffff80881a9a in fork_exit (callout=0xffffffff8185c5b0 <txg_sync_thread>, arg=0xfffff8004183c000, frame=0xfffffe1834608ac0)
>> at /usr/src/sys/kern/kern_fork.c:995
>> #16 0xffffffff80c759ee in fork_trampoline ()
>> at /usr/src/sys/amd64/amd64/exception.S:606
>> #17 0x0000000000000000 in ?? ()
>> Current language: auto; currently minimal
>>
>>
>> If anyone wants to help out and check out the vmcore file, email me
>> off the list and I'll provide an S3 URL of the tar'd + xz'd file.
>>
>>
> Output of "frame 7":
>
> (kgdb) frame 7
> #7 0xffffffff8185a39f in bp_get_dsize_sync (spa=0xfffff80041835000, bp=0xfffffe001b8a1780)
> at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa_misc.c:1635
> 1635 dsize = (asize >> SPA_MINBLOCKSHIFT) * vd->vdev_deflate_ratio;
>
> Is that what you were looking for?
That's the line I gathered it was on, but no: what I need to know is the
value of vd. So what you need to do is:

    print vd

If that's valid, then:

    print *vd

Given the panic I'm expecting either garbage or NULL (0x0).
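
For context, the code around spa_misc.c:1635 looks roughly like this in
the stable/10-era sources (a paraphrase from memory, not an exact copy;
dva_get_dsize_sync() is static, so it has presumably been inlined into
bp_get_dsize_sync() in frame 7):

static uint64_t
dva_get_dsize_sync(spa_t *spa, const dva_t *dva)
{
        uint64_t asize = DVA_GET_ASIZE(dva);
        uint64_t dsize = asize;

        if (asize != 0 && spa->spa_deflate) {
                /*
                 * If the block pointer is damaged, DVA_GET_VDEV() can
                 * yield a vdev id with no matching top-level vdev, in
                 * which case vdev_lookup_top() returns NULL and the
                 * line below dereferences a null vd.
                 */
                vdev_t *vd = vdev_lookup_top(spa, DVA_GET_VDEV(dva));
                dsize = (asize >> SPA_MINBLOCKSHIFT) * vd->vdev_deflate_ratio;
        }

        return (dsize);
}

So if print vd comes back NULL, that would point at a corrupt block
pointer naming a non-existent vdev rather than a bug in the vdev code
itself.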
> I'm not familiar with this process, so I hope this doesn't become too
> painful pulling the details out.
No problem, everyone has to learn some time ;-)
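
As an aside, the fault address is itself a useful clue: reading a member
through a NULL struct pointer faults at an address equal to that member's
offset within the struct, so a fault at 0x4a0 is consistent with vd being
NULL and vdev_deflate_ratio sitting 0x4a0 bytes into the vdev structure
(the exact offset depends on the kernel build). A tiny userland sketch,
using a made-up layout rather than the real vdev_t, shows the arithmetic:

#include <stddef.h>
#include <stdio.h>

/*
 * Hypothetical stand-in for vdev_t; pad[] fakes the members that
 * would precede vdev_deflate_ratio in the real struct.
 */
struct fake_vdev {
        char pad[0x4a0];
        unsigned long vdev_deflate_ratio;
};

int
main(void)
{
        struct fake_vdev *vd = NULL;

        /*
         * Computing the member's address doesn't load through vd;
         * the kernel panicked because it actually read from it.
         */
        printf("offsetof = %#zx\n",
            offsetof(struct fake_vdev, vdev_deflate_ratio));
        printf("&vd->vdev_deflate_ratio = %p\n",
            (void *)&vd->vdev_deflate_ratio);
        return (0);
}

Both lines should print 0x4a0 on a typical amd64 compiler.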
Regards
Steve