[Bug 266014] panic: corrupted zfs dataset (zfs issue)
Date: Wed, 26 Oct 2022 00:18:08 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=266014

--- Comment #7 from Duncan <dpy@pobox.com> ---

(In reply to Graham Perrin from comment #6)

The replication, I believe, works fine as long as one doesn't then try to mount the dataset. I will check this properly and perhaps try setting up another machine to run the panics on; it is a bit of a pain to keep knocking over my main server. I should get back to this within the week.

I did get a different type of crash dump, I believe from the mount, and it is different, i.e.:

Unread portion of the kernel message buffer:
panic: VERIFY3(sa.sa_magic == SA_MAGIC) failed (1122422741 == 3100762)
cpuid = 5
time = 1666406111
KDB: stack backtrace:
#0 0xffffffff80c694a5 at kdb_backtrace+0x65
#1 0xffffffff80c1bb5f at vpanic+0x17f
#2 0xffffffff84ff4f4a at spl_panic+0x3a
#3 0xffffffff851948f8 at zpl_get_file_info+0x1d8
#4 0xffffffff85060388 at dmu_objset_userquota_get_ids+0x298
#5 0xffffffff85073f24 at dnode_setdirty+0x34
#6 0xffffffff8504bd49 at dbuf_dirty+0x9d9
#7 0xffffffff85061fc0 at dmu_objset_space_upgrade+0x40
#8 0xffffffff85060a5f at dmu_objset_id_quota_upgrade_cb+0x14f
#9 0xffffffff85061eaf at dmu_objset_upgrade_task_cb+0x7f
#10 0xffffffff84ff6a0f at taskq_run+0x1f
#11 0xffffffff80c7da81 at taskqueue_run_locked+0x181
#12 0xffffffff80c7ed92 at taskqueue_thread_loop+0xc2
#13 0xffffffff80bd8a9e at fork_exit+0x7e
#14 0xffffffff810885ee at fork_trampoline+0xe
Uptime: 13m13s
(ada0:ahcich1:0:0:0): spin-down
(ada1:ahcich2:0:0:0): spin-down
(ada2:ahcich3:0:0:0): spin-down
(ada3:ahcich4:0:0:0): spin-down
Dumping 13911 out of 130858 MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91%

__curthread () at /usr/src/sys/amd64/include/pcpu_aux.h:55
55              __asm("movq %%gs:%P1,%0" : "=r" (td) : "n" (offsetof(struct pcpu,
(kgdb) #0  __curthread () at /usr/src/sys/amd64/include/pcpu_aux.h:55
#1  doadump (textdump=<optimized out>) at /usr/src/sys/kern/kern_shutdown.c:399
#2  0xffffffff80c1b75c in kern_reboot (howto=260)
    at /usr/src/sys/kern/kern_shutdown.c:487
#3  0xffffffff80c1bbce in vpanic (
    fmt=0xffffffff85250fe8 "VERIFY3(sa.sa_magic == SA_MAGIC) failed (%llu == %llu)\n",
    ap=<optimized out>) at /usr/src/sys/kern/kern_shutdown.c:920
#4  0xffffffff84ff4f4a in spl_panic (file=<optimized out>, func=<optimized out>,
    line=<unavailable>, fmt=<unavailable>)
    at /usr/src/sys/contrib/openzfs/module/os/freebsd/spl/spl_misc.c:107
#5  0xffffffff851948f8 in zpl_get_file_info (bonustype=<optimized out>,
    data=0xfffffe035db250c0, zoi=0xfffffe027e72bc50)
    at /usr/src/sys/contrib/openzfs/module/zfs/zfs_quota.c:89
#6  0xffffffff85060388 in dmu_objset_userquota_get_ids (dn=0xfffff8160ebcf660,
    before=before@entry=1, tx=<optimized out>, tx@entry=0xfffff80ec760a100)
    at /usr/src/sys/contrib/openzfs/module/zfs/dmu_objset.c:2215
#7  0xffffffff85073f24 in dnode_setdirty (dn=0xfffff8160ebcf660,
    tx=0xfffff80ec760a100)
    at /usr/src/sys/contrib/openzfs/module/zfs/dnode.c:1691
#8  0xffffffff8504bd49 in dbuf_dirty (db=0xfffff8160ebd3b90, db@entry=0x0,
    tx=tx@entry=0xfffff8160ebd3b90)
    at /usr/src/sys/contrib/openzfs/module/zfs/dbuf.c:2367
#9  0xffffffff8504c074 in dmu_buf_will_dirty_impl (db_fake=<optimized out>,
    flags=<optimized out>, flags@entry=9, tx=0xfffff8160ebd3b90,
    tx@entry=0xfffff80ec760a100)
    at /usr/src/sys/contrib/openzfs/module/zfs/dbuf.c:2517
#10 0xffffffff8504aea2 in dmu_buf_will_dirty (db_fake=<unavailable>,
    tx=<unavailable>, tx@entry=0xfffff80ec760a100)
    at /usr/src/sys/contrib/openzfs/module/zfs/dbuf.c:2523
#11 0xffffffff85061fc0 in dmu_objset_space_upgrade (os=os@entry=0xfffff80408629800)
    at /usr/src/sys/contrib/openzfs/module/zfs/dmu_objset.c:2328
#12 0xffffffff85060a5f in dmu_objset_id_quota_upgrade_cb (os=0xfffff80408629800)
    at /usr/src/sys/contrib/openzfs/module/zfs/dmu_objset.c:2385
#13 0xffffffff85061eaf in dmu_objset_upgrade_task_cb (data=0xfffff80408629800)
    at /usr/src/sys/contrib/openzfs/module/zfs/dmu_objset.c:1447
#14 0xffffffff84ff6a0f in taskq_run (arg=0xfffff801e5ab5300, pending=<unavailable>)
    at /usr/src/sys/contrib/openzfs/module/os/freebsd/spl/spl_taskq.c:315
#15 0xffffffff80c7da81 in taskqueue_run_locked (queue=queue@entry=0xfffff80116004300)
    at /usr/src/sys/kern/subr_taskqueue.c:477
#16 0xffffffff80c7ed92 in taskqueue_thread_loop (arg=<optimized out>,
    arg@entry=0xfffff801dfb570d0) at /usr/src/sys/kern/subr_taskqueue.c:794
#17 0xffffffff80bd8a9e in fork_exit (callout=0xffffffff80c7ecd0 <taskqueue_thread_loop>,
    arg=0xfffff801dfb570d0, frame=0xfffffe027e72bf40)
    at /usr/src/sys/kern/kern_fork.c:1093
#18 <signal handler called>
#19 mi_startup () at /usr/src/sys/kern/init_main.c:322
#20 0xffffffff80f791d9 in swapper () at /usr/src/sys/vm/vm_swapout.c:755
#21 0xffffffff80385022 in btext () at /usr/src/sys/amd64/amd64/locore.S:80

----------------------

I would say this is a similar but different problem. I had months of replicated copies on two different pools. Because I copied (send/receive) them encrypted and unmounted on the destination, nothing showed up. As soon as I tried to mount them: panic.

Currently I have renamed the original dataset (currently unmounted), but I deleted the backups (they wouldn't mount, although I'm sure I can re-create them). I will do more experimentation when I have a couple of hours spare (within the week).

-- 
You are receiving this mail because:
You are the assignee for the bug.
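[Editorial note] The send/receive-then-mount sequence the reporter describes can be sketched as a dry run. The dataset names (tank/enc, backup/enc) and the snapshot name are placeholders, not taken from the report; the script only prints the commands, since running them requires live pools and, per the report, the mount step panics the machine:

```shell
# Hypothetical reproduction sketch for the sequence described in comment #7.
# SRC/DST are assumed names -- the bug report does not identify the real pools.
set -eu

SRC="tank/enc"      # hypothetical encrypted source dataset
DST="backup/enc"    # hypothetical destination dataset

cat <<EOF
# 1. Raw (encrypted) replication -- reported to complete without error:
zfs snapshot ${SRC}@repro
zfs send -w ${SRC}@repro | zfs receive -u ${DST}

# 2. While the received dataset stays unmounted, no symptom appears.
# 3. Loading the key and mounting it is what reportedly triggers the panic:
zfs mount -l ${DST}
EOF
```

A raw send (-w) keeps the stream encrypted in transit, matching "I copied (send/receive) them encrypted", and receive -u leaves the destination unmounted, matching the state in which the backups showed no problem.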