zfs_unlinked_drain "forever"?
Peter Eriksson
pen at lysator.liu.se
Thu Oct 3 08:51:14 UTC 2019
Weee.. I can report that _that_ “zfs mount” of that filesystem took ~18 hours. Now it is continuing with the rest…
# df -h | wc -l
11563
(It’s currently mounting about 1 filesystem per second, so at that pace it’ll be done in… 12 hours.)
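(For reference, the arithmetic behind that estimate - using the 58295 total from the quoted mail below, the ~11563 mounted so far, and ~1 mount per second; sh(1) integer division rounds down:)
# echo $(( (58295 - 11563) / 3600 ))
12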
- Peter
> On 3 Oct 2019, at 09:51, Peter Eriksson <pen at lysator.liu.se> wrote:
>
> Just upgraded and rebooted one of our servers from 11.2 to 11.3-RELEASE-p3, and now it seems “stuck” mounting the filesystems…
>
> “stuck” as in it _is_ doing something:
>
>> # zpool iostat 10
>>               capacity     operations    bandwidth
>> pool        alloc   free   read  write   read  write
>> ----------  -----  -----  -----  -----  -----  -----
>> DATA2        351T  11.3T      5    517  25.3K  4.97M
>> DATA3       71.4T  73.6T      0     62  1.84K   863K
>> DATA4        115T  12.2T      0      0  1.38K    152
>> zroot       34.2G  78.8G      0     61  5.68K   461K
>> ----------  -----  -----  -----  -----  -----  -----
>> DATA2        351T  11.3T      0    272      0  2.46M
>> DATA3       71.4T  73.6T      0      0      0      0
>> DATA4        115T  12.2T      0      0      0      0
>> zroot       34.2G  78.8G      0     47      0   200K
>> ----------  -----  -----  -----  -----  -----  -----
>
> It’s been doing these 272-300 write IOPS on pool DATA2 since around 15:00 yesterday, and so far it has mounted 3781 of the 58295 filesystems...
>
>
> A “procstat -kka” shows one “zfs mount” process currently doing:
>
>> 26508 102901 zfs - mi_switch+0xeb sleepq_wait+0x2c _cv_wait+0x16e txg_wait_synced+0xa5
>> dmu_tx_assign+0x48 zfs_rmnode+0x122 zfs_freebsd_reclaim+0x4e VOP_RECLAIM_APV+0x80 vgonel+0x213
>> vrecycle+0x46 zfs_freebsd_inactive+0xd VOP_INACTIVE_APV+0x80 vinactive+0xf0 vputx+0x2c3
>> zfs_unlinked_drain+0x1b8 zfsvfs_setup+0x5e zfs_mount+0x623 vfs_domount+0x573
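>
> (That stack is zfs_unlinked_drain walking the filesystem’s unlinked set - files that were removed while still held open - freeing one node at a time and, per the txg_wait_synced frame, waiting on transaction group syncs along the way, which is why a single mount can take hours. Assuming zdb behaves here the way it does on other platforms, the size of the backlog can be estimated by dumping the delete queue; its object number is listed in the master node (object 1), so the “3” in the second command is only a made-up example:)
>
>> # zdb -dddd DATA2/filur04.it.liu.se/DATA/staff/nikca89 1 | grep -i DELETE_QUEUE
>> # zdb -dddd DATA2/filur04.it.liu.se/DATA/staff/nikca89 3 | grep -i entries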
>
>
>> # ps auxwww|egrep zfs
>> root     17  0.0  0.0       0   2864  -   DL  14:53  7:03.54 [zfskern]
>> root    960  0.0  0.0  104716  31900  -   Is  14:55  0:00.05 /usr/sbin/mountd -r -S /etc/exports /etc/zfs/exports
>> root   4390  0.0  0.0    9040   5872  -   Is  14:57  0:00.02 /usr/sbin/zfsd
>> root  20330  0.0  0.0   22652  18388  -   S   15:07  0:48.90 perl /usr/local/bin/parallel --will-cite -j 40 zfs mount {}
>> root  26508  0.0  0.0    7804   5316  -   D   15:09  0:08.58 /sbin/zfs mount DATA2/filur04.it.liu.se/DATA/staff/nikca89
>> root    101  0.0  0.0   20148  14860  u1- I   14:55  0:00.88 /bin/bash /sbin/zfs-speedmount
>> root    770  0.0  0.0    6732   2700  0   S+  09:45  0:00.00 egrep zfs
>
> (“zfs-speedmount” is a locally developed script that runs multiple “zfs mount” commands in parallel - normally this speeds up mounting on this server a lot, since plain “zfs mount” didn’t use to mount filesystems in parallel and a reboot used to take multiple hours. A rough sketch of the idea follows below.)
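>
> (Not the actual script, just a minimal reconstruction of the idea around the “parallel” invocation visible in the ps output above:)
>
>> #!/bin/sh
>> # Mount every ZFS filesystem that should be mounted but isn't yet,
>> # 40 at a time. Sorting by name feeds parents in before children,
>> # though with -j 40 a child can still race its parent.
>> zfs list -H -o name,canmount,mounted -s name | \
>>     awk '$2 == "on" && $3 == "no" { print $1 }' | \
>>     parallel --will-cite -j 40 zfs mount {}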
>
>
> A Google search turned up zfs_unlinked_drain improvements in Nexenta and ZFS on Linux:
>
>> https://github.com/zfsonlinux/zfs/pull/8142/commits
>
> Does anyone know if this (or a similar) fix is in FreeBSD ZFS (11.3-RELEASE-p3)?
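>
> (One way to check on a machine with the source tree installed: in 11.x the ZFS code lives under sys/cddl/contrib/opensolaris, and if the asynchronous drain from that pull request were present, zfs_unlinked_drain would be dispatched to a taskq instead of being called synchronously from zfsvfs_setup as in the stack above:)
>
>> # grep -n zfs_unlinked_drain \
>>     /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c \
>>     /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c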
>
>
> - Peter