[Bug 231117] I/O lockups inside bhyve vms
bugzilla-noreply at freebsd.org
Thu Mar 14 16:53:32 UTC 2019
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=231117
roel at qsp.nl changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |roel at qsp.nl
--- Comment #18 from roel at qsp.nl ---
Just had this occur again on a VM running under bhyve on 12.0-STABLE, checked
out and compiled 6 days ago (r344917). The VM host is running the exact same
kernel. The modifications in zfs_znode.c are present, but we still hit the issue
after the system had been running for a couple of days.
The VM has arc_max > 4 GB (so the workaround described by Kristian doesn't work
for us):
vfs.zfs.arc_min: 903779840
vfs.zfs.arc_max: 7230238720
Hypervisor:
vfs.zfs.arc_min: 8216929792
vfs.zfs.arc_max: 65735438336
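For reference, if Kristian's workaround simply means capping the ARC below 4 GB
(an assumption on my part, going only by my note above), it would be applied in
the guest via /boot/loader.conf with a value in bytes, e.g.:

# /boot/loader.conf (guest) -- hypothetical 3 GiB cap
vfs.zfs.arc_max="3221225472"

As noted, that isn't applicable here since this guest runs with arc_max well
above 4 GB.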
Output of procstat -kk on the bhyve process:
root at cloud02:/home/roel # procstat -kk 18178
PID TID COMM TDNAME KSTACK
18178 101261 bhyve mevent mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a kqueue_kevent+0x297
kern_kevent+0xb5 kern_kevent_generic+0x70 sys_kevent+0x61 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101731 bhyve vtnet-2:0 tx mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101732 bhyve blk-3:0:0-0 mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101733 bhyve blk-3:0:0-1 mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101734 bhyve blk-3:0:0-2 mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101735 bhyve blk-3:0:0-3 mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101736 bhyve blk-3:0:0-4 mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101737 bhyve blk-3:0:0-5 mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101738 bhyve blk-3:0:0-6 mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101739 bhyve blk-3:0:0-7 mi_switch+0xe2
sleepq_catch_signals+0x405 sleepq_wait_sig+0xf _sleep+0x23a umtxq_sleep+0x133
do_wait+0x427 __umtx_op_wait_uint_private+0x53 amd64_syscall+0x34d
fast_syscall_common+0x101
18178 101740 bhyve vcpu 0 <running>
18178 101741 bhyve vcpu 1 mi_switch+0xe2
sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed
devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f
kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
18178 101742 bhyve vcpu 2 mi_switch+0xe2
sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed
devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f
kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
18178 101743 bhyve vcpu 3 mi_switch+0xe2
sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed
devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f
kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
18178 101744 bhyve vcpu 4 mi_switch+0xe2
sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed
devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f
kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
18178 101745 bhyve vcpu 5 mi_switch+0xe2
sleepq_timedwait+0x2f msleep_spin_sbt+0x138 vm_run+0x502 vmmdev_ioctl+0xbed
devfs_ioctl+0xad VOP_IOCTL_APV+0x7c vn_ioctl+0x161 devfs_ioctl_f+0x1f
kern_ioctl+0x26d sys_ioctl+0x15d amd64_syscall+0x34d fast_syscall_common+0x101
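In case it is useful for whoever picks this up, this is the kind of loop I would
run on the host to check whether the blk worker threads ever leave the umtx wait
while guest I/O is stalled (plain sh; only the PID above is assumed):

#!/bin/sh
# Snapshot the bhyve kernel stacks every 30 seconds so the traces can be
# compared over time; nothing here is bhyve-specific beyond the PID.
PID=18178
while true; do
    date
    procstat -kk "$PID" | grep -E 'blk-|vcpu'
    sleep 30
done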
--
You are receiving this mail because:
You are the assignee for the bug.