bhyve: centos 7.1 with multiple virtual processors
Andriy Gapon
avg at FreeBSD.org
Tue Jun 23 08:06:08 UTC 2015
On 23/06/2015 10:26, Neel Natu wrote:
> Hi Andriy,
>
> On Mon, Jun 22, 2015 at 11:45 PM, Andriy Gapon <avg at freebsd.org> wrote:
>> On 23/06/2015 05:37, Neel Natu wrote:
>>> Hi Andriy,
>>>
>>> FWIW I can boot up a Centos 7.1 virtual machine with 2 and 4 vcpus
>>> fine on my host with 8 physical cores.
>>>
>>> I have some questions about your setup inline.
>>>
>>> On Mon, Jun 22, 2015 at 4:14 AM, Andriy Gapon <avg at freebsd.org> wrote:
>>>>
>>>> If I run a CentOS 7.1 VM with more than one vCPU, more often than not it
>>>> hangs on startup and bhyve starts spinning.
>>>>
>>>> The following are the last messages seen in the VM:
>>>>
>>>> Switching to clocksource hpet
>>>> ------------[ cut here ]------------
>>>> WARNING: at kernel/time/clockevents.c:239 clockevents_program_event+0xdb/0xf0()
>>>> Modules linked in:
>>>> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.10.0-229.4.2.el7.x86_64 #1
>>>> Hardware name: BHYVE, BIOS 1.00 03/14/2014
>>>> 0000000000000000 00000000cab5bdb6 ffff88003fc03e08 ffffffff81604eaa
>>>> ffff88003fc03e40 ffffffff8106e34b 80000000000f423f 80000000000f423f
>>>> ffffffff81915440 0000000000000000 0000000000000000 ffff88003fc03e50
>>>> Call Trace:
>>>> <IRQ> [<ffffffff81604eaa>] dump_stack+0x19/0x1b
>>>> [<ffffffff8106e34b>] warn_slowpath_common+0x6b/0xb0
>>>> [<ffffffff8106e49a>] warn_slowpath_null+0x1a/0x20
>>>> [<ffffffff810ce6eb>] clockevents_program_event+0xdb/0xf0
>>>> [<ffffffff810cf211>] tick_handle_periodic_broadcast+0x41/0x50
>>>> [<ffffffff81016525>] timer_interrupt+0x15/0x20
>>>> [<ffffffff8110b5ee>] handle_irq_event_percpu+0x3e/0x1e0
>>>> [<ffffffff8110b7cd>] handle_irq_event+0x3d/0x60
>>>> [<ffffffff8110e467>] handle_edge_irq+0x77/0x130
>>>> [<ffffffff81015cff>] handle_irq+0xbf/0x150
>>>> [<ffffffff81077df7>] ? irq_enter+0x17/0xa0
>>>> [<ffffffff816172af>] do_IRQ+0x4f/0xf0
>>>> [<ffffffff8160c4ad>] common_interrupt+0x6d/0x6d
>>>> <EOI> [<ffffffff8126e359>] ? selinux_inode_alloc_security+0x59/0xa0
>>>> [<ffffffff811de58f>] ? __d_instantiate+0xbf/0x100
>>>> [<ffffffff811de56f>] ? __d_instantiate+0x9f/0x100
>>>> [<ffffffff811de60d>] d_instantiate+0x3d/0x70
>>>> [<ffffffff8124d748>] debugfs_mknod.isra.5.part.6.constprop.15+0x98/0x130
>>>> [<ffffffff8124da82>] __create_file+0x1c2/0x2c0
>>>> [<ffffffff81a6c6bf>] ? set_graph_function+0x1f/0x1f
>>>> [<ffffffff8124dbcb>] debugfs_create_dir+0x1b/0x20
>>>> [<ffffffff8112c1ce>] tracing_init_dentry_tr+0x7e/0x90
>>>> [<ffffffff8112c250>] tracing_init_dentry+0x10/0x20
>>>> [<ffffffff81a6c6d2>] ftrace_init_debugfs+0x13/0x1fd
>>>> [<ffffffff81a6c6bf>] ? set_graph_function+0x1f/0x1f
>>>> [<ffffffff810020e8>] do_one_initcall+0xb8/0x230
>>>> [<ffffffff81a45203>] kernel_init_freeable+0x18b/0x22a
>>>> [<ffffffff81a449db>] ? initcall_blacklist+0xb0/0xb0
>>>> [<ffffffff815f33f0>] ? rest_init+0x80/0x80
>>>> [<ffffffff815f33fe>] kernel_init+0xe/0xf0
>>>> [<ffffffff81614d3c>] ret_from_fork+0x7c/0xb0
>>>> [<ffffffff815f33f0>] ? rest_init+0x80/0x80
>>>> ---[ end trace d5caa1cab8e7e98d ]---
>>>>
>>>
>>> A few questions to narrow this down:
>>> - Is the host very busy when the VM is started (or what is the host
>>> doing when this happened)?
>>
>> The host typically is not heavily loaded. There is an X server running and
>> some applications. I'd imagine that those could cause some additional
>> latency, but not CPU starvation.
>>
>
> Yup, I agree.
>
> Does this ever happen with a single vcpu guest?
I have never seen the problem with a single vCPU so far.
Also, I have never had that problem with FreeBSD guests.
> The other mystery is the NMIs the host is receiving. I (re)verified to
> make sure that bhyve/vmm.ko do not assert NMIs so it has to be
> something else on the host that's doing it ...
But the correlation with multi-CPU non-FreeBSD guests seems significant.
P.S. Meanwhile, I found this old-ish thread that seems to describe exactly the
problem I am seeing, but on real hardware:
http://thread.gmane.org/gmane.linux.kernel/1483297
--
Andriy Gapon
More information about the freebsd-virtualization
mailing list