virtualbox I/O 3 times slower than KVM?

Rusty Nejdl rnejdl at ringofsaturn.com
Tue May 3 02:53:33 UTC 2011


On Mon, 2 May 2011 21:39:38 -0500, Adam Vande More wrote:
> On Mon, May 2, 2011 at 4:30 PM, Ted Mittelstaedt
> <tedm at mittelstaedt.us> wrote:
>
>> that's sync within the VM.  Where is the bottleneck taking place?  If
>> the bottleneck is hypervisor to host, then the guest-to-VM write may
>> write all its data to a memory buffer in the hypervisor that is then
>> slower-writing it to the filesystem.  In that case, killing the guest
>> without killing the VM manager will allow the buffer to finish
>> emptying, since the hypervisor isn't actually being shut down.
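>>
>> A quick way to test that theory (the backing file path here is just an
>> example) would be to shut the guest down and then watch whether the
>> host keeps flushing writes for a while afterwards:
>>
>>     # extended per-device stats on the host, one-second interval
>>     iostat -x 1
>>     # or watch the backing file's size/mtime settle
>>     stat -f '%m %z' ~/VirtualBox\ VMs/vm0/vm0.vdi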
>
>
> No, the bottleneck is the emulated hardware inside the VM process
> container.  This is easy to observe: just start a bound process in the
> VM and watch top on the host side.  Also, the hypervisor uses the
> native host I/O driver, so there's no reason for it to be slow.  Since
> it's the emulated NIC which is the bottleneck, there is nothing left to
> issue the write.  Further empirical evidence for this can be seen by
> watching gstat on a VM running with md- or ZVOL-backed storage.  I
> already use ZVOLs for this, so it was pretty easy to confirm that no IO
> occurs when the VM is paused or shut down.
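>
> For example, something like this from the host (the ZVOL and VM names
> are only placeholders, adjust for your setup):
>
>     # watch physical I/O on the ZVOL backing the guest disk
>     gstat -f 'zvol/tank/vm0'
>     # pause the guest and confirm the ZVOL goes idle in gstat
>     VBoxManage controlvm vm0 pause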
>
>> Is his app ever going to face the extremely bad scenario, though?
>
> The point is that it should be relatively easy to induce the patterns
> you expect to see in production.  If you can't, I would consider that a
> problem.  Testing out theories (performance-based or otherwise) on a
> production system is not a good way to keep the continued faith of your
> clients when the production system is a mission-critical one.  Maybe
> throwing more hardware at a problem is the first line of defense for
> some companies; unfortunately, I don't work for them.  Are they
> hiring? ;)  I understand the logic of such an approach and have even
> argued for it occasionally.  Unfortunately, payroll is already in the
> budget; extra hardware is not, even if it would be a net savings.

I'm going to ask a stupid question... are you using bridging for your 
emulated NIC?  At least, that's how I read what you wrote: that you are 
starved on the NIC side.  I saw a vast performance increase after 
switching to bridging.
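
In case it helps, switching an existing VM over to a bridged adapter 
can be done from the command line, something like this (the VM name 
and host interface are just examples, adjust for your setup):

    VBoxManage modifyvm "vm0" --nic1 bridged --bridgeadapter1 em0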

Sincerely,
Rusty Nejdl

