PCIe passthrough really that expensive?

Harry Schmalzbauer freebsd at omnilan.de
Wed Jun 7 18:01:16 UTC 2017


Hello,

Some might have noticed my numerous recent posts, mainly in
freebsd-net@, all around the same story: replacing ESXi. So I hope
nobody minds if I ask for help again to fill some gaps in my knowledge
about PCIe passthrough.
As a last resort for special VMs, I have always used dedicated NICs via
PCIe passthrough.
But with bhyve (besides other strange side effects I haven't tracked
down yet), I don't understand the results I'm getting with bhyve-passthru.

Simple test: copy an ISO image from an NFSv4 mount over 1GbE (to /dev/null).
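
For anyone who wants to reproduce it, something along these lines is
enough; server, export path and file name below are placeholders, not
my real setup:

    mount -t nfs -o nfsv4 nfsserver:/export /mnt
    dd if=/mnt/some.iso of=/dev/null bs=1m

run once on the host and once inside the guest.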

Host, using if_em(4) (Hartwell): 4-8k irqs/s (8k @ MTU 1500), system
~99-100% idle.
Passing that same Hartwell device through to the guest, which runs the
same FreeBSD version as the host, I see 2x8k irqs/s regardless of MTU,
and only 80% idle, while almost all of the busy cycles are spent in
Sys (vmm).
Running the same guest with if_bridge(4)+vtnet(4) or vale(4)+vtnet(4)
delivers identical results: about 80% of attainable throughput and only
80% idle cycles.
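
(For reference, those figures can be read with stock tools alone, e.g.:

    systat -vmstat 1    # live interrupts/s per device and CPU idle
    top -SH             # where the non-idle cycles go

nothing more elaborate than that.)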

So interrupts triggered by PCI devices that are controlled via
bhyve-passthru are as expensive as interrupts triggered by emulated
devices?
I thought I'd avoid those expensive VM exits by using the passthru path.
Was I completely wrong?
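
If it helps with diagnosing, the VM exit counters that bhyvectl reports
should show which exit reasons dominate during the copy, along the lines
of (guest name is a placeholder):

    bhyvectl --vm=guestname --get-stats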

I have never done authoritative measurements on ESXi, but I remember
that VMDirectPath brought a significant saving, big enough that I never
felt the need to measure it. Is there an implementation difference?
Some kind of intermediate interrupt moderation, maybe?
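
Regarding moderation: at least the em(4) interrupt throttling settings
of host and guest could be compared, e.g. (unit number is a placeholder):

    sysctl hw.em
    sysctl dev.em.0

If I remember correctly, em(4) throttles to about 8000 ints/s by
default, which would match the 8k figures above.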

Thanks for any hints/links,

-harry

