PCIe passthrough really that expensive?

Harry Schmalzbauer freebsd at omnilan.de
Tue Jun 13 16:30:38 UTC 2017


Regarding Anish's message from 13.06.2017 06:13 (localtime):
> Hi Harry,
>>Any hints highly appreciated!
…
> 
> Now use cpuset to route IRQ#265 to say core 0
> 
> $ cpuset -l 0 -x 265
> 
> Again use cpuset to force the VM [PID 1222] to run on all cores except #0
> 
> root@svmhost:~ # ps
> 
>  PID TT  STAT    TIME COMMAND
> 
> ....
> 
> 1222  1  I+   5:59.45 bhyve: vm1 (bhyve)
> 
> 
> VM can run on all cores except #0.
> 
> $ cpuset -l 1-3 -p 1222
> 
> 
> You can monitor guest due to interrupts using
> 
> root@svmhost:~ # bhyvectl --get-stats --vm=<vm name> --cpu=<vcpu> |
> grep external
> 
> vm exits due to external interrupt      27273
> 
> root@svmhost:~ #

Thank you very much for that detailed explanation.  I didn't know
that cpuset(1) could also pin IRQs (handlers?) to specific CPUs.
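
For my own reference, the whole sequence boils down to something like
this (IRQ number, PID and guest name taken from your example; the
pgrep step is just my way of finding the PID, so a rough sketch only):

# route the passthrough device's IRQ to core 0
cpuset -l 0 -x 265
# find the bhyve process for the guest and keep it off core 0
pgrep -f "bhyve: vm1"        # -> 1222 in your example
cpuset -l 1-3 -p 1222
# watch how many exits external interrupts cause on a given vCPU
bhyvectl --get-stats --vm=vm1 --cpu=0 | grep external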

In my case, I couldn't measure a noticeable difference.
Since I have hyperthreading enabled on a single-socket quad core, I
pinned the bhyve PID (2 vCPUs) to CPU2+3 and, for testing, copied the
same 8GB file over NFS two times, while having the ppt device's IRQ
handler pinned to CPU1, CPU4, CPU3 and CPU2.  So two runs with host CPUs
different from the ones the guest uses and two runs with the same ones.
I couldn't see any load or performance difference, and the 'vm exits due
to external interrupt' count grew similarly.

To give some numbers: the first vCPU had about 40k "vm exits due to
external interrupt" per 8GB transfer, the other vCPU ~160k "vm exits
due to external interrupt".

As mentioned, different host-CPU pinning didn't noticeably influence that.
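
In case anybody wants to repeat it, each run looked roughly like this
from the host side (IRQ number and guest name as in the example above;
the 8GB NFS copy itself is done in between, through the passed-through
device, so this is only a sketch of my procedure):

for cpu in 1 4 3 2; do
    cpuset -l $cpu -x 265                                   # move the ppt IRQ handler
    bhyvectl --get-stats --vm=vm1 --cpu=0 | grep external   # exit count before
    echo "now copy the 8GB file over NFS, then press enter"
    read dummy
    bhyvectl --get-stats --vm=vm1 --cpu=0 | grep external   # exit count after
    bhyvectl --get-stats --vm=vm1 --cpu=1 | grep external
done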

Thanks for this lesson!

-harry

