VMX exit reason=33 and general userboot.so questions

Fabian Freyer fabian.freyer at physik.tu-berlin.de
Tue Feb 20 23:09:17 UTC 2018


Hi!

I’m currently writing a userboot.so-compatible boot loader [1] and am slowly getting to the point where I want to do some testing by running test kernels in bhyve.

At the moment, I’m getting the following error after loading my kernel:

---8< snip
vm exit[0]
        reason          VMX
        rip             0x000000000010000c
        inst_length     0
        status          0
        exit_reason     33
        qualification   0x0000000000000000
        inst_type               0
        inst_error              0
[1]    [PID] abort      bhyve -H -P -s 0,hostbridge -s 31,lpc -c 1 -m 128M testing
--->8 snap
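
If I am reading the SDM’s exit-reason table correctly, basic exit reason 33 is "VM-entry failure due to invalid guest state", and the raw exit_reason value 0x80000021 that shows up in the diff further down is the same reason with the VM-entry-failure bit (bit 31) set. A quick decode sketch (in C), mostly to convince myself I am parsing the value right:

---8< snip
#include <stdint.h>
#include <stdio.h>

/*
 * Decode a raw VMX exit reason as reported in the VMCS exit-reason field:
 * bits 15:0 hold the basic exit reason, bit 31 is set when the exit was
 * caused by a VM-entry failure.
 */
static void
decode_exit_reason(uint32_t raw)
{
	uint16_t basic = raw & 0xffff;
	int entry_failure = (raw >> 31) & 1;

	printf("raw 0x%08x: basic reason %u%s\n", raw, basic,
	    entry_failure ? " (VM-entry failure)" : "");
}

int
main(void)
{
	decode_exit_reason(0x80000021);	/* -> basic reason 33, VM-entry failure */
	return (0);
}
--->8 snap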

This is the register state I’m setting up before running bhyve:

---8< snip
efer[0]         0x0000000000000000
cr0[0]          0x0000000000000021
cr3[0]          0x0000000000000000
cr4[0]          0x0000000000000000
dr7[0]          0x0000000000000000
rsp[0]          0x0000000000000000
rip[0]          0x000000000010000c
rax[0]          0x000000002badb002
rbx[0]          0x0000000000100952
rcx[0]          0x0000000000000000
rdx[0]          0x0000000000000000
rsi[0]          0x0000000000000000
rdi[0]          0x0000000000000000
rbp[0]          0x0000000000000000
r8[0]           0x0000000000000000
r9[0]           0x0000000000000000
r10[0]          0x0000000000000000
r11[0]          0x0000000000000000
r12[0]          0x0000000000000000
r13[0]          0x0000000000000000
r14[0]          0x0000000000000000
r15[0]          0x0000000000000000
rflags[0]       0x0000000000000002
ds desc[0]      0x0000000000000000/0xffffffff/0x0000c093
es desc[0]      0x0000000000000000/0xffffffff/0x0000c093
fs desc[0]      0x0000000000000000/0xffffffff/0x0000c093
gs desc[0]      0x0000000000000000/0xffffffff/0x0000c093
ss desc[0]      0x0000000000000000/0xffffffff/0x0000c093
cs desc[0]      0x0000000000000000/0xffffffff/0x0000c09b
tr desc[0]      0x0000000000000000/0x00000000/0x00000000
ldtr desc[0]    0x0000000000000000/0x00000000/0x00000000
gdtr[0]         0x0000000000000000/0x00000000
idtr[0]         0x0000000000000000/0x00000000
cs[0]           0x0000
ds[0]           0x0000
es[0]           0x0000
fs[0]           0x0000
gs[0]           0x0000
ss[0]           0x0000
tr[0]           0x0000
ldtr[0]         0x0000
[... omitted some, not sure if relevant]
--->8 snap
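
Looking at that dump, the flat code/data descriptors (0x0000c09b / 0x0000c093) look sane to me, but TR and LDTR are left with all-zero access rights. My current suspicion is that this is what trips the VMX guest-state checks: as far as I understand the SDM, TR has to be a usable busy TSS and LDTR has to be either a valid LDT or explicitly marked unusable. Below is a rough sketch of how I could set that state through libvmmapi’s vm_set_desc()/vm_set_register(); the TR/LDTR access values are my reading of the guest-state checks, not something copied from userboot.so:

---8< snip
#include <sys/types.h>
#include <err.h>

#include <machine/specialreg.h>
#include <machine/vmm.h>
#include <vmmapi.h>

/*
 * Sketch: flat 32-bit protected-mode state for a multiboot entry point.
 * The descriptor access-rights values are what I believe the VMX
 * guest-state checks require; they are not copied from userboot.so.
 */
static void
setup_multiboot_regs(struct vmctx *ctx, int vcpu, uint32_t eip, uint32_t mbi)
{
	const uint64_t base = 0;
	const uint32_t limit = 0xffffffff;
	const uint32_t code = 0x0000c09b;	/* present, code, read/exec, 32-bit, 4K gran */
	const uint32_t data = 0x0000c093;	/* present, data, read/write, 32-bit, 4K gran */

	if (vm_set_desc(ctx, vcpu, VM_REG_GUEST_CS, base, limit, code) ||
	    vm_set_desc(ctx, vcpu, VM_REG_GUEST_DS, base, limit, data) ||
	    vm_set_desc(ctx, vcpu, VM_REG_GUEST_ES, base, limit, data) ||
	    vm_set_desc(ctx, vcpu, VM_REG_GUEST_FS, base, limit, data) ||
	    vm_set_desc(ctx, vcpu, VM_REG_GUEST_GS, base, limit, data) ||
	    vm_set_desc(ctx, vcpu, VM_REG_GUEST_SS, base, limit, data))
		err(1, "vm_set_desc(segments)");

	/*
	 * TR must be a usable (busy) 32-bit TSS and LDTR must either be a
	 * valid LDT or be marked unusable (bit 16), otherwise VM entry
	 * fails with "invalid guest state".
	 */
	if (vm_set_desc(ctx, vcpu, VM_REG_GUEST_TR, 0, 0, 0x0000008b) ||
	    vm_set_desc(ctx, vcpu, VM_REG_GUEST_LDTR, 0, 0, 0x00010000))
		err(1, "vm_set_desc(TR/LDTR)");

	/* Protected mode, no paging; EFLAGS bit 1 is always 1. */
	if (vm_set_register(ctx, vcpu, VM_REG_GUEST_CR0, CR0_PE | CR0_NE) ||
	    vm_set_register(ctx, vcpu, VM_REG_GUEST_RFLAGS, 0x2) ||
	    vm_set_register(ctx, vcpu, VM_REG_GUEST_RIP, eip) ||
	    vm_set_register(ctx, vcpu, VM_REG_GUEST_RAX, 0x2badb002) ||
	    vm_set_register(ctx, vcpu, VM_REG_GUEST_RBX, mbi))
		err(1, "vm_set_register");
}

/* e.g. setup_multiboot_regs(ctx, 0, 0x10000c, 0x100952); */
--->8 snap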

And here’s a diff of the register state before and after running bhyve:

---8< snip
--- before        2018-02-20 22:36:16.001919000 +0000
+++ after       2018-02-20 22:36:27.442941000 +0000
@@ -56 +56 @@
-procbased_ctls[0]      0x00000000b5186572
+procbased_ctls[0]      0x00000000f51865f2
@@ -67 +67 @@
-host_cr3[0]            0x0000000000000000
+host_cr3[0]            0x0000000389cac09a
@@ -101,4 +101,4 @@
-exit_reason[0] 0
-rtc nvram[000]: 0x05
-rtc time 0x5: Thu Jan 01 00:00:05 1970
-Capability "hlt_exit" is not set on vcpu 0
+exit_reason[0] 0x80000021
+rtc nvram[000]: 0x26
+rtc time 0x5a8ca2ea: Tue Feb 20 22:36:26 2018
+Capability "hlt_exit" is set on vcpu 0
@@ -106 +106 @@
-Capability "pause_exit" is not set on vcpu 0
+Capability "pause_exit" is set on vcpu 0
@@ -109 +109 @@
-active cpus:    (none)
+active cpus:    0
@@ -125 +125 @@
-number of vm exits for unknown reason          0
+number of vm exits for unknown reason          1
@@ -128 +128 @@
-number of vm exits handled in userspace        0
+number of vm exits handled in userspace        1
@@ -131 +131 @@
-vcpu total runtime                             0
+vcpu total runtime                             11904
@@ -165,3 +165,3 @@
-Number of vpid invalidations done              0
-vcpu migration across host cpus                0
-total number of vm exits                       0
+Number of vpid invalidations done              1
+vcpu migration across host cpus                1
+total number of vm exits                       1
--->8 snap
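
(Side note: the hlt_exit/pause_exit capability lines just reflect bhyve’s -H and -P flags; if I ever want the same behaviour from my own test harness, I believe it comes down to vm_set_capability(), roughly:)

---8< snip
#include <sys/types.h>

#include <machine/vmm.h>
#include <vmmapi.h>

/* Mirror bhyve's -H / -P: force an exit to userspace on HLT and PAUSE. */
static int
enable_hlt_pause_exits(struct vmctx *ctx, int vcpu)
{
	if (vm_set_capability(ctx, vcpu, VM_CAP_HALT_EXIT, 1) != 0)
		return (-1);
	return (vm_set_capability(ctx, vcpu, VM_CAP_PAUSE_EXIT, 1));
}
--->8 snap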

The code at that RIP also looks correct:

xxd -s 0x000000000010000c -l 5 /dev/vmm/testing
0010000c: bc00 2000 00

Which disassembles to:
0010000c: bc00200000 mov esp,0x2000
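
For completeness, the same check can be done from inside the loader by mapping that guest-physical range with vm_map_gpa() and comparing it against the bytes I loaded; a minimal sketch, assuming the loader still has its struct vmctx * around:

---8< snip
#include <sys/types.h>
#include <stdint.h>
#include <stdio.h>

#include <machine/vmm.h>
#include <vmmapi.h>

/*
 * Sketch: dump a few bytes of guest-physical memory via libvmmapi
 * instead of running xxd against /dev/vmm/<name>.
 */
static void
dump_guest_bytes(struct vmctx *ctx, vm_paddr_t gpa, size_t len)
{
	const uint8_t *p;
	size_t i;

	p = vm_map_gpa(ctx, gpa, len);
	if (p == NULL) {
		fprintf(stderr, "vm_map_gpa(0x%jx) failed\n", (uintmax_t)gpa);
		return;
	}
	printf("%08jx:", (uintmax_t)gpa);
	for (i = 0; i < len; i++)
		printf(" %02x", p[i]);
	printf("\n");
}

/* e.g. dump_guest_bytes(ctx, 0x10000c, 5); */
--->8 snap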

I’ve been looking at what userboot.so, grub2-bhyve, and vm_setup_freebsd_registers do. However, I have left all registers that don’t have a defined state in the Multiboot specification [2] as they are.

How would I best start debugging this?
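
In the meantime, one thing I can do on my side is read the state back with vm_get_desc()/vm_get_register() right before handing off to bhyve, to make sure the in-kernel copy matches what I think I set. Minimal sketch, assuming those libvmmapi calls:

---8< snip
#include <sys/types.h>
#include <stdint.h>
#include <stdio.h>

#include <machine/vmm.h>
#include <vmmapi.h>

/*
 * Sketch: read a segment descriptor back from vmm(4) so it can be
 * compared with the value the loader intended to set.
 */
static void
show_desc(struct vmctx *ctx, int vcpu, int reg, const char *name)
{
	uint64_t base;
	uint32_t limit, access;

	if (vm_get_desc(ctx, vcpu, reg, &base, &limit, &access) != 0) {
		fprintf(stderr, "vm_get_desc(%s) failed\n", name);
		return;
	}
	printf("%-5s base=0x%016jx limit=0x%08x access=0x%08x\n",
	    name, (uintmax_t)base, limit, access);
}

/* e.g. show_desc(ctx, 0, VM_REG_GUEST_TR, "tr"); */
--->8 snap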

Fabian

[1] https://github.com/fabianfreyer/bhyve-multiboot/tree/multiboot/info
[2] https://www.gnu.org/software/grub/manual/multiboot/multiboot.html