git: 98138bbde032 - main - riscv: Fix pmap_alloc_l2 when it should allocate a new L1 entry
Jessica Clarke
jrtc27 at FreeBSD.org
Mon Aug 9 19:29:23 UTC 2021
The branch main has been updated by jrtc27:
URL: https://cgit.FreeBSD.org/src/commit/?id=98138bbde032e2040af3d158658c497fd3f63f2a
commit 98138bbde032e2040af3d158658c497fd3f63f2a
Author: Jessica Clarke <jrtc27 at FreeBSD.org>
AuthorDate: 2021-08-09 19:28:37 +0000
Commit: Jessica Clarke <jrtc27 at FreeBSD.org>
CommitDate: 2021-08-09 19:28:37 +0000
riscv: Fix pmap_alloc_l2 when it should allocate a new L1 entry
The current code checks that the RWX bits are 0 but does not check that the V bit
is non-zero, so not-yet-allocated L1 entries that are still zero are regarded as
already allocated. This is likely due to copying the arm64 code, which checks that
ATTR_DESC_MASK is L1_TABLE, a field that encompasses both the type and the
validity, and erroneously translating that to a check of just PTE_RWX being 0 to
indicate non-leaf, forgetting about the V bit (a short sketch after the backtrace
illustrates the misclassification). This then results in the following panic:
panic: Fatal page fault at 0xffffffc0005cf292: 0x00000000000050
cpuid = 1
time = 1628379581
KDB: stack backtrace:
db_trace_self() at db_trace_self
db_trace_self_wrapper() at db_trace_self_wrapper+0x38
kdb_backtrace() at kdb_backtrace+0x2c
vpanic() at vpanic+0x148
panic() at panic+0x2a
page_fault_handler() at page_fault_handler+0x1ba
do_trap_supervisor() at do_trap_supervisor+0x7a
cpu_exception_handler_supervisor() at cpu_exception_handler_supervisor+0x70
--- exception 13, tval = 0x50
pmap_enter_l2() at pmap_enter_l2+0xb2
pmap_enter_object() at pmap_enter_object+0x15e
vm_map_pmap_enter() at vm_map_pmap_enter+0x228
vm_map_insert() at vm_map_insert+0x4ec
vm_map_find() at vm_map_find+0x474
vm_map_find_min() at vm_map_find_min+0x52
vm_mmap_object() at vm_mmap_object+0x1ba
vn_mmap() at vn_mmap+0xf8
kern_mmap() at kern_mmap+0x4c4
sys_mmap() at sys_mmap+0x38
do_trap_user() at do_trap_user+0x208
cpu_exception_handler_user() at cpu_exception_handler_user+0x72
--- exception 8, tval = 0x1dd
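To make the misclassification concrete, here is a minimal, self-contained
sketch (illustrative user-space code, not taken from the kernel); the PTE_*
definitions below follow the standard Sv39 bit positions, matching what
sys/riscv/include/pte.h defines, and the sample entries are made-up values:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t pt_entry_t;

/* Standard Sv39 PTE bits (same positions as sys/riscv/include/pte.h). */
#define	PTE_V	(1UL << 0)	/* valid */
#define	PTE_R	(1UL << 1)	/* readable */
#define	PTE_W	(1UL << 2)	/* writable */
#define	PTE_X	(1UL << 3)	/* executable */
#define	PTE_RWX	(PTE_R | PTE_W | PTE_X)

int
main(void)
{
	pt_entry_t empty = 0;			/* not-yet-allocated L1 slot */
	pt_entry_t table = PTE_V;		/* valid non-leaf (PPN bits omitted) */
	pt_entry_t leaf = PTE_V | PTE_RWX;	/* valid 1 GiB leaf mapping */

	/*
	 * Old test: "already points to an L2 table" was taken to mean
	 * RWX == 0.  A zeroed, never-allocated entry also has RWX == 0,
	 * so it passes and the nonexistent L2 page gets dereferenced.
	 */
	printf("old test: empty=%d table=%d leaf=%d\n",
	    (empty & PTE_RWX) == 0, (table & PTE_RWX) == 0,
	    (leaf & PTE_RWX) == 0);		/* 1 1 0: empty misclassified */

	/*
	 * New test: require the entry to be valid first; a leaf is then
	 * ruled out separately (the kernel uses a KASSERT for that).
	 */
	printf("new test: empty=%d table=%d leaf=%d\n",
	    (empty & PTE_V) != 0, (table & PTE_V) != 0,
	    (leaf & PTE_V) != 0);		/* 0 1 1 */
	return (0);
}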
Instead, we should just check the V bit, as is done on amd64, and assert that
any valid L1 entry is not a leaf, since an L1 leaf would mean the entire range
is already mapped and thus we should never have attempted to map that VA in
the first place.
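The corrected logic can be viewed as a three-way classification of an L1
entry. The helper below is a sketch reusing the illustrative definitions from
the snippet above (it is not the kernel's API): V clear means the slot is
unused, V set with R/W/X clear means the entry points to an L2 table, and V
set with any of R/W/X set means a leaf, which pmap_alloc_l2() should never
see for a VA it has been asked to map.

enum l1_kind { L1_INVALID, L1_TABLE, L1_LEAF };

/*
 * Classify an Sv39 L1 entry (illustrative helper, not kernel code).
 * An existing L2 page may be reused only in the L1_TABLE case;
 * L1_INVALID means a fresh L2 page must be allocated; L1_LEAF is what
 * the new KASSERT rules out, since a 1 GiB leaf means the range is
 * already mapped.
 */
static enum l1_kind
l1_classify(pt_entry_t l1e)
{
	if ((l1e & PTE_V) == 0)
		return (L1_INVALID);
	if ((l1e & PTE_RWX) == 0)
		return (L1_TABLE);
	return (L1_LEAF);
}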
Reported by: David Gilbert <dgilbert at daveg.ca>
MFC after: 1 week
Reviewed by: markj, mhorne
Differential Revision: https://reviews.freebsd.org/D31460
---
sys/riscv/riscv/pmap.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/sys/riscv/riscv/pmap.c b/sys/riscv/riscv/pmap.c
index 0bb22bd13bbe..8f76881478b9 100644
--- a/sys/riscv/riscv/pmap.c
+++ b/sys/riscv/riscv/pmap.c
@@ -1348,7 +1348,10 @@ pmap_alloc_l2(pmap_t pmap, vm_offset_t va, struct rwlock **lockp)
 retry:
 	l1 = pmap_l1(pmap, va);
-	if (l1 != NULL && (pmap_load(l1) & PTE_RWX) == 0) {
+	if (l1 != NULL && (pmap_load(l1) & PTE_V) != 0) {
+		KASSERT((pmap_load(l1) & PTE_RWX) == 0,
+		    ("%s: L1 entry %#lx for VA %#lx is a leaf", __func__,
+		    pmap_load(l1), va));
 		/* Add a reference to the L2 page. */
 		l2pg = PHYS_TO_VM_PAGE(PTE_TO_PHYS(pmap_load(l1)));
 		l2pg->ref_count++;