From nobody Wed Sep 21 09:46:39 2022
Date: Wed, 21 Sep 2022 09:46:39 GMT
Message-Id: <202209210946.28L9kdYL075983@gitrepo.freebsd.org>
To: src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org,
    dev-commits-src-branches@FreeBSD.org
From: Andrew Turner
Subject: git: 6810f4f4fee9 - stable/13 - arm64: Implement final level only TLB invalidations
List-Id: Commit messages for all branches of the src repository
List-Archive: https://lists.freebsd.org/archives/dev-commits-src-all
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-Git-Committer: andrew
X-Git-Repository: src
X-Git-Refname: refs/heads/stable/13
X-Git-Reftype: branch
X-Git-Commit: 6810f4f4fee93b53e22f839f6e6dfe687da8c73f

The branch stable/13 has been updated by andrew:

URL: https://cgit.FreeBSD.org/src/commit/?id=6810f4f4fee93b53e22f839f6e6dfe687da8c73f

commit 6810f4f4fee93b53e22f839f6e6dfe687da8c73f
Author:     Alan Cox
AuthorDate: 2021-12-29 07:50:05 +0000
Commit:     Andrew Turner
CommitDate: 2022-09-21 09:45:52 +0000

    arm64: Implement final level only TLB invalidations

    A feature of arm64's instruction for TLB invalidation is the ability to
    determine whether cached intermediate entries, i.e., L{0,1,2}_TABLE
    entries, are invalidated in addition to the final entry, e.g., an
    L3_PAGE entry.

    Update pmap_invalidate_{page,range}() to support both types of
    invalidation, allowing the caller to determine which type of
    invalidation is performed.  Update the callers to request the
    appropriate type of invalidation.

    Eliminate redundant TLB invalidations in pmap_abort_ptp() and
    pmap_remove_l3_range().

    Add a comment to pmap_invalidate_all() making clear that it always
    invalidates entries at all levels.

    As expected, these changes result in a tiny yet measurable performance
    improvement.

    Reviewed by:	kib, markj
    MFC after:	3 weeks
    Differential Revision:	https://reviews.freebsd.org/D33705

    (cherry picked from commit 4ccd6c137f5b53361efe54b78b815c7902258572)
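Before the diff, a brief editorial illustration of the mechanism described
above: the arm64 "tlbi" operation names select last-level-only invalidation
with an extra "l" ("vale1is"/"vaale1is" versus "vae1is"/"vaae1is"), in
per-ASID (user) and all-ASID (kernel) forms.  The sketch below is not part
of the commit; the helper name tlbi_sketch is invented, and the operand
encoding is only summarized in the comment:

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Editorial sketch of the four TLBI variants the patch wires up.
     * "r" is the TLBI operand: the VA bits shifted into place and, for
     * the per-ASID (user) case, the ASID already folded into the upper
     * bits of "r".
     */
    static inline void
    tlbi_sketch(uint64_t r, bool kernel, bool final_only)
    {
            if (kernel) {
                    /* All-ASID forms, for global kernel mappings. */
                    if (final_only)
                            __asm __volatile("tlbi vaale1is, %0" : : "r" (r));
                    else
                            __asm __volatile("tlbi vaae1is, %0" : : "r" (r));
            } else {
                    /* Per-ASID forms; the ASID comes from "r". */
                    if (final_only)
                            __asm __volatile("tlbi vale1is, %0" : : "r" (r));
                    else
                            __asm __volatile("tlbi vae1is, %0" : : "r" (r));
            }
    }

In the patch itself these four encodings appear in the new
pmap_invalidate_kernel() and pmap_invalidate_user() helpers.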
---
 sys/arm64/arm64/pmap.c | 144 +++++++++++++++++++++++++++++++------------------
 1 file changed, 92 insertions(+), 52 deletions(-)

diff --git a/sys/arm64/arm64/pmap.c b/sys/arm64/arm64/pmap.c
index a8615b0a3902..b8fe716df737 100644
--- a/sys/arm64/arm64/pmap.c
+++ b/sys/arm64/arm64/pmap.c
@@ -1314,10 +1314,35 @@ SYSCTL_ULONG(_vm_pmap_l2, OID_AUTO, promotions, CTLFLAG_RD,
     &pmap_l2_promotions, 0, "2MB page promotions");
 
 /*
- * Invalidate a single TLB entry.
+ * If the given value for "final_only" is false, then any cached intermediate-
+ * level entries, i.e., L{0,1,2}_TABLE entries, are invalidated in addition to
+ * any cached final-level entry, i.e., either an L{1,2}_BLOCK or L3_PAGE entry.
+ * Otherwise, just the cached final-level entry is invalidated.
  */
 static __inline void
-pmap_invalidate_page(pmap_t pmap, vm_offset_t va)
+pmap_invalidate_kernel(uint64_t r, bool final_only)
+{
+	if (final_only)
+		__asm __volatile("tlbi vaale1is, %0" : : "r" (r));
+	else
+		__asm __volatile("tlbi vaae1is, %0" : : "r" (r));
+}
+
+static __inline void
+pmap_invalidate_user(uint64_t r, bool final_only)
+{
+	if (final_only)
+		__asm __volatile("tlbi vale1is, %0" : : "r" (r));
+	else
+		__asm __volatile("tlbi vae1is, %0" : : "r" (r));
+}
+
+/*
+ * Invalidates any cached final- and optionally intermediate-level TLB entries
+ * for the specified virtual address in the given virtual address space.
+ */
+static __inline void
+pmap_invalidate_page(pmap_t pmap, vm_offset_t va, bool final_only)
 {
 	uint64_t r;
 
@@ -1326,17 +1351,22 @@ pmap_invalidate_page(pmap_t pmap, vm_offset_t va)
 	dsb(ishst);
 	r = TLBI_VA(va);
 	if (pmap == kernel_pmap) {
-		__asm __volatile("tlbi vaae1is, %0" : : "r" (r));
+		pmap_invalidate_kernel(r, final_only);
 	} else {
 		r |= ASID_TO_OPERAND(COOKIE_TO_ASID(pmap->pm_cookie));
-		__asm __volatile("tlbi vae1is, %0" : : "r" (r));
+		pmap_invalidate_user(r, final_only);
 	}
 	dsb(ish);
 	isb();
 }
 
+/*
+ * Invalidates any cached final- and optionally intermediate-level TLB entries
+ * for the specified virtual address range in the given virtual address space.
+ */
 static __inline void
-pmap_invalidate_range(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
+pmap_invalidate_range(pmap_t pmap, vm_offset_t sva, vm_offset_t eva,
+    bool final_only)
 {
 	uint64_t end, r, start;
 
@@ -1347,18 +1377,22 @@ pmap_invalidate_range(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
 		start = TLBI_VA(sva);
 		end = TLBI_VA(eva);
 		for (r = start; r < end; r += TLBI_VA_L3_INCR)
-			__asm __volatile("tlbi vaae1is, %0" : : "r" (r));
+			pmap_invalidate_kernel(r, final_only);
 	} else {
 		start = end = ASID_TO_OPERAND(COOKIE_TO_ASID(pmap->pm_cookie));
 		start |= TLBI_VA(sva);
 		end |= TLBI_VA(eva);
 		for (r = start; r < end; r += TLBI_VA_L3_INCR)
-			__asm __volatile("tlbi vae1is, %0" : : "r" (r));
+			pmap_invalidate_user(r, final_only);
 	}
 	dsb(ish);
 	isb();
 }
 
+/*
+ * Invalidates all cached intermediate- and final-level TLB entries for the
+ * given virtual address space.
+ */
 static __inline void
 pmap_invalidate_all(pmap_t pmap)
 {
@@ -1603,7 +1637,7 @@ pmap_kenter(vm_offset_t sva, vm_size_t size, vm_paddr_t pa, int mode)
 		pa += PAGE_SIZE;
 		size -= PAGE_SIZE;
 	}
-	pmap_invalidate_range(kernel_pmap, sva, va);
+	pmap_invalidate_range(kernel_pmap, sva, va, true);
 }
 
 void
@@ -1627,7 +1661,7 @@ pmap_kremove(vm_offset_t va)
 	KASSERT(lvl == 3, ("pmap_kremove: Invalid pte level %d", lvl));
 
 	pmap_clear(pte);
-	pmap_invalidate_page(kernel_pmap, va);
+	pmap_invalidate_page(kernel_pmap, va, true);
 }
 
 void
@@ -1653,7 +1687,7 @@ pmap_kremove_device(vm_offset_t sva, vm_size_t size)
 		va += PAGE_SIZE;
 		size -= PAGE_SIZE;
 	}
-	pmap_invalidate_range(kernel_pmap, sva, va);
+	pmap_invalidate_range(kernel_pmap, sva, va, true);
 }
 
 /*
@@ -1709,7 +1743,7 @@ pmap_qenter(vm_offset_t sva, vm_page_t *ma, int count)
 
 		va += L3_SIZE;
 	}
-	pmap_invalidate_range(kernel_pmap, sva, va);
+	pmap_invalidate_range(kernel_pmap, sva, va, true);
 }
 
 /*
@@ -1738,7 +1772,7 @@ pmap_qremove(vm_offset_t sva, int count)
 
 		va += PAGE_SIZE;
 	}
-	pmap_invalidate_range(kernel_pmap, sva, va);
+	pmap_invalidate_range(kernel_pmap, sva, va, true);
 }
 
 /***************************************************
@@ -1826,7 +1860,7 @@ _pmap_unwire_l3(pmap_t pmap, vm_offset_t va, vm_page_t m, struct spglist *free)
 		l1pg = PHYS_TO_VM_PAGE(tl0 & ~ATTR_MASK);
 		pmap_unwire_l3(pmap, va, l1pg, free);
 	}
-	pmap_invalidate_page(pmap, va);
+	pmap_invalidate_page(pmap, va, false);
 
 	/*
 	 * Put page on a list so that it is released after
@@ -1864,17 +1898,8 @@ pmap_abort_ptp(pmap_t pmap, vm_offset_t va, vm_page_t mpte)
 	struct spglist free;
 
 	SLIST_INIT(&free);
-	if (pmap_unwire_l3(pmap, va, mpte, &free)) {
-		/*
-		 * Although "va" was never mapped, the TLB could nonetheless
-		 * have intermediate entries that refer to the freed page
-		 * table pages.  Invalidate those entries.
-		 *
-		 * XXX redundant invalidation (See _pmap_unwire_l3().)
-		 */
-		pmap_invalidate_page(pmap, va);
+	if (pmap_unwire_l3(pmap, va, mpte, &free))
 		vm_page_free_pages_toq(&free, true);
-	}
 }
 
 void
@@ -2518,7 +2543,7 @@ reclaim_pv_chunk(pmap_t locked_pmap, struct rwlock **lockp)
 			if (pmap_pte_dirty(pmap, tpte))
 				vm_page_dirty(m);
 			if ((tpte & ATTR_AF) != 0) {
-				pmap_invalidate_page(pmap, va);
+				pmap_invalidate_page(pmap, va, true);
 				vm_page_aflag_set(m, PGA_REFERENCED);
 			}
 			CHANGE_PV_LIST_LOCK_TO_VM_PAGE(lockp, m);
@@ -2999,7 +3024,7 @@ pmap_remove_l2(pmap_t pmap, pt_entry_t *l2, vm_offset_t sva,
 	 * Since a promotion must break the 4KB page mappings before making
 	 * the 2MB page mapping, a pmap_invalidate_page() suffices.
 	 */
-	pmap_invalidate_page(pmap, sva);
+	pmap_invalidate_page(pmap, sva, true);
 
 	if (old_l2 & ATTR_SW_WIRED)
 		pmap->pm_stats.wired_count -= L2_SIZE / PAGE_SIZE;
@@ -3049,7 +3074,7 @@ pmap_remove_l3(pmap_t pmap, pt_entry_t *l3, vm_offset_t va,
 
 	PMAP_LOCK_ASSERT(pmap, MA_OWNED);
 	old_l3 = pmap_load_clear(l3);
-	pmap_invalidate_page(pmap, va);
+	pmap_invalidate_page(pmap, va, true);
 	if (old_l3 & ATTR_SW_WIRED)
 		pmap->pm_stats.wired_count -= 1;
 	pmap_resident_count_dec(pmap, 1);
@@ -3098,7 +3123,7 @@ pmap_remove_l3_range(pmap_t pmap, pd_entry_t l2e, vm_offset_t sva,
 	for (l3 = pmap_l2_to_l3(&l2e, sva); sva != eva; l3++, sva += L3_SIZE) {
 		if (!pmap_l3_valid(pmap_load(l3))) {
 			if (va != eva) {
-				pmap_invalidate_range(pmap, va, sva);
+				pmap_invalidate_range(pmap, va, sva, true);
 				va = eva;
 			}
 			continue;
 		}
@@ -3126,7 +3151,7 @@
 				 */
 				if (va != eva) {
 					pmap_invalidate_range(pmap, va,
-					    sva);
+					    sva, true);
 					va = eva;
 				}
 				rw_wunlock(*lockp);
@@ -3142,15 +3167,21 @@
 					vm_page_aflag_clear(m, PGA_WRITEABLE);
 			}
 		}
-		if (va == eva)
-			va = sva;
 		if (l3pg != NULL && pmap_unwire_l3(pmap, sva, l3pg, free)) {
-			sva += L3_SIZE;
+			/*
+			 * _pmap_unwire_l3() has already invalidated the TLB
+			 * entries at all levels for "sva".  So, we need not
+			 * perform "sva += L3_SIZE;" here.  Moreover, we need
+			 * not perform "va = sva;" if "sva" is at the start
+			 * of a new valid range consisting of a single page.
+			 */
 			break;
 		}
+		if (va == eva)
+			va = sva;
 	}
 	if (va != eva)
-		pmap_invalidate_range(pmap, va, sva);
+		pmap_invalidate_range(pmap, va, sva, true);
 }
 
 /*
@@ -3205,7 +3236,7 @@ pmap_remove(pmap_t pmap, vm_offset_t sva, vm_offset_t eva)
 			MPASS(pmap != kernel_pmap);
 			MPASS((pmap_load(l1) & ATTR_SW_MANAGED) == 0);
 			pmap_clear(l1);
-			pmap_invalidate_page(pmap, sva);
+			pmap_invalidate_page(pmap, sva, true);
 			pmap_resident_count_dec(pmap, L1_SIZE / PAGE_SIZE);
 			pmap_unuse_pt(pmap, sva, pmap_load(l0), &free);
 			continue;
@@ -3340,7 +3371,7 @@ retry:
 		if (tpte & ATTR_SW_WIRED)
 			pmap->pm_stats.wired_count--;
 		if ((tpte & ATTR_AF) != 0) {
-			pmap_invalidate_page(pmap, pv->pv_va);
+			pmap_invalidate_page(pmap, pv->pv_va, true);
 			vm_page_aflag_set(m, PGA_REFERENCED);
 		}
 
@@ -3405,7 +3436,7 @@ pmap_protect_l2(pmap_t pmap, pt_entry_t *l2, vm_offset_t sva, pt_entry_t mask,
 	 * Since a promotion must break the 4KB page mappings before making
 	 * the 2MB page mapping, a pmap_invalidate_page() suffices.
 	 */
-	pmap_invalidate_page(pmap, sva);
+	pmap_invalidate_page(pmap, sva, true);
 }
 
 /*
@@ -3462,7 +3493,7 @@ pmap_protect(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, vm_prot_t prot)
 			MPASS((pmap_load(l1) & ATTR_SW_MANAGED) == 0);
 			if ((pmap_load(l1) & mask) != nbits) {
 				pmap_store(l1, (pmap_load(l1) & ~mask) | nbits);
-				pmap_invalidate_page(pmap, sva);
+				pmap_invalidate_page(pmap, sva, true);
 			}
 			continue;
 		}
@@ -3503,7 +3534,8 @@ pmap_protect(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, vm_prot_t prot)
 		 */
 		if (!pmap_l3_valid(l3) || (l3 & mask) == nbits) {
 			if (va != va_next) {
-				pmap_invalidate_range(pmap, va, sva);
+				pmap_invalidate_range(pmap, va, sva,
+				    true);
 				va = va_next;
 			}
 			continue;
 		}
@@ -3526,7 +3558,7 @@ pmap_protect(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, vm_prot_t prot)
 				va = sva;
 		}
 		if (va != va_next)
-			pmap_invalidate_range(pmap, va, sva);
+			pmap_invalidate_range(pmap, va, sva, true);
 	}
 	PMAP_UNLOCK(pmap);
 }
@@ -3588,7 +3620,13 @@ pmap_update_entry(pmap_t pmap, pd_entry_t *pte, pd_entry_t newpte,
 	 * lookup the physical address.
 	 */
 	pmap_clear_bits(pte, ATTR_DESCR_VALID);
-	pmap_invalidate_range(pmap, va, va + size);
+
+	/*
+	 * When promoting, the L{1,2}_TABLE entry that is being replaced might
+	 * be cached, so we invalidate intermediate entries as well as final
+	 * entries.
+	 */
+	pmap_invalidate_range(pmap, va, va + size, false);
 
 	/* Create the new mapping */
 	pmap_store(pte, newpte);
@@ -4220,7 +4258,7 @@ havel3:
 			if (pmap_pte_dirty(pmap, orig_l3))
 				vm_page_dirty(om);
 			if ((orig_l3 & ATTR_AF) != 0) {
-				pmap_invalidate_page(pmap, va);
+				pmap_invalidate_page(pmap, va, true);
 				vm_page_aflag_set(om, PGA_REFERENCED);
 			}
 			CHANGE_PV_LIST_LOCK_TO_PHYS(&lock, opa);
@@ -4235,7 +4273,7 @@
 		} else {
 			KASSERT((orig_l3 & ATTR_AF) != 0,
 			    ("pmap_enter: unmanaged mapping lacks ATTR_AF"));
-			pmap_invalidate_page(pmap, va);
+			pmap_invalidate_page(pmap, va, true);
 		}
 		orig_l3 = 0;
 	} else {
@@ -4293,7 +4331,7 @@ validate:
 		if ((orig_l3 & ~ATTR_AF) != (new_l3 & ~ATTR_AF)) {
 			/* same PA, different attributes */
 			orig_l3 = pmap_load_store(l3, new_l3);
-			pmap_invalidate_page(pmap, va);
+			pmap_invalidate_page(pmap, va, true);
 			if ((orig_l3 & ATTR_SW_MANAGED) != 0 &&
 			    pmap_pte_dirty(pmap, orig_l3))
 				vm_page_dirty(m);
@@ -4462,13 +4500,15 @@ pmap_enter_l2(pmap_t pmap, vm_offset_t va, pd_entry_t new_l2, u_int flags,
 			 * Both pmap_remove_l2() and pmap_remove_l3_range()
 			 * will leave the kernel page table page zero filled.
 			 * Nonetheless, the TLB could have an intermediate
-			 * entry for the kernel page table page.
+			 * entry for the kernel page table page, so request
+			 * an invalidation at all levels after clearing
+			 * the L2_TABLE entry.
 			 */
 			mt = PHYS_TO_VM_PAGE(pmap_load(l2) & ~ATTR_MASK);
 			if (pmap_insert_pt_page(pmap, mt, false))
 				panic("pmap_enter_l2: trie insert failed");
 			pmap_clear(l2);
-			pmap_invalidate_page(pmap, va);
+			pmap_invalidate_page(pmap, va, false);
 		}
 	}
 
@@ -5640,7 +5680,7 @@ retry:
 		if ((oldpte & ATTR_S1_AP_RW_BIT) ==
 		    ATTR_S1_AP(ATTR_S1_AP_RW))
 			vm_page_dirty(m);
-		pmap_invalidate_page(pmap, pv->pv_va);
+		pmap_invalidate_page(pmap, pv->pv_va, true);
 	}
 	PMAP_UNLOCK(pmap);
 }
@@ -5747,7 +5787,7 @@
 			    (uintptr_t)pmap) & (Ln_ENTRIES - 1)) == 0 &&
 			    (tpte & ATTR_SW_WIRED) == 0) {
 				pmap_clear_bits(pte, ATTR_AF);
-				pmap_invalidate_page(pmap, va);
+				pmap_invalidate_page(pmap, va, true);
 				cleared++;
 			} else
 				not_cleared++;
@@ -5795,7 +5835,7 @@ small_mappings:
 		if ((tpte & ATTR_AF) != 0) {
 			if ((tpte & ATTR_SW_WIRED) == 0) {
 				pmap_clear_bits(pte, ATTR_AF);
-				pmap_invalidate_page(pmap, pv->pv_va);
+				pmap_invalidate_page(pmap, pv->pv_va, true);
 				cleared++;
 			} else
 				not_cleared++;
@@ -5938,12 +5978,12 @@ pmap_advise(pmap_t pmap, vm_offset_t sva, vm_offset_t eva, int advice)
 			continue;
maybe_invlrng:
 			if (va != va_next) {
-				pmap_invalidate_range(pmap, va, sva);
+				pmap_invalidate_range(pmap, va, sva, true);
 				va = va_next;
 			}
 		}
 		if (va != va_next)
-			pmap_invalidate_range(pmap, va, sva);
+			pmap_invalidate_range(pmap, va, sva, true);
 	}
 	PMAP_UNLOCK(pmap);
 }
@@ -6004,7 +6044,7 @@ restart:
 		    (oldl3 & ~ATTR_SW_DBM) | ATTR_S1_AP(ATTR_S1_AP_RO)))
 			cpu_spinwait();
 		vm_page_dirty(m);
-		pmap_invalidate_page(pmap, va);
+		pmap_invalidate_page(pmap, va, true);
 	}
 	PMAP_UNLOCK(pmap);
 }
@@ -6027,7 +6067,7 @@ restart:
 		oldl3 = pmap_load(l3);
 		if ((oldl3 & (ATTR_S1_AP_RW_BIT | ATTR_SW_DBM)) == ATTR_SW_DBM){
 			pmap_set_bits(l3, ATTR_S1_AP(ATTR_S1_AP_RO));
-			pmap_invalidate_page(pmap, pv->pv_va);
+			pmap_invalidate_page(pmap, pv->pv_va, true);
 		}
 		PMAP_UNLOCK(pmap);
 	}
@@ -7185,7 +7225,7 @@ pmap_fault(pmap_t pmap, uint64_t esr, uint64_t far)
 		if ((pte & ATTR_S1_AP_RW_BIT) ==
 		    ATTR_S1_AP(ATTR_S1_AP_RO)) {
 			pmap_clear_bits(ptep, ATTR_S1_AP_RW_BIT);
-			pmap_invalidate_page(pmap, far);
+			pmap_invalidate_page(pmap, far, true);
 		}
 		rv = KERN_SUCCESS;
 	}
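
As a closing editorial note on usage, the caller-side pattern after this
change condenses to the sketch below (illustrative, not verbatim pmap.c;
"pmap", "l2", "l3", "old_l3", and "va" stand for values taken from the
surrounding code, as in the hunks above):

    /*
     * Removing one 4KB mapping: only the final-level (L3_PAGE) entry
     * can be cached, so a last-level invalidation suffices.
     */
    old_l3 = pmap_load_clear(l3);
    pmap_invalidate_page(pmap, va, true);

    /*
     * Clearing an L2_TABLE entry that pointed to a page table page:
     * a cached intermediate entry may remain, so invalidate at all
     * levels.
     */
    pmap_clear(l2);
    pmap_invalidate_page(pmap, va, false);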