git: 43c1eb894a57 - main - vm_object: Fix handling of wired map entries in vm_object_split()
Date: Fri, 04 Apr 2025 23:25:03 UTC
The branch main has been updated by markj:

URL: https://cgit.FreeBSD.org/src/commit/?id=43c1eb894a57ef30562a02708445c512610d4f02

commit 43c1eb894a57ef30562a02708445c512610d4f02
Author:     Mark Johnston <markj@FreeBSD.org>
AuthorDate: 2025-04-04 20:29:25 +0000
Commit:     Mark Johnston <markj@FreeBSD.org>
CommitDate: 2025-04-04 23:24:49 +0000

    vm_object: Fix handling of wired map entries in vm_object_split()

    Suppose a vnode is mapped with PROT_READ and MAP_PRIVATE, mlock() is
    called on the mapping, and then the vnode is truncated such that the
    last page of the mapping becomes invalid.  The now-invalid page is
    unmapped but stays resident in the VM object, to preserve the
    invariant that a range of pages mapped by a wired map entry is always
    resident.  This invariant is checked by vm_object_unwire(), for
    example.

    Now suppose that the mapping is upgraded to PROT_READ|PROT_WRITE.  We
    will copy the invalid page into a new anonymous VM object.  If the
    process then forks, vm_object_split() may be called on that object.
    Upon encountering an invalid page, rather than moving it into the
    destination object, vm_object_split() removes it.  However, this is
    wrong when the entry is wired, since the invalid page's wiring belongs
    to the map entry; this behaviour also violates the invariant mentioned
    above.

    Fix this by moving invalid pages into the destination object if the
    map entry is wired.  In this case we must not dirty the page, so add a
    flag to vm_page_iter_rename() to control this.

    Reported by:	syzkaller
    Reviewed by:	dougm, kib
    MFC after:	2 weeks
    Differential Revision:	https://reviews.freebsd.org/D49443
---
 sys/vm/vm_object.c | 11 ++++++++---
 sys/vm/vm_page.c   | 16 ++++++----------
 2 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/sys/vm/vm_object.c b/sys/vm/vm_object.c
index 4ab20a86e155..c69fd0d1c161 100644
--- a/sys/vm/vm_object.c
+++ b/sys/vm/vm_object.c
@@ -1597,16 +1597,21 @@ retry:
 		}
 
 		/*
-		 * The page was left invalid.  Likely placed there by
+		 * If the page was left invalid, it was likely placed there by
 		 * an incomplete fault.  Just remove and ignore.
+		 *
+		 * One other possibility is that the map entry is wired, in
+		 * which case we must hang on to the page to avoid leaking it,
+		 * as the map entry owns the wiring.  This case can arise if the
+		 * backing pager is truncated.
 		 */
-		if (vm_page_none_valid(m)) {
+		if (vm_page_none_valid(m) && entry->wired_count == 0) {
 			if (vm_page_iter_remove(&pages, m))
 				vm_page_free(m);
 			continue;
 		}
 
-		/* vm_page_iter_rename() will dirty the page. */
+		/* vm_page_iter_rename() will dirty the page if it is valid. */
 		if (!vm_page_iter_rename(&pages, m, new_object,
 		    m->pindex - offidxstart)) {
 			vm_page_xunbusy(m);
diff --git a/sys/vm/vm_page.c b/sys/vm/vm_page.c
index f351f60f833c..f9653f1d1ec9 100644
--- a/sys/vm/vm_page.c
+++ b/sys/vm/vm_page.c
@@ -2038,15 +2038,10 @@ vm_page_replace(vm_page_t mnew, vm_object_t object, vm_pindex_t pindex,
  *
  *	Panics if a page already resides in the new object at the new pindex.
  *
- *	Note: swap associated with the page must be invalidated by the move.  We
- *	      have to do this for several reasons: (1) we aren't freeing the
- *	      page, (2) we are dirtying the page, (3) the VM system is probably
- *	      moving the page from object A to B, and will then later move
- *	      the backing store from A to B and we can't have a conflict.
- *
- *	Note: we *always* dirty the page.  It is necessary both for the
- *	      fact that we moved it, and because we may be invalidating
- *	      swap.
+ *	This routine dirties the page if it is valid, as callers are expected to
+ *	transfer backing storage only after moving the page.  Dirtying the page
+ *	ensures that the destination object retains the most recent copy of the
+ *	page.
  *
  *	The objects must be locked.
  */
@@ -2087,7 +2082,8 @@ vm_page_iter_rename(struct pctrie_iter *old_pages, vm_page_t m,
 	m->object = new_object;
 	vm_page_insert_radixdone(m, new_object, mpred);
 
-	vm_page_dirty(m);
+	if (vm_page_any_valid(m))
+		vm_page_dirty(m);
 	vm_pager_page_inserted(new_object, m);
 	return (true);
 }