From nobody Sat Jun 01 09:24:46 2024
Date: Sat, 1 Jun 2024 09:24:46 GMT
Message-Id: <202406010924.4519OkDS070914@gitrepo.freebsd.org>
To: src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org,
    dev-commits-src-branches@FreeBSD.org
From: Konstantin Belousov
Subject: git: 339b47f01985 - stable/14 - x86/iommu: extract useful utilities into x86_iommu.c
List-Id: Commit messages for all branches of the src repository
List-Archive: https://lists.freebsd.org/archives/dev-commits-src-all
X-BeenThere: dev-commits-src-all@freebsd.org
Sender: owner-dev-commits-src-all@FreeBSD.org
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-Git-Committer: kib
X-Git-Repository: src
X-Git-Refname: refs/heads/stable/14
X-Git-Reftype: branch
X-Git-Commit: 339b47f01985628ccc4c3e9a29c36f26edb034c3
Auto-Submitted: auto-generated

The branch stable/14 has been updated by kib:

URL: https://cgit.FreeBSD.org/src/commit/?id=339b47f01985628ccc4c3e9a29c36f26edb034c3

commit 339b47f01985628ccc4c3e9a29c36f26edb034c3
Author:     Konstantin Belousov
AuthorDate: 2024-05-25 00:47:26 +0000
Commit:     Konstantin Belousov
CommitDate: 2024-06-01 09:24:24 +0000

    x86/iommu: extract useful utilities into x86_iommu.c

    (cherry picked from commit 40d951bc5932deb87635f5c1780a6706d0c7c012)
---
 sys/conf/files.x86            |   1 +
 sys/x86/include/iommu.h       |   1 +
 sys/x86/iommu/intel_ctx.c     |  25 +++---
 sys/x86/iommu/intel_dmar.h    |   9 +--
 sys/x86/iommu/intel_drv.c     |   7 +-
 sys/x86/iommu/intel_fault.c   |   1 +
 sys/x86/iommu/intel_idpgtbl.c |  77 ++++++++++----------
 sys/x86/iommu/intel_intrmap.c |   3 +-
 sys/x86/iommu/intel_qi.c      |   3 +-
 sys/x86/iommu/intel_quirks.c  |   1 +
 sys/x86/iommu/intel_reg.h     |  15 +---
 sys/x86/iommu/intel_utils.c   | 128 +++------------------------
 sys/x86/iommu/iommu_utils.c   | 164 ++++++++++++++++++++++++++++++++++++++++++
 sys/x86/iommu/x86_iommu.h     |  62 ++++++++++++++++
 14 files changed, 302 insertions(+), 195 deletions(-)

diff --git a/sys/conf/files.x86 b/sys/conf/files.x86
index 15781eea8fee..445bbf9091ba 100644
--- a/sys/conf/files.x86
+++ b/sys/conf/files.x86
@@ -320,6 +320,7 @@ x86/iommu/intel_intrmap.c	optional	acpi iommu pci
 x86/iommu/intel_qi.c		optional	acpi iommu pci
 x86/iommu/intel_quirks.c	optional	acpi iommu pci
 x86/iommu/intel_utils.c		optional	acpi iommu pci
+x86/iommu/iommu_utils.c		optional	acpi iommu pci
 x86/isa/atrtc.c			standard
 x86/isa/clock.c			standard
 x86/isa/isa.c			optional	isa
diff --git a/sys/x86/include/iommu.h b/sys/x86/include/iommu.h
index a95480b53acc..98c6661aa8e3 100644
--- a/sys/x86/include/iommu.h
+++ b/sys/x86/include/iommu.h
@@ -7,6 +7,7 @@
 
 #include
 #include
+#include
 #include
 
 #endif /* !_MACHINE_IOMMU_H_ */
diff --git a/sys/x86/iommu/intel_ctx.c b/sys/x86/iommu/intel_ctx.c
index 65ca88b052ed..444640570df7 100644
--- a/sys/x86/iommu/intel_ctx.c
+++ b/sys/x86/iommu/intel_ctx.c
@@ -66,6 +66,7 @@
 #include
 #include
 #include
+#include
 #include
 
 static MALLOC_DEFINE(M_DMAR_CTX, "dmar_ctx", "Intel DMAR Context");
@@ -85,7 +86,7 @@ dmar_ensure_ctx_page(struct dmar_unit *dmar, int bus)
 	/*
 	 * Allocated context page must be linked.
 	 */
-	ctxm = dmar_pgalloc(dmar->ctx_obj, 1 + bus, IOMMU_PGF_NOALLOC);
+	ctxm = iommu_pgalloc(dmar->ctx_obj, 1 + bus, IOMMU_PGF_NOALLOC);
 	if (ctxm != NULL)
 		return;
 
@@ -96,14 +97,14 @@ dmar_ensure_ctx_page(struct dmar_unit *dmar, int bus)
 	 * threads are equal.
	 */
 	TD_PREP_PINNED_ASSERT;
-	ctxm = dmar_pgalloc(dmar->ctx_obj, 1 + bus, IOMMU_PGF_ZERO |
+	ctxm = iommu_pgalloc(dmar->ctx_obj, 1 + bus, IOMMU_PGF_ZERO |
 	    IOMMU_PGF_WAITOK);
-	re = dmar_map_pgtbl(dmar->ctx_obj, 0, IOMMU_PGF_NOALLOC, &sf);
+	re = iommu_map_pgtbl(dmar->ctx_obj, 0, IOMMU_PGF_NOALLOC, &sf);
 	re += bus;
 	dmar_pte_store(&re->r1, DMAR_ROOT_R1_P | (DMAR_ROOT_R1_CTP_MASK &
 	    VM_PAGE_TO_PHYS(ctxm)));
 	dmar_flush_root_to_ram(dmar, re);
-	dmar_unmap_pgtbl(sf);
+	iommu_unmap_pgtbl(sf);
 	TD_PINNED_ASSERT;
 }
 
@@ -115,7 +116,7 @@ dmar_map_ctx_entry(struct dmar_ctx *ctx, struct sf_buf **sfp)
 
 	dmar = CTX2DMAR(ctx);
 
-	ctxp = dmar_map_pgtbl(dmar->ctx_obj, 1 + PCI_RID2BUS(ctx->context.rid),
+	ctxp = iommu_map_pgtbl(dmar->ctx_obj, 1 + PCI_RID2BUS(ctx->context.rid),
 	    IOMMU_PGF_NOALLOC | IOMMU_PGF_WAITOK, sfp);
 	ctxp += ctx->context.rid & 0xff;
 	return (ctxp);
@@ -188,7 +189,7 @@ ctx_id_entry_init(struct dmar_ctx *ctx, dmar_ctx_entry_t *ctxp, bool move,
 		    ("ctx %p non-null pgtbl_obj", ctx));
 		ctx_root = NULL;
 	} else {
-		ctx_root = dmar_pgalloc(domain->pgtbl_obj, 0,
+		ctx_root = iommu_pgalloc(domain->pgtbl_obj, 0,
 		    IOMMU_PGF_NOALLOC);
 	}
 
@@ -274,7 +275,7 @@ domain_init_rmrr(struct dmar_domain *domain, device_t dev, int bus,
 			    "region (%jx, %jx) corrected\n",
 			    domain->iodom.iommu->unit, start, end);
 			}
-			entry->end += DMAR_PAGE_SIZE * 0x20;
+			entry->end += IOMMU_PAGE_SIZE * 0x20;
 		}
 		size = OFF_TO_IDX(entry->end - entry->start);
 		ma = malloc(sizeof(vm_page_t) * size, M_TEMP, M_WAITOK);
@@ -603,9 +604,9 @@ dmar_get_ctx_for_dev1(struct dmar_unit *dmar, device_t dev, uint16_t rid,
 			    func, rid, domain->domain, domain->mgaw,
 			    domain->agaw, id_mapped ? "id" : "re");
 			}
-			dmar_unmap_pgtbl(sf);
+			iommu_unmap_pgtbl(sf);
 		} else {
-			dmar_unmap_pgtbl(sf);
+			iommu_unmap_pgtbl(sf);
 			dmar_domain_destroy(domain1);
 			/* Nothing needs to be done to destroy ctx1. */
 			free(ctx1, M_DMAR_CTX);
@@ -705,7 +706,7 @@ dmar_move_ctx_to_domain(struct dmar_domain *domain, struct dmar_ctx *ctx)
 	ctx->context.domain = &domain->iodom;
 	dmar_ctx_link(ctx);
 	ctx_id_entry_init(ctx, ctxp, true, PCI_BUSMAX + 100);
-	dmar_unmap_pgtbl(sf);
+	iommu_unmap_pgtbl(sf);
 	error = dmar_flush_for_ctx_entry(dmar, true);
 	/* If flush failed, rolling back would not work as well. */
*/ printf("dmar%d rid %x domain %d->%d %s-mapped\n", @@ -789,7 +790,7 @@ dmar_free_ctx_locked(struct dmar_unit *dmar, struct dmar_ctx *ctx) if (ctx->refs > 1) { ctx->refs--; DMAR_UNLOCK(dmar); - dmar_unmap_pgtbl(sf); + iommu_unmap_pgtbl(sf); TD_PINNED_ASSERT; return; } @@ -811,7 +812,7 @@ dmar_free_ctx_locked(struct dmar_unit *dmar, struct dmar_ctx *ctx) else dmar_inv_iotlb_glob(dmar); } - dmar_unmap_pgtbl(sf); + iommu_unmap_pgtbl(sf); domain = CTX2DOM(ctx); dmar_ctx_unlink(ctx); free(ctx->context.tag, M_DMAR_CTX); diff --git a/sys/x86/iommu/intel_dmar.h b/sys/x86/iommu/intel_dmar.h index e20144094c80..8289478aed19 100644 --- a/sys/x86/iommu/intel_dmar.h +++ b/sys/x86/iommu/intel_dmar.h @@ -238,16 +238,11 @@ iommu_gaddr_t pglvl_page_size(int total_pglvl, int lvl); iommu_gaddr_t domain_page_size(struct dmar_domain *domain, int lvl); int calc_am(struct dmar_unit *unit, iommu_gaddr_t base, iommu_gaddr_t size, iommu_gaddr_t *isizep); -struct vm_page *dmar_pgalloc(vm_object_t obj, vm_pindex_t idx, int flags); -void dmar_pgfree(vm_object_t obj, vm_pindex_t idx, int flags); -void *dmar_map_pgtbl(vm_object_t obj, vm_pindex_t idx, int flags, - struct sf_buf **sf); -void dmar_unmap_pgtbl(struct sf_buf *sf); int dmar_load_root_entry_ptr(struct dmar_unit *unit); int dmar_inv_ctx_glob(struct dmar_unit *unit); int dmar_inv_iotlb_glob(struct dmar_unit *unit); int dmar_flush_write_bufs(struct dmar_unit *unit); -void dmar_flush_pte_to_ram(struct dmar_unit *unit, dmar_pte_t *dst); +void dmar_flush_pte_to_ram(struct dmar_unit *unit, iommu_pte_t *dst); void dmar_flush_ctx_to_ram(struct dmar_unit *unit, dmar_ctx_entry_t *dst); void dmar_flush_root_to_ram(struct dmar_unit *unit, dmar_root_entry_t *dst); int dmar_disable_protected_regions(struct dmar_unit *unit); @@ -315,9 +310,7 @@ void dmar_quirks_pre_use(struct iommu_unit *dmar); int dmar_init_irt(struct dmar_unit *unit); void dmar_fini_irt(struct dmar_unit *unit); -extern iommu_haddr_t dmar_high; extern int haw; -extern int dmar_tbl_pagecnt; extern int dmar_batch_coalesce; extern int dmar_rmrr_enable; diff --git a/sys/x86/iommu/intel_drv.c b/sys/x86/iommu/intel_drv.c index 7346162d1502..9a2fedf90b6a 100644 --- a/sys/x86/iommu/intel_drv.c +++ b/sys/x86/iommu/intel_drv.c @@ -67,6 +67,7 @@ #include #include #include +#include #include #ifdef DEV_APIC @@ -179,9 +180,9 @@ dmar_identify(driver_t *driver, device_t parent) return; haw = dmartbl->Width + 1; if ((1ULL << (haw + 1)) > BUS_SPACE_MAXADDR) - dmar_high = BUS_SPACE_MAXADDR; + iommu_high = BUS_SPACE_MAXADDR; else - dmar_high = 1ULL << (haw + 1); + iommu_high = 1ULL << (haw + 1); if (bootverbose) { printf("DMAR HAW=%d flags=<%b>\n", dmartbl->Width, (unsigned)dmartbl->Flags, @@ -490,7 +491,7 @@ dmar_attach(device_t dev) * address translation after the required invalidations are * done. 
	 */
-	dmar_pgalloc(unit->ctx_obj, 0, IOMMU_PGF_WAITOK | IOMMU_PGF_ZERO);
+	iommu_pgalloc(unit->ctx_obj, 0, IOMMU_PGF_WAITOK | IOMMU_PGF_ZERO);
 	DMAR_LOCK(unit);
 	error = dmar_load_root_entry_ptr(unit);
 	if (error != 0) {
diff --git a/sys/x86/iommu/intel_fault.c b/sys/x86/iommu/intel_fault.c
index e275304c8d51..59b482720cf1 100644
--- a/sys/x86/iommu/intel_fault.c
+++ b/sys/x86/iommu/intel_fault.c
@@ -54,6 +54,7 @@
 #include
 #include
 #include
+#include
 #include
 
 /*
diff --git a/sys/x86/iommu/intel_idpgtbl.c b/sys/x86/iommu/intel_idpgtbl.c
index 26f067e35278..82cac8bb2d39 100644
--- a/sys/x86/iommu/intel_idpgtbl.c
+++ b/sys/x86/iommu/intel_idpgtbl.c
@@ -64,6 +64,7 @@
 #include
 #include
 #include
+#include
 #include
 
 static int domain_unmap_buf_locked(struct dmar_domain *domain,
@@ -109,7 +110,7 @@ domain_idmap_nextlvl(struct idpgtbl *tbl, int lvl, vm_pindex_t idx,
     iommu_gaddr_t addr)
 {
 	vm_page_t m1;
-	dmar_pte_t *pte;
+	iommu_pte_t *pte;
 	struct sf_buf *sf;
 	iommu_gaddr_t f, pg_sz;
 	vm_pindex_t base;
@@ -118,28 +119,28 @@ domain_idmap_nextlvl(struct idpgtbl *tbl, int lvl, vm_pindex_t idx,
 	VM_OBJECT_ASSERT_LOCKED(tbl->pgtbl_obj);
 	if (addr >= tbl->maxaddr)
 		return;
-	(void)dmar_pgalloc(tbl->pgtbl_obj, idx, IOMMU_PGF_OBJL |
+	(void)iommu_pgalloc(tbl->pgtbl_obj, idx, IOMMU_PGF_OBJL |
 	    IOMMU_PGF_WAITOK | IOMMU_PGF_ZERO);
-	base = idx * DMAR_NPTEPG + 1; /* Index of the first child page of idx */
+	base = idx * IOMMU_NPTEPG + 1; /* Index of the first child page of idx */
 	pg_sz = pglvl_page_size(tbl->pglvl, lvl);
 	if (lvl != tbl->leaf) {
-		for (i = 0, f = addr; i < DMAR_NPTEPG; i++, f += pg_sz)
+		for (i = 0, f = addr; i < IOMMU_NPTEPG; i++, f += pg_sz)
 			domain_idmap_nextlvl(tbl, lvl + 1, base + i, f);
 	}
 	VM_OBJECT_WUNLOCK(tbl->pgtbl_obj);
-	pte = dmar_map_pgtbl(tbl->pgtbl_obj, idx, IOMMU_PGF_WAITOK, &sf);
+	pte = iommu_map_pgtbl(tbl->pgtbl_obj, idx, IOMMU_PGF_WAITOK, &sf);
 	if (lvl == tbl->leaf) {
-		for (i = 0, f = addr; i < DMAR_NPTEPG; i++, f += pg_sz) {
+		for (i = 0, f = addr; i < IOMMU_NPTEPG; i++, f += pg_sz) {
 			if (f >= tbl->maxaddr)
 				break;
 			pte[i].pte = (DMAR_PTE_ADDR_MASK & f) | DMAR_PTE_R |
 			    DMAR_PTE_W;
 		}
 	} else {
-		for (i = 0, f = addr; i < DMAR_NPTEPG; i++, f += pg_sz) {
+		for (i = 0, f = addr; i < IOMMU_NPTEPG; i++, f += pg_sz) {
 			if (f >= tbl->maxaddr)
 				break;
-			m1 = dmar_pgalloc(tbl->pgtbl_obj, base + i,
+			m1 = iommu_pgalloc(tbl->pgtbl_obj, base + i,
 			    IOMMU_PGF_NOALLOC);
 			KASSERT(m1 != NULL, ("lost page table page"));
 			pte[i].pte = (DMAR_PTE_ADDR_MASK &
@@ -147,7 +148,7 @@ domain_idmap_nextlvl(struct idpgtbl *tbl, int lvl, vm_pindex_t idx,
 		}
 	}
 	/* domain_get_idmap_pgtbl flushes CPU cache if needed. */
-	dmar_unmap_pgtbl(sf);
+	iommu_unmap_pgtbl(sf);
 	VM_OBJECT_WLOCK(tbl->pgtbl_obj);
 }
 
@@ -301,7 +302,7 @@ put_idmap_pgtbl(vm_object_t obj)
 		rmobj = tbl->pgtbl_obj;
 		if (rmobj->ref_count == 1) {
 			LIST_REMOVE(tbl, link);
-			atomic_subtract_int(&dmar_tbl_pagecnt,
+			atomic_subtract_int(&iommu_tbl_pagecnt,
 			    rmobj->resident_page_count);
 			vm_object_deallocate(rmobj);
 			free(tbl, M_DMAR_IDPGTBL);
@@ -323,9 +324,9 @@ static int
 domain_pgtbl_pte_off(struct dmar_domain *domain, iommu_gaddr_t base, int lvl)
 {
 
-	base >>= DMAR_PAGE_SHIFT + (domain->pglvl - lvl - 1) *
-	    DMAR_NPTEPGSHIFT;
-	return (base & DMAR_PTEMASK);
+	base >>= IOMMU_PAGE_SHIFT + (domain->pglvl - lvl - 1) *
+	    IOMMU_NPTEPGSHIFT;
+	return (base & IOMMU_PTEMASK);
 }
 
 /*
@@ -344,18 +345,18 @@ domain_pgtbl_get_pindex(struct dmar_domain *domain, iommu_gaddr_t base, int lvl)
 
 	for (pidx = idx = 0, i = 0; i < lvl; i++, pidx = idx) {
 		idx = domain_pgtbl_pte_off(domain, base, i) +
-		    pidx * DMAR_NPTEPG + 1;
+		    pidx * IOMMU_NPTEPG + 1;
 	}
 	return (idx);
 }
 
-static dmar_pte_t *
+static iommu_pte_t *
 domain_pgtbl_map_pte(struct dmar_domain *domain, iommu_gaddr_t base, int lvl,
     int flags, vm_pindex_t *idxp, struct sf_buf **sf)
 {
 	vm_page_t m;
 	struct sf_buf *sfp;
-	dmar_pte_t *pte, *ptep;
+	iommu_pte_t *pte, *ptep;
 	vm_pindex_t idx, idx1;
 
 	DMAR_DOMAIN_ASSERT_PGLOCKED(domain);
@@ -363,13 +364,13 @@ domain_pgtbl_map_pte(struct dmar_domain *domain, iommu_gaddr_t base, int lvl,
 
 	idx = domain_pgtbl_get_pindex(domain, base, lvl);
 	if (*sf != NULL && idx == *idxp) {
-		pte = (dmar_pte_t *)sf_buf_kva(*sf);
+		pte = (iommu_pte_t *)sf_buf_kva(*sf);
 	} else {
 		if (*sf != NULL)
-			dmar_unmap_pgtbl(*sf);
+			iommu_unmap_pgtbl(*sf);
 		*idxp = idx;
 retry:
-		pte = dmar_map_pgtbl(domain->pgtbl_obj, idx, flags, sf);
+		pte = iommu_map_pgtbl(domain->pgtbl_obj, idx, flags, sf);
 		if (pte == NULL) {
 			KASSERT(lvl > 0,
 			    ("lost root page table page %p", domain));
@@ -378,7 +379,7 @@ retry:
 			 * it and create a pte in the preceeding page level
 			 * to reference the allocated page table page.
 			 */
-			m = dmar_pgalloc(domain->pgtbl_obj, idx, flags |
+			m = iommu_pgalloc(domain->pgtbl_obj, idx, flags |
 			    IOMMU_PGF_ZERO);
 			if (m == NULL)
 				return (NULL);
@@ -399,7 +400,7 @@ retry:
 				KASSERT(m->pindex != 0,
 				    ("loosing root page %p", domain));
 				m->ref_count--;
-				dmar_pgfree(domain->pgtbl_obj, m->pindex,
+				iommu_pgfree(domain->pgtbl_obj, m->pindex,
 				    flags);
 				return (NULL);
 			}
@@ -408,7 +409,7 @@ retry:
 			dmar_flush_pte_to_ram(domain->dmar, ptep);
 			sf_buf_page(sfp)->ref_count += 1;
 			m->ref_count--;
-			dmar_unmap_pgtbl(sfp);
+			iommu_unmap_pgtbl(sfp);
 			/* Only executed once. */
 			goto retry;
 		}
@@ -421,7 +422,7 @@ static int
 domain_map_buf_locked(struct dmar_domain *domain, iommu_gaddr_t base,
     iommu_gaddr_t size, vm_page_t *ma, uint64_t pflags, int flags)
 {
-	dmar_pte_t *pte;
+	iommu_pte_t *pte;
 	struct sf_buf *sf;
 	iommu_gaddr_t pg_sz, base1;
 	vm_pindex_t pi, c, idx, run_sz;
@@ -438,7 +439,7 @@ domain_map_buf_locked(struct dmar_domain *domain, iommu_gaddr_t base,
 	    pi += run_sz) {
 		for (lvl = 0, c = 0, superpage = false;; lvl++) {
 			pg_sz = domain_page_size(domain, lvl);
-			run_sz = pg_sz >> DMAR_PAGE_SHIFT;
+			run_sz = pg_sz >> IOMMU_PAGE_SHIFT;
 			if (lvl == domain->pglvl - 1)
 				break;
 			/*
@@ -477,7 +478,7 @@ domain_map_buf_locked(struct dmar_domain *domain, iommu_gaddr_t base,
 			KASSERT((flags & IOMMU_PGF_WAITOK) == 0,
 			    ("failed waitable pte alloc %p", domain));
 			if (sf != NULL)
-				dmar_unmap_pgtbl(sf);
+				iommu_unmap_pgtbl(sf);
 			domain_unmap_buf_locked(domain, base1, base - base1,
 			    flags);
 			TD_PINNED_ASSERT;
@@ -489,7 +490,7 @@ domain_map_buf_locked(struct dmar_domain *domain, iommu_gaddr_t base,
 		sf_buf_page(sf)->ref_count += 1;
 	}
 	if (sf != NULL)
-		dmar_unmap_pgtbl(sf);
+		iommu_unmap_pgtbl(sf);
 	TD_PINNED_ASSERT;
 	return (0);
 }
@@ -513,10 +514,10 @@ domain_map_buf(struct iommu_domain *iodom, iommu_gaddr_t base,
 
 	KASSERT((domain->iodom.flags & IOMMU_DOMAIN_IDMAP) == 0,
 	    ("modifying idmap pagetable domain %p", domain));
-	KASSERT((base & DMAR_PAGE_MASK) == 0,
+	KASSERT((base & IOMMU_PAGE_MASK) == 0,
 	    ("non-aligned base %p %jx %jx", domain, (uintmax_t)base,
 	    (uintmax_t)size));
-	KASSERT((size & DMAR_PAGE_MASK) == 0,
+	KASSERT((size & IOMMU_PAGE_MASK) == 0,
 	    ("non-aligned size %p %jx %jx", domain, (uintmax_t)base,
 	    (uintmax_t)size));
 	KASSERT(size > 0, ("zero size %p %jx %jx", domain, (uintmax_t)base,
@@ -563,7 +564,7 @@ domain_map_buf(struct iommu_domain *iodom, iommu_gaddr_t base,
 }
 
 static void domain_unmap_clear_pte(struct dmar_domain *domain,
-    iommu_gaddr_t base, int lvl, int flags, dmar_pte_t *pte,
+    iommu_gaddr_t base, int lvl, int flags, iommu_pte_t *pte,
     struct sf_buf **sf, bool free_fs);
 
 static void
@@ -571,7 +572,7 @@ domain_free_pgtbl_pde(struct dmar_domain *domain, iommu_gaddr_t base,
     int lvl, int flags)
 {
 	struct sf_buf *sf;
-	dmar_pte_t *pde;
+	iommu_pte_t *pde;
 	vm_pindex_t idx;
 
 	sf = NULL;
@@ -581,7 +582,7 @@ domain_free_pgtbl_pde(struct dmar_domain *domain, iommu_gaddr_t base,
 
 static void
 domain_unmap_clear_pte(struct dmar_domain *domain, iommu_gaddr_t base, int lvl,
-    int flags, dmar_pte_t *pte, struct sf_buf **sf, bool free_sf)
+    int flags, iommu_pte_t *pte, struct sf_buf **sf, bool free_sf)
 {
 	vm_page_t m;
 
@@ -589,7 +590,7 @@ domain_unmap_clear_pte(struct dmar_domain *domain, iommu_gaddr_t base, int lvl,
 	dmar_flush_pte_to_ram(domain->dmar, pte);
 	m = sf_buf_page(*sf);
 	if (free_sf) {
-		dmar_unmap_pgtbl(*sf);
+		iommu_unmap_pgtbl(*sf);
 		*sf = NULL;
 	}
 	m->ref_count--;
@@ -601,7 +602,7 @@ domain_unmap_clear_pte(struct dmar_domain *domain, iommu_gaddr_t base, int lvl,
 	KASSERT(m->pindex != 0,
 	    ("lost reference (idx) on root pg domain %p base %jx lvl %d",
 	    domain, (uintmax_t)base, lvl));
-	dmar_pgfree(domain->pgtbl_obj, m->pindex, flags);
+	iommu_pgfree(domain->pgtbl_obj, m->pindex, flags);
 	domain_free_pgtbl_pde(domain, base, lvl - 1, flags);
 }
 
@@ -612,7 +613,7 @@ static int
 domain_unmap_buf_locked(struct dmar_domain *domain, iommu_gaddr_t base,
     iommu_gaddr_t size, int flags)
 {
-	dmar_pte_t *pte;
+	iommu_pte_t *pte;
 	struct sf_buf *sf;
 	vm_pindex_t idx;
 	iommu_gaddr_t pg_sz;
@@ -624,10 +625,10 @@ domain_unmap_buf_locked(struct dmar_domain *domain, iommu_gaddr_t base,
 	KASSERT((domain->iodom.flags & IOMMU_DOMAIN_IDMAP) == 0,
 	    ("modifying idmap pagetable domain %p", domain));
-	KASSERT((base & DMAR_PAGE_MASK) == 0,
+	KASSERT((base & IOMMU_PAGE_MASK) == 0,
 	    ("non-aligned base %p %jx %jx", domain, (uintmax_t)base,
 	    (uintmax_t)size));
-	KASSERT((size & DMAR_PAGE_MASK) == 0,
+	KASSERT((size & IOMMU_PAGE_MASK) == 0,
 	    ("non-aligned size %p %jx %jx", domain, (uintmax_t)base,
 	    (uintmax_t)size));
 	KASSERT(base < (1ULL << domain->agaw),
@@ -670,7 +671,7 @@ domain_unmap_buf_locked(struct dmar_domain *domain, iommu_gaddr_t base,
 		    (uintmax_t)base, (uintmax_t)size, (uintmax_t)pg_sz));
 	}
 	if (sf != NULL)
-		dmar_unmap_pgtbl(sf);
+		iommu_unmap_pgtbl(sf);
 	/*
 	 * See 11.1 Write Buffer Flushing for an explanation why RWBF
 	 * can be ignored there.
@@ -706,7 +707,7 @@ domain_alloc_pgtbl(struct dmar_domain *domain)
 	domain->pgtbl_obj = vm_pager_allocate(OBJT_PHYS, NULL,
 	    IDX_TO_OFF(pglvl_max_pages(domain->pglvl)), 0, 0, NULL);
 	DMAR_DOMAIN_PGLOCK(domain);
-	m = dmar_pgalloc(domain->pgtbl_obj, 0, IOMMU_PGF_WAITOK |
+	m = iommu_pgalloc(domain->pgtbl_obj, 0, IOMMU_PGF_WAITOK |
 	    IOMMU_PGF_ZERO | IOMMU_PGF_OBJL);
 	/* No implicit free of the top level page table page. */
 	m->ref_count = 1;
diff --git a/sys/x86/iommu/intel_intrmap.c b/sys/x86/iommu/intel_intrmap.c
index b2642197902a..02bf58dde299 100644
--- a/sys/x86/iommu/intel_intrmap.c
+++ b/sys/x86/iommu/intel_intrmap.c
@@ -55,6 +55,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -341,7 +342,7 @@ dmar_init_irt(struct dmar_unit *unit)
 	}
 	unit->irte_cnt = clp2(num_io_irqs);
 	unit->irt = kmem_alloc_contig(unit->irte_cnt * sizeof(dmar_irte_t),
-	    M_ZERO | M_WAITOK, 0, dmar_high, PAGE_SIZE, 0,
+	    M_ZERO | M_WAITOK, 0, iommu_high, PAGE_SIZE, 0,
 	    DMAR_IS_COHERENT(unit) ?
 	    VM_MEMATTR_DEFAULT : VM_MEMATTR_UNCACHEABLE);
 	if (unit->irt == NULL)
diff --git a/sys/x86/iommu/intel_qi.c b/sys/x86/iommu/intel_qi.c
index 37e2bf211e32..590cbac9bcbd 100644
--- a/sys/x86/iommu/intel_qi.c
+++ b/sys/x86/iommu/intel_qi.c
@@ -55,6 +55,7 @@
 #include
 #include
 #include
+#include
 #include
 
 static bool
@@ -501,7 +502,7 @@ dmar_init_qi(struct dmar_unit *unit)
 	/* The invalidation queue reads by DMARs are always coherent. */
 	unit->inv_queue = kmem_alloc_contig(unit->inv_queue_size, M_WAITOK |
-	    M_ZERO, 0, dmar_high, PAGE_SIZE, 0, VM_MEMATTR_DEFAULT);
+	    M_ZERO, 0, iommu_high, PAGE_SIZE, 0, VM_MEMATTR_DEFAULT);
 	unit->inv_waitd_seq_hw_phys = pmap_kextract(
 	    (vm_offset_t)&unit->inv_waitd_seq_hw);
diff --git a/sys/x86/iommu/intel_quirks.c b/sys/x86/iommu/intel_quirks.c
index 589764bd0fa9..486bd1bc9496 100644
--- a/sys/x86/iommu/intel_quirks.c
+++ b/sys/x86/iommu/intel_quirks.c
@@ -59,6 +59,7 @@
 #include
 #include
 #include
+#include
 #include
 
 typedef void (*dmar_quirk_cpu_fun)(struct dmar_unit *);
diff --git a/sys/x86/iommu/intel_reg.h b/sys/x86/iommu/intel_reg.h
index 26a18ff94890..0fafcce7accf 100644
--- a/sys/x86/iommu/intel_reg.h
+++ b/sys/x86/iommu/intel_reg.h
@@ -31,16 +31,6 @@
 #ifndef __X86_IOMMU_INTEL_REG_H
 #define __X86_IOMMU_INTEL_REG_H
 
-#define DMAR_PAGE_SIZE	PAGE_SIZE
-#define DMAR_PAGE_MASK	(DMAR_PAGE_SIZE - 1)
-#define DMAR_PAGE_SHIFT	PAGE_SHIFT
-#define DMAR_NPTEPG	(DMAR_PAGE_SIZE / sizeof(dmar_pte_t))
-#define DMAR_NPTEPGSHIFT 9
-#define DMAR_PTEMASK	(DMAR_NPTEPG - 1)
-
-#define IOMMU_PAGE_SIZE	DMAR_PAGE_SIZE
-#define IOMMU_PAGE_MASK	DMAR_PAGE_MASK
-
 typedef struct dmar_root_entry {
 	uint64_t r1;
 	uint64_t r2;
@@ -49,7 +39,7 @@ typedef struct dmar_root_entry {
 #define DMAR_ROOT_R1_CTP_MASK 0xfffffffffffff000 /* Mask for Context-Entry Table Pointer */
 
-#define DMAR_CTX_CNT (DMAR_PAGE_SIZE / sizeof(dmar_root_entry_t))
+#define DMAR_CTX_CNT (IOMMU_PAGE_SIZE / sizeof(dmar_root_entry_t))
 
 typedef struct dmar_ctx_entry {
 	uint64_t ctx1;
@@ -73,9 +63,6 @@ typedef struct dmar_ctx_entry {
 #define DMAR_CTX2_DID(x) ((x) << 8)	/* Domain Identifier */
 #define DMAR_CTX2_GET_DID(ctx2) (((ctx2) & DMAR_CTX2_DID_MASK) >> 8)
 
-typedef struct dmar_pte {
-	uint64_t pte;
-} dmar_pte_t;
 #define DMAR_PTE_R	1	/* Read */
 #define DMAR_PTE_W	(1 << 1) /* Write */
 #define DMAR_PTE_SP	(1 << 7) /* Super Page */
diff --git a/sys/x86/iommu/intel_utils.c b/sys/x86/iommu/intel_utils.c
index 19d4ec7d22bd..b0f2d167658a 100644
--- a/sys/x86/iommu/intel_utils.c
+++ b/sys/x86/iommu/intel_utils.c
@@ -64,6 +64,7 @@
 #include
 #include
 #include
+#include
 #include
 
 u_int
@@ -183,7 +184,7 @@ pglvl_max_pages(int pglvl)
 	int i;
 
 	for (res = 0, i = pglvl; i > 0; i--) {
-		res *= DMAR_NPTEPG;
+		res *= IOMMU_NPTEPG;
 		res++;
 	}
 	return (res);
@@ -214,12 +215,12 @@ pglvl_page_size(int total_pglvl, int lvl)
 {
 	int rlvl;
 	static const iommu_gaddr_t pg_sz[] = {
-		(iommu_gaddr_t)DMAR_PAGE_SIZE,
-		(iommu_gaddr_t)DMAR_PAGE_SIZE << DMAR_NPTEPGSHIFT,
-		(iommu_gaddr_t)DMAR_PAGE_SIZE << (2 * DMAR_NPTEPGSHIFT),
-		(iommu_gaddr_t)DMAR_PAGE_SIZE << (3 * DMAR_NPTEPGSHIFT),
-		(iommu_gaddr_t)DMAR_PAGE_SIZE << (4 * DMAR_NPTEPGSHIFT),
-		(iommu_gaddr_t)DMAR_PAGE_SIZE << (5 * DMAR_NPTEPGSHIFT)
+		(iommu_gaddr_t)IOMMU_PAGE_SIZE,
+		(iommu_gaddr_t)IOMMU_PAGE_SIZE << IOMMU_NPTEPGSHIFT,
+		(iommu_gaddr_t)IOMMU_PAGE_SIZE << (2 * IOMMU_NPTEPGSHIFT),
+		(iommu_gaddr_t)IOMMU_PAGE_SIZE << (3 * IOMMU_NPTEPGSHIFT),
+		(iommu_gaddr_t)IOMMU_PAGE_SIZE << (4 * IOMMU_NPTEPGSHIFT),
+		(iommu_gaddr_t)IOMMU_PAGE_SIZE << (5 * IOMMU_NPTEPGSHIFT),
 	};
 
 	KASSERT(lvl >= 0 && lvl < total_pglvl,
@@ -244,7 +245,7 @@ calc_am(struct dmar_unit *unit, iommu_gaddr_t base, iommu_gaddr_t size,
 	int am;
 
 	for (am = DMAR_CAP_MAMV(unit->hw_cap);; am--) {
-		isize = 1ULL << (am + DMAR_PAGE_SHIFT);
+		isize = 1ULL << (am + IOMMU_PAGE_SHIFT);
 		if ((base & (isize - 1)) == 0 && size >= isize)
 			break;
 		if (am == 0)
@@ -254,113 +255,9 @@ calc_am(struct dmar_unit *unit, iommu_gaddr_t base, iommu_gaddr_t size,
 	return (am);
 }
 
-iommu_haddr_t dmar_high;
 int haw;
 int dmar_tbl_pagecnt;
 
-vm_page_t
-dmar_pgalloc(vm_object_t obj, vm_pindex_t idx, int flags)
-{
-	vm_page_t m;
-	int zeroed, aflags;
-
-	zeroed = (flags & IOMMU_PGF_ZERO) != 0 ? VM_ALLOC_ZERO : 0;
-	aflags = zeroed | VM_ALLOC_NOBUSY | VM_ALLOC_SYSTEM | VM_ALLOC_NODUMP |
-	    ((flags & IOMMU_PGF_WAITOK) != 0 ? VM_ALLOC_WAITFAIL :
-	    VM_ALLOC_NOWAIT);
-	for (;;) {
-		if ((flags & IOMMU_PGF_OBJL) == 0)
-			VM_OBJECT_WLOCK(obj);
-		m = vm_page_lookup(obj, idx);
-		if ((flags & IOMMU_PGF_NOALLOC) != 0 || m != NULL) {
-			if ((flags & IOMMU_PGF_OBJL) == 0)
-				VM_OBJECT_WUNLOCK(obj);
-			break;
-		}
-		m = vm_page_alloc_contig(obj, idx, aflags, 1, 0,
-		    dmar_high, PAGE_SIZE, 0, VM_MEMATTR_DEFAULT);
-		if ((flags & IOMMU_PGF_OBJL) == 0)
-			VM_OBJECT_WUNLOCK(obj);
-		if (m != NULL) {
-			if (zeroed && (m->flags & PG_ZERO) == 0)
-				pmap_zero_page(m);
-			atomic_add_int(&dmar_tbl_pagecnt, 1);
-			break;
-		}
-		if ((flags & IOMMU_PGF_WAITOK) == 0)
-			break;
-	}
-	return (m);
-}
-
-void
-dmar_pgfree(vm_object_t obj, vm_pindex_t idx, int flags)
-{
-	vm_page_t m;
-
-	if ((flags & IOMMU_PGF_OBJL) == 0)
-		VM_OBJECT_WLOCK(obj);
-	m = vm_page_grab(obj, idx, VM_ALLOC_NOCREAT);
-	if (m != NULL) {
-		vm_page_free(m);
-		atomic_subtract_int(&dmar_tbl_pagecnt, 1);
-	}
-	if ((flags & IOMMU_PGF_OBJL) == 0)
-		VM_OBJECT_WUNLOCK(obj);
-}
-
-void *
-dmar_map_pgtbl(vm_object_t obj, vm_pindex_t idx, int flags,
-    struct sf_buf **sf)
-{
-	vm_page_t m;
-	bool allocated;
-
-	if ((flags & IOMMU_PGF_OBJL) == 0)
-		VM_OBJECT_WLOCK(obj);
-	m = vm_page_lookup(obj, idx);
-	if (m == NULL && (flags & IOMMU_PGF_ALLOC) != 0) {
-		m = dmar_pgalloc(obj, idx, flags | IOMMU_PGF_OBJL);
-		allocated = true;
-	} else
-		allocated = false;
-	if (m == NULL) {
-		if ((flags & IOMMU_PGF_OBJL) == 0)
-			VM_OBJECT_WUNLOCK(obj);
-		return (NULL);
-	}
-	/* Sleepable allocations cannot fail. */
-	if ((flags & IOMMU_PGF_WAITOK) != 0)
-		VM_OBJECT_WUNLOCK(obj);
-	sched_pin();
-	*sf = sf_buf_alloc(m, SFB_CPUPRIVATE | ((flags & IOMMU_PGF_WAITOK)
-	    == 0 ? SFB_NOWAIT : 0));
-	if (*sf == NULL) {
-		sched_unpin();
-		if (allocated) {
-			VM_OBJECT_ASSERT_WLOCKED(obj);
-			dmar_pgfree(obj, m->pindex, flags | IOMMU_PGF_OBJL);
-		}
-		if ((flags & IOMMU_PGF_OBJL) == 0)
-			VM_OBJECT_WUNLOCK(obj);
-		return (NULL);
-	}
-	if ((flags & (IOMMU_PGF_WAITOK | IOMMU_PGF_OBJL)) ==
-	    (IOMMU_PGF_WAITOK | IOMMU_PGF_OBJL))
-		VM_OBJECT_WLOCK(obj);
-	else if ((flags & (IOMMU_PGF_WAITOK | IOMMU_PGF_OBJL)) == 0)
-		VM_OBJECT_WUNLOCK(obj);
-	return ((void *)sf_buf_kva(*sf));
-}
-
-void
-dmar_unmap_pgtbl(struct sf_buf *sf)
-{
-
-	sf_buf_free(sf);
-	sched_unpin();
-}
-
 static void
 dmar_flush_transl_to_ram(struct dmar_unit *unit, void *dst, size_t sz)
 {
@@ -375,7 +272,7 @@ dmar_flush_transl_to_ram(struct dmar_unit *unit, void *dst, size_t sz)
 }
 
 void
-dmar_flush_pte_to_ram(struct dmar_unit *unit, dmar_pte_t *dst)
+dmar_flush_pte_to_ram(struct dmar_unit *unit, iommu_pte_t *dst)
 {
 
 	dmar_flush_transl_to_ram(unit, dst, sizeof(*dst));
@@ -687,11 +584,6 @@ dmar_timeout_sysctl(SYSCTL_HANDLER_ARGS)
 	return (error);
 }
 
-static SYSCTL_NODE(_hw_iommu, OID_AUTO, dmar, CTLFLAG_RD | CTLFLAG_MPSAFE,
-    NULL, "");
-SYSCTL_INT(_hw_iommu_dmar, OID_AUTO, tbl_pagecnt, CTLFLAG_RD,
-    &dmar_tbl_pagecnt, 0,
-    "Count of pages used for DMAR pagetables");
 SYSCTL_INT(_hw_iommu_dmar, OID_AUTO, batch_coalesce, CTLFLAG_RWTUN,
     &dmar_batch_coalesce, 0,
     "Number of qi batches between interrupt");
diff --git a/sys/x86/iommu/iommu_utils.c b/sys/x86/iommu/iommu_utils.c
new file mode 100644
index 000000000000..ffea1cc1a190
--- /dev/null
+++ b/sys/x86/iommu/iommu_utils.c
@@ -0,0 +1,164 @@
+/*-
+ * SPDX-License-Identifier: BSD-2-Clause
+ *
+ * Copyright (c) 2013, 2014 The FreeBSD Foundation
+ *
+ * This software was developed by Konstantin Belousov
+ * under sponsorship from the FreeBSD Foundation.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+vm_page_t
+iommu_pgalloc(vm_object_t obj, vm_pindex_t idx, int flags)
+{
+	vm_page_t m;
+	int zeroed, aflags;
+
+	zeroed = (flags & IOMMU_PGF_ZERO) != 0 ? VM_ALLOC_ZERO : 0;
+	aflags = zeroed | VM_ALLOC_NOBUSY | VM_ALLOC_SYSTEM | VM_ALLOC_NODUMP |
+	    ((flags & IOMMU_PGF_WAITOK) != 0 ? VM_ALLOC_WAITFAIL :
+	    VM_ALLOC_NOWAIT);
+	for (;;) {
+		if ((flags & IOMMU_PGF_OBJL) == 0)
+			VM_OBJECT_WLOCK(obj);
+		m = vm_page_lookup(obj, idx);
+		if ((flags & IOMMU_PGF_NOALLOC) != 0 || m != NULL) {
+			if ((flags & IOMMU_PGF_OBJL) == 0)
+				VM_OBJECT_WUNLOCK(obj);
+			break;
+		}
+		m = vm_page_alloc_contig(obj, idx, aflags, 1, 0,
+		    iommu_high, PAGE_SIZE, 0, VM_MEMATTR_DEFAULT);
+		if ((flags & IOMMU_PGF_OBJL) == 0)
+			VM_OBJECT_WUNLOCK(obj);
+		if (m != NULL) {
+			if (zeroed && (m->flags & PG_ZERO) == 0)
+				pmap_zero_page(m);
+			atomic_add_int(&iommu_tbl_pagecnt, 1);
+			break;
+		}
+		if ((flags & IOMMU_PGF_WAITOK) == 0)
+			break;
+	}
+	return (m);
+}
+
+void
+iommu_pgfree(vm_object_t obj, vm_pindex_t idx, int flags)
+{
+	vm_page_t m;
+
+	if ((flags & IOMMU_PGF_OBJL) == 0)
+		VM_OBJECT_WLOCK(obj);
+	m = vm_page_grab(obj, idx, VM_ALLOC_NOCREAT);
+	if (m != NULL) {
+		vm_page_free(m);
+		atomic_subtract_int(&iommu_tbl_pagecnt, 1);
+	}
+	if ((flags & IOMMU_PGF_OBJL) == 0)
+		VM_OBJECT_WUNLOCK(obj);
+}
+
+void *
+iommu_map_pgtbl(vm_object_t obj, vm_pindex_t idx, int flags,
+    struct sf_buf **sf)
+{
+	vm_page_t m;
+	bool allocated;
+
+	if ((flags & IOMMU_PGF_OBJL) == 0)
+		VM_OBJECT_WLOCK(obj);
+	m = vm_page_lookup(obj, idx);
+	if (m == NULL && (flags & IOMMU_PGF_ALLOC) != 0) {
+		m = iommu_pgalloc(obj, idx, flags | IOMMU_PGF_OBJL);
+		allocated = true;
+	} else
+		allocated = false;
+	if (m == NULL) {
+		if ((flags & IOMMU_PGF_OBJL) == 0)
+			VM_OBJECT_WUNLOCK(obj);
+		return (NULL);
+	}
+	/* Sleepable allocations cannot fail. */
+	if ((flags & IOMMU_PGF_WAITOK) != 0)
+		VM_OBJECT_WUNLOCK(obj);
+	sched_pin();
+	*sf = sf_buf_alloc(m, SFB_CPUPRIVATE | ((flags & IOMMU_PGF_WAITOK)
+	    == 0 ? SFB_NOWAIT : 0));
+	if (*sf == NULL) {
+		sched_unpin();
+		if (allocated) {
+			VM_OBJECT_ASSERT_WLOCKED(obj);
+			iommu_pgfree(obj, m->pindex, flags | IOMMU_PGF_OBJL);
+		}
+		if ((flags & IOMMU_PGF_OBJL) == 0)
+			VM_OBJECT_WUNLOCK(obj);
+		return (NULL);
+	}
+	if ((flags & (IOMMU_PGF_WAITOK | IOMMU_PGF_OBJL)) ==
+	    (IOMMU_PGF_WAITOK | IOMMU_PGF_OBJL))
+		VM_OBJECT_WLOCK(obj);
+	else if ((flags & (IOMMU_PGF_WAITOK | IOMMU_PGF_OBJL)) == 0)
+		VM_OBJECT_WUNLOCK(obj);
+	return ((void *)sf_buf_kva(*sf));
+}
+
+void
+iommu_unmap_pgtbl(struct sf_buf *sf)
+{
+
+	sf_buf_free(sf);
+	sched_unpin();
+}
+
+iommu_haddr_t iommu_high;
+int iommu_tbl_pagecnt;
+
+SYSCTL_NODE(_hw_iommu, OID_AUTO, dmar, CTLFLAG_RD | CTLFLAG_MPSAFE,
+    NULL, "");
+SYSCTL_INT(_hw_iommu_dmar, OID_AUTO, tbl_pagecnt, CTLFLAG_RD,
+    &iommu_tbl_pagecnt, 0,
+    "Count of pages used for DMAR pagetables");
diff --git a/sys/x86/iommu/x86_iommu.h b/sys/x86/iommu/x86_iommu.h
new file mode 100644
index 000000000000..3789586f1eaf
--- /dev/null
+++ b/sys/x86/iommu/x86_iommu.h
@@ -0,0 +1,62 @@
+/*-
+ * SPDX-License-Identifier: BSD-2-Clause
+ *
+ * Copyright (c) 2013-2015 The FreeBSD Foundation
+ *
+ * This software was developed by Konstantin Belousov
+ * under sponsorship from the FreeBSD Foundation.
+ *
*** 54 LINES SKIPPED ***
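
[Illustration appended for readers of the archive, not part of the commit.]
The rename is mechanical for callers: the page-table page helpers keep their
signatures and their map/modify/flush/unmap discipline, only the dmar_ prefix
becomes iommu_ now that the code lives in iommu_utils.c. A minimal sketch of
that calling pattern, using only identifiers visible in this patch; the
wrapper name example_update_root_entry() is hypothetical:

    /*
     * Sketch: update one root entry, following the pattern of
     * dmar_ensure_ctx_page() in the patch above.
     */
    static void
    example_update_root_entry(struct dmar_unit *dmar, int bus, vm_page_t ctxm)
    {
            dmar_root_entry_t *re;
            struct sf_buf *sf;

            /* Map page 0 of the context object; it must already exist. */
            re = iommu_map_pgtbl(dmar->ctx_obj, 0, IOMMU_PGF_NOALLOC, &sf);
            re += bus;
            /* Point the root entry at the context page, mark it present. */
            dmar_pte_store(&re->r1, DMAR_ROOT_R1_P |
                (DMAR_ROOT_R1_CTP_MASK & VM_PAGE_TO_PHYS(ctxm)));
            /* Write the entry back to RAM for non-coherent units. */
            dmar_flush_root_to_ram(dmar, re);
            /* Also drops the sched_pin() taken by iommu_map_pgtbl(). */
            iommu_unmap_pgtbl(sf);
    }

Note that iommu_map_pgtbl() pins the thread (sched_pin()) for the lifetime of
the sf_buf mapping, so every successful map must be paired with
iommu_unmap_pgtbl(), as every call site in the diff does.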