Date: Tue, 12 Apr 2022 17:37:48 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Ze Dupsys
Subject: Re: ZFS + FreeBSD XEN dom0 panic
References: <4da2302b-0745-ea1d-c868-5a8a5fc66b18@gmail.com>
 <48b74c39-abb3-0a3e-91a8-b5ab1e1223ce@gmail.com>
 <22643831-70d3-5a3e-f973-fb80957e80dc@gmail.com>
 <209c9b7c-4b4b-7fe3-6e73-d2a0dc651c19@gmail.com>
 <1286cb59-867e-e7d0-2bd3-45c33feae66a@gmail.com>
List-Archive: https://lists.freebsd.org/archives/freebsd-xen
Content-Type: multipart/mixed; boundary="AXxWJ3w5Q3v8ydwo"
MIME-Version: 1.0
--AXxWJ3w5Q3v8ydwo
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline

On Mon, Apr 11, 2022 at 05:37:27PM +0200, Roger Pau Monné wrote:
> On Mon, Apr 11, 2022 at 11:47:50AM +0300, Ze Dupsys wrote:
> > On 2022.04.08. 18:02, Roger Pau Monné wrote:
> > > On Fri, Apr 08, 2022 at 10:45:12AM +0300, Ze Dupsys wrote:
> > > > On 2022.04.05. 18:22, Roger Pau Monné wrote:
> > > > > .. Thanks, sorry for the late reply, somehow the message slip.
> > > > >
> > > > > I've been able to get the file:line for those, and the trace is kind
> > > > > of weird, I'm not sure I know what's going on TBH. It seems to me the
> > > > > backend instance got freed while being in the process of connecting.
> > > > >
> > > > > I've made some changes, that might mitigate this, but having not a
> > > > > clear understanding of what's going on makes this harder.
> > > > >
> > > > > I've pushed the changes to:
> > > > >
> > > > > http://xenbits.xen.org/gitweb/?p=people/royger/freebsd.git;a=shortlog;h=refs/heads/for-leak
> > > > >
> > > > > (This is on top of main branch).
> > > > >
> > > > > I'm also attaching the two patches on this email.
> > > > >
> > > > > Let me know if those make a difference to stabilize the system.
> > > >
> > > > Hi,
> > > >
> > > > Yes, it stabilizes the system, but there is still a memleak somewhere, i
> > > > think.
> > > >
> > > > System could run tests for approximately 41 hour, did not panic, but started
> > > > to OOM kill everything.
> > > >
> > > > I did not know how to git clone given commit, thus i just applied patches to
> > > > 13.0-RELEASE sources.
> > > >
> > > > Serial logs have nothing unusual, just that at some point OOM kill starts.
> > >
> > > Well, I think that's good^W better than before. Thanks again for all
> > > the testing.
> > >
> > > It might be helpful now to start dumping `vmstat -m` periodically
> > > while running the stress tests. As there are (hopefully) no more
> > > panics now vmstat might report us what subsystem is hogging the
> > > memory. It's possible it's blkback (again).
> > >
> > > Thanks, Roger.
> >
> > Yes, it certainly is better. Applied patch on my pre-production server, have
> > not had any panic since then, still testing though.
> >
> > On my stressed lab server, it's a bit different story. On occasion i see a
> > panic with this trace on serial (can not reliably repeat, but sometimes upon
> > starting dom id 1 and 2, sometimes mid-stress-test, dom id > 95).
> >
> > panic: pmap_growkernel: no memory to grow kernel
> > cpuid = 2
> > time = 1649485133
> > KDB: stack backtrace:
> > #0 0xffffffff80c57385 at kdb_backtrace+0x65
> > #1 0xffffffff80c09d61 at vpanic+0x181
> > #2 0xffffffff80c09bd3 at panic+0x43
> > #3 0xffffffff81073eed at pmap_growkernel+0x27d
> > #4 0xffffffff80f2d918 at vm_map_insert+0x248
> > #5 0xffffffff80f30079 at vm_map_find+0x549
> > #6 0xffffffff80f2bda6 at kmem_init+0x226
> > #7 0xffffffff80c731a1 at vmem_xalloc+0xcb1
> > #8 0xffffffff80c72a9b at vmem_xalloc+0x5ab
> > #9 0xffffffff80c724a6 at vmem_alloc+0x46
> > #10 0xffffffff80f2ac6b at kva_alloc+0x2b
> > #11 0xffffffff8107f0eb at pmap_mapdev_attr+0x27b
> > #12 0xffffffff810588ca at nexus_add_irq+0x65a
> > #13 0xffffffff81058710 at nexus_add_irq+0x4a0
> > #14 0xffffffff810585b9 at nexus_add_irq+0x349
> > #15 0xffffffff80c495c1 at bus_alloc_resource+0xa1
> > #16 0xffffffff8105e940 at xenmem_free+0x1a0
> > #17 0xffffffff80a7e0dd at xbd_instance_create+0x943d
> >
> > | sed -Ee 's/^#[0-9]* //' -e 's/ .*//' | xargs addr2line -e
> > /usr/lib/debug/boot/kernel/kernel.debug
> > /usr/src/sys/kern/subr_kdb.c:443
> > /usr/src/sys/kern/kern_shutdown.c:0
> > /usr/src/sys/kern/kern_shutdown.c:843
> > /usr/src/sys/amd64/amd64/pmap.c:0
> > /usr/src/sys/vm/vm_map.c:0
> > /usr/src/sys/vm/vm_map.c:0
> > /usr/src/sys/vm/vm_kern.c:712
> > /usr/src/sys/kern/subr_vmem.c:928
> > /usr/src/sys/kern/subr_vmem.c:0
> > /usr/src/sys/kern/subr_vmem.c:1350
> > /usr/src/sys/vm/vm_kern.c:150
> > /usr/src/sys/amd64/amd64/pmap.c:0
> > /usr/src/sys/x86/x86/nexus.c:0
> > /usr/src/sys/x86/x86/nexus.c:449
> > /usr/src/sys/x86/x86/nexus.c:412
> > /usr/src/sys/kern/subr_bus.c:4620
> > /usr/src/sys/x86/xen/xenpv.c:123
> > /usr/src/sys/dev/xen/blkback/blkback.c:3010
> >
> > With gdb backtrace i think i can get a better trace though:
> > #0 __curthread at /usr/src/sys/amd64/include/pcpu_aux.h:55
> > #1 doadump at /usr/src/sys/kern/kern_shutdown.c:399
> > #2 kern_reboot at /usr/src/sys/kern/kern_shutdown.c:486
> > #3 vpanic at /usr/src/sys/kern/kern_shutdown.c:919
> > #4 panic at /usr/src/sys/kern/kern_shutdown.c:843
> > #5 pmap_growkernel at /usr/src/sys/amd64/amd64/pmap.c:208
> > #6 vm_map_insert at /usr/src/sys/vm/vm_map.c:1752
> > #7 vm_map_find at /usr/src/sys/vm/vm_map.c:2259
> > #8 kva_import at /usr/src/sys/vm/vm_kern.c:712
> > #9 vmem_import at /usr/src/sys/kern/subr_vmem.c:928
> > #10 vmem_try_fetch at /usr/src/sys/kern/subr_vmem.c:1049
> > #11 vmem_xalloc at /usr/src/sys/kern/subr_vmem.c:1449
> > #12 vmem_alloc at /usr/src/sys/kern/subr_vmem.c:1350
> > #13 kva_alloc at /usr/src/sys/vm/vm_kern.c:150
> > #14 pmap_mapdev_internal at /usr/src/sys/amd64/amd64/pmap.c:8974
> > #15 pmap_mapdev_attr at /usr/src/sys/amd64/amd64/pmap.c:8990
> > #16 nexus_map_resource at /usr/src/sys/x86/x86/nexus.c:523
> > #17 nexus_activate_resource at /usr/src/sys/x86/x86/nexus.c:448
> > #18 nexus_alloc_resource at /usr/src/sys/x86/x86/nexus.c:412
> > #19 BUS_ALLOC_RESOURCE at ./bus_if.h:321
> > #20 bus_alloc_resource at /usr/src/sys/kern/subr_bus.c:4617
> > #21 xenpv_alloc_physmem at /usr/src/sys/x86/xen/xenpv.c:121
> > #22 xbb_alloc_communication_mem at
> > /usr/src/sys/dev/xen/blkback/blkback.c:3010
> > #23 xbb_connect at /usr/src/sys/dev/xen/blkback/blkback.c:3336
> > #24 xenbusb_back_otherend_changed at
> > /usr/src/sys/xen/xenbus/xenbusb_back.c:228
> > #25 xenwatch_thread at /usr/src/sys/dev/xen/xenstore/xenstore.c:1003
> > #26 in fork_exit at /usr/src/sys/kern/kern_fork.c:1069
> > #27
> >
> > There is some sort of mismatch in info, because panic message printed
> > "panic: pmap_growkernel: no memory to grow kernel", but gdb backtrace in
> > #5 0xffffffff81073eed in pmap_growkernel at
> > /usr/src/sys/amd64/amd64/pmap.c:208
> > leads to lines:
> > switch (pmap->pm_type) {
> > ..
> > panic("pmap_valid_bit: invalid pm_type %d", pmap->pm_type)
> >
> > So either trace is off the mark or message in serial logs. If this was only
> > memleak related, then it should not happen when dom id 1 is started, i
> > suppose.
>
> That's weird, I would rather trust the printed panic message rather
> than the symbol resolution. Seems to be a kind of memory exhaustion,
> as the kernel is failing to allocate a page for use in the kernel page
> table.
>
> I will try to see what can be done here.

I have a patch to disable the bounce buffering done in blkback
(attached). While I think it's not directly related to the panic you are
hitting, we should have disabled that mode a long time ago. It should
reduce the memory consumption of blkback greatly, and so might have the
side effect of helping with the pmap_growkernel issue you are seeing. On
my test box it took the memory usage of a single blkback instance from
~100M down to ~300K.

It should be applied on top of the other two patches.
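While re-running the stress tests with this applied, it would still be
useful to capture the periodic `vmstat -m` output discussed above.
Something along these lines is enough (a rough, untested sketch; the
one-minute interval and the log path are arbitrary choices):

#!/bin/sh
# Append a timestamped snapshot of the kernel malloc-type statistics
# every minute, so whatever keeps growing during the stress run can be
# picked out afterwards.  Interval and log location are arbitrary.
LOG=/var/log/vmstat-m.log
while :; do
        date >> "$LOG"
        vmstat -m >> "$LOG"
        sleep 60
done

If nothing obvious shows up there, capturing `vmstat -z` in the same
loop is worth doing as well, since memory taken straight from UMA zones
is not accounted under the malloc types.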
Regards, Roger.

--AXxWJ3w5Q3v8ydwo
Content-Type: text/plain; charset=utf-8
Content-Disposition: attachment; filename="0001-xen-blkback-remove-bounce-buffering-mode.patch"

From 449ef76695cf5ec5cc3514e6bd653d0b1dff3dde Mon Sep 17 00:00:00 2001
From: Roger Pau Monné
Date: Tue, 12 Apr 2022 16:17:09 +0200
Subject: [PATCH] xen/blkback: remove bounce buffering mode

Remove bounce buffering code for blkback and only attach if Xen creates
IOMMU entries for grant mapped pages.

Such bounce buffering consumed a non trivial amount of memory and CPU
resources to do the memory copy, when it's been a long time since Xen
has been creating IOMMU entries for grant maps.

Refuse to attach blkback if Xen doesn't advertise that IOMMU entries
are created for grant maps.

Sponsored by: Citrix Systems R&D
---
 sys/dev/xen/blkback/blkback.c | 181 ++--------------------------------
 1 file changed, 8 insertions(+), 173 deletions(-)

diff --git a/sys/dev/xen/blkback/blkback.c b/sys/dev/xen/blkback/blkback.c
index 15e4bbe78fc0..97939d32ffce 100644
--- a/sys/dev/xen/blkback/blkback.c
+++ b/sys/dev/xen/blkback/blkback.c
@@ -80,6 +80,7 @@ __FBSDID("$FreeBSD$");
 #include
 #include
+#include
 #include
 #include
@@ -101,27 +102,6 @@ __FBSDID("$FreeBSD$");
 #define XBB_MAX_REQUESTS					\
 	__CONST_RING_SIZE(blkif, PAGE_SIZE * XBB_MAX_RING_PAGES)
 
-/**
- * \brief Define to force all I/O to be performed on memory owned by the
- *        backend device, with a copy-in/out to the remote domain's memory.
- *
- * \note This option is currently required when this driver's domain is
- *       operating in HVM mode on a system using an IOMMU.
- *
- * This driver uses Xen's grant table API to gain access to the memory of
- * the remote domains it serves.  When our domain is operating in PV mode,
- * the grant table mechanism directly updates our domain's page table entries
- * to point to the physical pages of the remote domain.  This scheme guarantees
- * that blkback and the backing devices it uses can safely perform DMA
- * operations to satisfy requests.  In HVM mode, Xen may use a HW IOMMU to
- * insure that our domain cannot DMA to pages owned by another domain.  As
- * of Xen 4.0, IOMMU mappings for HVM guests are not updated via the grant
- * table API.  For this reason, in HVM mode, we must bounce all requests into
- * memory that is mapped into our domain at domain startup and thus has
- * valid IOMMU mappings.
- */
-#define XBB_USE_BOUNCE_BUFFERS
-
 /**
  * \brief Define to enable rudimentary request logging to the console.
  */
@@ -257,14 +237,6 @@ struct xbb_xen_reqlist {
 	 */
 	uint64_t	 gnt_base;
 
-#ifdef XBB_USE_BOUNCE_BUFFERS
-	/**
-	 * Pre-allocated domain local memory used to proxy remote
-	 * domain memory during I/O operations.
-	 */
-	uint8_t		*bounce;
-#endif
-
 	/**
 	 * Array of grant handles (one per page) used to map this request.
 	 */
@@ -500,30 +472,6 @@ struct xbb_file_data {
 	 * so we only need one of these.
 	 */
 	struct iovec	xiovecs[XBB_MAX_SEGMENTS_PER_REQLIST];
-#ifdef XBB_USE_BOUNCE_BUFFERS
-
-	/**
-	 * \brief Array of io vectors used to handle bouncing of file reads.
-	 *
-	 * Vnode operations are free to modify uio data during their
-	 * exectuion.  In the case of a read with bounce buffering active,
-	 * we need some of the data from the original uio in order to
-	 * bounce-out the read data.  This array serves as the temporary
-	 * storage for this saved data.
-	 */
-	struct iovec	saved_xiovecs[XBB_MAX_SEGMENTS_PER_REQLIST];
-
-	/**
-	 * \brief Array of memoized bounce buffer kva offsets used
-	 *        in the file based backend.
-	 *
-	 * Due to the way that the mapping of the memory backing an
-	 * I/O transaction is handled by Xen, a second pass through
-	 * the request sg elements is unavoidable. We memoize the computed
-	 * bounce address here to reduce the cost of the second walk.
-	 */
-	void		*xiovecs_vaddr[XBB_MAX_SEGMENTS_PER_REQLIST];
-#endif /* XBB_USE_BOUNCE_BUFFERS */
 };
 
 /**
@@ -891,25 +839,6 @@ xbb_reqlist_vaddr(struct xbb_xen_reqlist *reqlist, int pagenr, int sector)
 	return (reqlist->kva + (PAGE_SIZE * pagenr) + (sector << 9));
 }
 
-#ifdef XBB_USE_BOUNCE_BUFFERS
-/**
- * Given a page index and 512b sector offset within that page,
- * calculate an offset into a request's local bounce memory region.
- *
- * \param reqlist The request structure whose bounce region will be accessed.
- * \param pagenr  The page index used to compute the bounce offset.
- * \param sector  The 512b sector index used to compute the page relative
- *                bounce offset.
- *
- * \return  The computed global bounce buffer address.
- */
-static inline uint8_t *
-xbb_reqlist_bounce_addr(struct xbb_xen_reqlist *reqlist, int pagenr, int sector)
-{
-	return (reqlist->bounce + (PAGE_SIZE * pagenr) + (sector << 9));
-}
-#endif
-
 /**
  * Given a page number and 512b sector offset within that page,
  * calculate an offset into the request's memory region that the
@@ -929,11 +858,7 @@ xbb_reqlist_bounce_addr(struct xbb_xen_reqlist *reqlist, int pagenr, int sector)
 static inline uint8_t *
 xbb_reqlist_ioaddr(struct xbb_xen_reqlist *reqlist, int pagenr, int sector)
 {
-#ifdef XBB_USE_BOUNCE_BUFFERS
-	return (xbb_reqlist_bounce_addr(reqlist, pagenr, sector));
-#else
 	return (xbb_reqlist_vaddr(reqlist, pagenr, sector));
-#endif
 }
 
 /**
@@ -1508,17 +1433,6 @@ xbb_bio_done(struct bio *bio)
 		}
 	}
 
-#ifdef XBB_USE_BOUNCE_BUFFERS
-	if (bio->bio_cmd == BIO_READ) {
-		vm_offset_t kva_offset;
-
-		kva_offset = (vm_offset_t)bio->bio_data -
-			     (vm_offset_t)reqlist->bounce;
-		memcpy((uint8_t *)reqlist->kva + kva_offset,
-		       bio->bio_data, bio->bio_bcount);
-	}
-#endif /* XBB_USE_BOUNCE_BUFFERS */
-
 	/*
 	 * Decrement the pending count for the request list.  When we're
 	 * done with the requests, send status back for all of them.
@@ -2180,17 +2094,6 @@ xbb_dispatch_dev(struct xbb_softc *xbb, struct xbb_xen_reqlist *reqlist,
 
 	for (bio_idx = 0; bio_idx < nbio; bio_idx++) {
-#ifdef XBB_USE_BOUNCE_BUFFERS
-		vm_offset_t kva_offset;
-
-		kva_offset = (vm_offset_t)bios[bio_idx]->bio_data -
-			     (vm_offset_t)reqlist->bounce;
-		if (operation == BIO_WRITE) {
-			memcpy(bios[bio_idx]->bio_data,
-			       (uint8_t *)reqlist->kva + kva_offset,
-			       bios[bio_idx]->bio_bcount);
-		}
-#endif
 		if (operation == BIO_READ) {
 			SDT_PROBE3(xbb, kernel, xbb_dispatch_dev, read,
 				   device_get_unit(xbb->dev),
@@ -2241,10 +2144,6 @@ xbb_dispatch_file(struct xbb_softc *xbb, struct xbb_xen_reqlist *reqlist,
 	struct uio      xuio;
 	struct xbb_sg  *xbb_sg;
 	struct iovec   *xiovec;
-#ifdef XBB_USE_BOUNCE_BUFFERS
-	void          **p_vaddr;
-	int             saved_uio_iovcnt;
-#endif /* XBB_USE_BOUNCE_BUFFERS */
 	int             error;
 
 	file_data = &xbb->backend.file;
@@ -2300,18 +2199,6 @@ xbb_dispatch_file(struct xbb_softc *xbb, struct xbb_xen_reqlist *reqlist,
 			xiovec = &file_data->xiovecs[xuio.uio_iovcnt];
 			xiovec->iov_base = xbb_reqlist_ioaddr(reqlist, seg_idx,
 			    xbb_sg->first_sect);
-#ifdef XBB_USE_BOUNCE_BUFFERS
-			/*
-			 * Store the address of the incoming
-			 * buffer at this particular offset
-			 * as well, so we can do the copy
-			 * later without having to do more
-			 * work to recalculate this address.
-			 */
-			p_vaddr = &file_data->xiovecs_vaddr[xuio.uio_iovcnt];
-			*p_vaddr = xbb_reqlist_vaddr(reqlist, seg_idx,
-			    xbb_sg->first_sect);
-#endif /* XBB_USE_BOUNCE_BUFFERS */
 			xiovec->iov_len = 0;
 			xuio.uio_iovcnt++;
 		}
@@ -2331,28 +2218,6 @@ xbb_dispatch_file(struct xbb_softc *xbb, struct xbb_xen_reqlist *reqlist,
 
 	xuio.uio_td = curthread;
 
-#ifdef XBB_USE_BOUNCE_BUFFERS
-	saved_uio_iovcnt = xuio.uio_iovcnt;
-
-	if (operation == BIO_WRITE) {
-		/* Copy the write data to the local buffer. */
-		for (seg_idx = 0, p_vaddr = file_data->xiovecs_vaddr,
-		     xiovec = xuio.uio_iov; seg_idx < xuio.uio_iovcnt;
-		     seg_idx++, xiovec++, p_vaddr++) {
-			memcpy(xiovec->iov_base, *p_vaddr, xiovec->iov_len);
-		}
-	} else {
-		/*
-		 * We only need to save off the iovecs in the case of a
-		 * read, because the copy for the read happens after the
-		 * VOP_READ().  (The uio will get modified in that call
-		 * sequence.)
-		 */
-		memcpy(file_data->saved_xiovecs, xuio.uio_iov,
-		       xuio.uio_iovcnt * sizeof(xuio.uio_iov[0]));
-	}
-#endif /* XBB_USE_BOUNCE_BUFFERS */
-
 	switch (operation) {
 	case BIO_READ:
@@ -2429,25 +2294,6 @@ xbb_dispatch_file(struct xbb_softc *xbb, struct xbb_xen_reqlist *reqlist,
 		/* NOTREACHED */
 	}
 
-#ifdef XBB_USE_BOUNCE_BUFFERS
-	/* We only need to copy here for read operations */
-	if (operation == BIO_READ) {
-		for (seg_idx = 0, p_vaddr = file_data->xiovecs_vaddr,
-		     xiovec = file_data->saved_xiovecs;
-		     seg_idx < saved_uio_iovcnt; seg_idx++,
-		     xiovec++, p_vaddr++) {
-			/*
-			 * Note that we have to use the copy of the
-			 * io vector we made above.  uiomove() modifies
-			 * the uio and its referenced vector as uiomove
-			 * performs the copy, so we can't rely on any
-			 * state from the original uio.
-			 */
-			memcpy(*p_vaddr, xiovec->iov_base, xiovec->iov_len);
-		}
-	}
-#endif /* XBB_USE_BOUNCE_BUFFERS */
-
 bailout_send_response:
 
 	if (error != 0)
@@ -2826,12 +2672,6 @@ xbb_disconnect(struct xbb_softc *xbb)
 	/* There is one request list for ever allocated request. */
 	for (i = 0, reqlist = xbb->request_lists;
 	     i < xbb->max_requests; i++, reqlist++){
-#ifdef XBB_USE_BOUNCE_BUFFERS
-		if (reqlist->bounce != NULL) {
-			free(reqlist->bounce, M_XENBLOCKBACK);
-			reqlist->bounce = NULL;
-		}
-#endif
 		if (reqlist->gnt_handles != NULL) {
 			free(reqlist->gnt_handles, M_XENBLOCKBACK);
 			reqlist->gnt_handles = NULL;
@@ -3210,17 +3050,6 @@ xbb_alloc_request_lists(struct xbb_softc *xbb)
 
 		reqlist->xbb = xbb;
 
-#ifdef XBB_USE_BOUNCE_BUFFERS
-		reqlist->bounce = malloc(xbb->max_reqlist_size,
-					 M_XENBLOCKBACK, M_NOWAIT);
-		if (reqlist->bounce == NULL) {
-			xenbus_dev_fatal(xbb->dev, ENOMEM,
-					 "Unable to allocate request "
-					 "bounce buffers");
-			return (ENOMEM);
-		}
-#endif /* XBB_USE_BOUNCE_BUFFERS */
-
 		reqlist->gnt_handles = malloc(xbb->max_reqlist_segments *
 					      sizeof(*reqlist->gnt_handles),
 					      M_XENBLOCKBACK, M_NOWAIT|M_ZERO);
@@ -3489,8 +3318,14 @@ xbb_attach_failed(struct xbb_softc *xbb, int err, const char *fmt, ...)
 static int
 xbb_probe(device_t dev)
 {
+	uint32_t regs[4];
+
+	KASSERT(xen_cpuid_base != 0, ("Invalid base Xen CPUID leaf"));
+	cpuid_count(xen_cpuid_base + 4, 0, regs);
 
-	if (!strcmp(xenbus_get_type(dev), "vbd")) {
+	/* Only attach if Xen creates IOMMU entries for grant mapped pages. */
+	if ((regs[0] & XEN_HVM_CPUID_IOMMU_MAPPINGS) &&
+	    !strcmp(xenbus_get_type(dev), "vbd")) {
 		device_set_desc(dev, "Backend Virtual Block Device");
 		device_quiet(dev);
 		return (0);
-- 
2.35.1

--AXxWJ3w5Q3v8ydwo--