Date: Thu, 10 Mar 2022 16:40:54 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
To: Ze Dupsys
Cc: freebsd-xen@freebsd.org
Subject: Re: ZFS + FreeBSD XEN dom0 panic
References: <202203011540.221FeR4f028103@nfbcal.org> <3d4691a7-c4b3-1c91-9eaa-7af071561bb6@gmail.com> <061700f9-86aa-b2ae-07a4-27d4c3d96966@gmail.com>
In-Reply-To: <061700f9-86aa-b2ae-07a4-27d4c3d96966@gmail.com>
List-Archive: https://lists.freebsd.org/archives/freebsd-xen
Content-Type: text/plain; charset=utf-8

On Thu, Mar 10, 2022 at 11:34:26AM +0200, Ze Dupsys wrote:
> On 2022.03.09. 10:42, Roger Pau Monné wrote:
> > On Sun, Mar 06, 2022 at 02:41:17PM +0200, Ze Dupsys wrote:
> > > If I execute steps 1 and 4, it seems that the machine does not crash; tested 8
> > > hours with an 8GB RAM Dom0 and 4 hours with a 2GB RAM Dom0. So creating a lot
> > > of ZVOLs and monitoring them does not seem to be the cause.
> > >
> > > Yesterday was a happy and a sad day at the same time. The happy thing is
> > > that I think I have understood one panic reason and located one problem
> > > somewhat better, but at the same time I think I have located 2 bugs that
> > > cause one specific panic, while 2 other panic reasons are due to different
> > > circumstances.
> > >
> > > What follows is my pure speculation about two things, somewhat related; I do
> > > not know XEN or kernel internals, thus take this with a grain of salt.
> > >
> > > Knowing that this seemed to be a RAM problem, and thinking about how the ZFS
> > > sysctl tuning values differ, I started to look at "sysctl -a" diffs for a Dom0
> > > with 2GB RAM and one with 8GB RAM. By pure luck, because I was comparing a Dom0
> > > that had been running longer against one just recently restarted, I noticed
> > > there are diff lines like so:
> > > ..
> > > +dev.xbbd.202.%driver: xbbd
> > > +dev.xbbd.202.%location:
> > > +dev.xbbd.202.%parent: xenbusb_back0
> > > +dev.xbbd.202.%pnpinfo:
> > > +dev.xbbd.202.xenbus_connection_state: Closed
> > > +dev.xbbd.202.xenbus_dev_type: vbd
> > > +dev.xbbd.202.xenbus_peer_domid: 309
> > > +dev.xbbd.202.xenstore_path: backend/vbd/309/51728
> > > +dev.xbbd.202.xenstore_peer_path: /local/domain/309/device/vbd/51728
> > > +dev.xbbd.203.%desc: Backend Virtual Block Device
> > > +dev.xbbd.203.%driver: xbbd
> > > +dev.xbbd.203.%location:
> > > +dev.xbbd.203.%parent: xenbusb_back0
> > > +dev.xbbd.203.%pnpinfo:
> > > +dev.xbbd.203.xenbus_connection_state: Closed
> > > ..
> > > although at the time there were actually only 2 DomUs running at most.
> > > This seems to be the reason why this felt like a memory leak: after a long
> > > time of starting/stopping VMs, sysctl most probably hit some sort of RAM
> > > limit (which most probably is auto-calculated from total RAM somehow) and
> > > the system panics.
> > Those dev.xbbd.XXX sysctls reference a specific blkback instance, and
> > should be gone once the guest is shut down. Do you mean that you have a
> > range of dev.xbbd.[0-203] entries in your sysctl?
> Yes, I have that range of those entries, even if all DomUs are destroyed.
> Snippet from logs - yesterday started, today crashed:
> cat 2022_03_10__1646887233_sysctl__001.log | grep xbbd | sed -Ee
> 's/(xbbd\.[0-9]*)\..*/\1/g' | sort | uniq
> dev.xbbd.%parent:
> dev.xbbd.0
> dev.xbbd.1
> dev.xbbd.10
> dev.xbbd.100
> dev.xbbd.101
> dev.xbbd.102
> dev.xbbd.103
> dev.xbbd.104
> dev.xbbd.105
> dev.xbbd.106
> dev.xbbd.107
> dev.xbbd.108
> dev.xbbd.109
> dev.xbbd.11
> ..
> dev.xbbd.52
> dev.xbbd.520
> dev.xbbd.521
> dev.xbbd.522
> dev.xbbd.523
> dev.xbbd.524
> ..
> dev.xbbd.99
> irq804: xbbd520:133 @cpu0(domain0): 436527
> irq805: xbbd521:135 @cpu0(domain0): 4777
> irq806: xbbd522:137 @cpu0(domain0): 16589254
> irq807: xbbd523:139 @cpu0(domain0): 6412
> irq808: xbbd524:141 @cpu0(domain0): 103083
>
> And those entries keep growing.

That's not expected. Can you paste the output of `xenstore-ls -fp` when
you get those stale entries?

Also, what does `xl list` show?

Also, can you check the log files at '/var/log/xen/xl-*.log' (where *
is the domain name) to try to gather why the backend is not properly
destroyed?

There's clearly something that prevents blkback from shutting down and
releasing its resources.

> Yesterday, I tried to follow Brian's email about a working setup with
> FreeBSD 12.1. With the given scripts and an 8GB RAM Dom0 it still crashed;
> it took longer, but in the morning I saw that the machine had rebooted and
> the serial logs contained a panic message. It seems that
> "kern.panic_reboot_wait_time = -1" did not disable rebooting either. In the
> monitoring logs I see that approx. 3 minutes after the crash, the testing
> machine was up and running again (rebooted).
>
> > Note that in dev.xbbd.XXX XXX matches the domain ID of the guest
> > that's using the backend.
> On the setups I have tested this does not seem to be the case. If each VM
> has more than 1 HDD, I have as many instances of xbbd sysctl variables as
> the total count of HDDs attached to VMs. But at some point, after rebooting
> machines or something, this number starts to grow on each create. For me it
> seems that the dom ID is inside the sysctl, like this
> "dev.xbbd.5.xenbus_peer_domid", and if I grep those values, they show
> domids for nonexistent VMs.

Right, my bad. Those numbers are assigned by FreeBSD to identify
multiple instances of the same driver.
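To make the stale instances easier to spot, something along these lines
might help (a rough, untested sketch; it only uses the sysctl names visible
in your output above and cross-checks them against `xl list`):

# print every xbbd unit whose peer domid no longer shows up in `xl list`
for oid in $(sysctl -N dev.xbbd | grep 'xenbus_peer_domid$'); do
    unit=${oid#dev.xbbd.}; unit=${unit%.xenbus_peer_domid}
    domid=$(sysctl -n "$oid")
    # skip the `xl list` header line, then compare against the ID column
    if ! xl list | awk 'NR > 1 {print $2}' | grep -qx "$domid"; then
        echo "stale: xbbd unit $unit still references domid $domid"
    fi
done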
> I do not know how sysctls are created, but is there some test suite that
> creates an awful lot of sysctl variables and tries to crash the system that
> way? I suppose there is no way to limit the sysctl variable count, or any
> protection mechanism for such cases. Then I could see if the panic message
> is the same. Or a test that allocates a sysctl variable with an insanely
> large value, or keeps appending to it.

It's likely the sysctls, plus associated resources such as the blkback
bounce buffers, that are not freed. The underlying problem is that blkback
instances are somehow never properly destroyed.
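If you want to watch that build up while your loops are running, a crude
watcher along these lines could help (untested sketch; the malloc type names
that `vmstat -m` prints for the Xen backends may differ, so treat the grep
pattern as a guess):

# log how many dev.xbbd sysctl nodes exist and any Xen-related kernel
# malloc statistics every 5 minutes while the stress test runs
while :; do
    date
    sysctl -N dev.xbbd | wc -l
    vmstat -m | grep -i xen
    sleep 300
done >> /tmp/xen-leak-watch.log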
> > > Then I caught another sysctl variable that is growing due to XEN,
> > > "kern.msgbuf: Contents of kernel message buffer". I do not know how this
> > > variable grows or by which component it is managed, but in the VM
> > > start/stop case it grows and contains lines with a pattern like so:
> > > ..
> > > xnb(xnb_rxpkt2rsp:2059): Got error -1 for hypervisor gnttab_copy status
> > > xnb(xnb_ring2pkt:1526): Unknown extra info type 255.  Discarding packet
> > > xnb(xnb_dump_txreq:299): netif_tx_request index =0
> > > xnb(xnb_dump_txreq:300): netif_tx_request.gref  =0
> > > xnb(xnb_dump_txreq:301): netif_tx_request.offset=0
> > > xnb(xnb_dump_txreq:302): netif_tx_request.flags =8
> > > xnb(xnb_dump_txreq:303): netif_tx_request.id    =69
> > > xnb(xnb_dump_txreq:304): netif_tx_request.size  =1000
> > > xnb(xnb_dump_txreq:299): netif_tx_request index =1
> > > xnb(xnb_dump_txreq:300): netif_tx_request.gref  =255
> > > xnb(xnb_dump_txreq:301): netif_tx_request.offset=0
> > > xnb(xnb_dump_txreq:302): netif_tx_request.flags =0
> > > xnb(xnb_dump_txreq:303): netif_tx_request.id    =0
> > > xnb(xnb_dump_txreq:304): netif_tx_request.size  =0
> > > ..
> > >
> > > Those lines in that variable just keep growing and growing; it is not
> > > that they are flushed, trimmed or anything. Each time I get the same
> > > message on the serial output, it has one more section of errors appended
> > > to the "same-previous" serial output message, and to the sysctl variable
> > > as well. Thus at some point the serial output and the sysctl contain a
> > > large block of those errors while a VM is starting. So at some point the
> > > value of this sysctl could reach the maximum allowed/available and make
> > > the system panic. I do not know the reason for those errors, but if there
> > > was a patch to suppress them, this could be "solved". Another diff chunk
> > > might be related to this:
> > > +dev.xnb.1.xenstore_peer_path: /local/domain/7/device/vif/0
> > > +dev.xnb.1.xenbus_peer_domid: 7
> > > +dev.xnb.1.xenbus_connection_state: InitWait
> > > +dev.xnb.1.xenbus_dev_type: vif
> > > +dev.xnb.1.xenstore_path: backend/vif/7/0
> > > +dev.xnb.1.dump_rings:
> > > +dev.xnb.1.unit_test_results: xnb_rxpkt2rsp_empty:1765 Assertion Error:
> > > nr_reqs == 0
> > > +xnb_rxpkt2rsp_empty:1767 Assertion Error: memcmp(&rxb_backup,
> > > &xnb_unit_pvt.rxb, sizeof(rxb_backup)) == 0
> > > +xnb_rxpkt2rsp_empty:1769 Assertion Error: memcmp(&rxs_backup,
> > > xnb_unit_pvt.rxs, sizeof(rxs_backup)) == 0
> > > +52 Tests Passed
> > > +1 Tests FAILED
> > So you have failed tests for netback. Maybe the issue is with
> > netback rather than blkback.
> I am not sure where the problem is. Maybe there are two or more problems.
> Should they be discussed separately, network related and disk related?
>
> > > What was suspicious about this is that I am using 13.0-RELEASE-p7; it is
> > > not a DEV version of XEN or anything, but there are sysctl variables with
> > > a "dev" prefix. Could it be that the ports have accidentally been
> > > compiled from a XEN version with some development flags turned on?
> > 'dev' here means device, not developer.
> Okay. Understood.
>
> > > So those were my conclusions.
> > >
> > > What do you think? How should we proceed with this? Should I try to
> > > somehow build XEN from git sources?
> > I think that's unlikely to make any difference. I would think the
> > problem is with FreeBSD rather than Xen. Can you paste the config file
> > you are using to create the domain(s)?
> Yet again, as I said, I do not know where the problem is. My wording might
> be off when pointing at Xen, since Dom0 is FreeBSD and it is FreeBSD that
> panics, not Xen. It's just that I do not know the boundaries and
> responsibilities of the components clearly enough, and often by mentioning
> Xen I mean the whole setup, or the FreeBSD module that interacts with Xen.
>
> > I've tried myself to trigger this crash by creating and destroying a
> > guest in a loop but didn't manage to trigger it. I was able to create
> > (and shut down) 1328 guests successfully on a ZFS dom0 using 4GB of
> > RAM.
> No, just turning a single VM on and off does not crash the system. I had
> tried that as well before reporting the bug. I attached all the /bin/sh
> scripts and configs that crash the system in the tar archive of bug report
> #261059. Those scripts are really simple. I've been trying to minimize/trim
> everything off since then, but the simplest requirement for a crash is
> executing 3 processes in parallel:
>
> 1) a loop that creates ZFS volumes of at least 2GB in size and writes data
> into them with dd; /dev/zero is good. I've never run out of space, thus the
> crash is not due to exhausted space in the ZFS pool.
> 2) a loop that creates VM1 with a single disk, connects through ssh to that
> VM and writes 3GB of data in /tmp, then reboots, starts the VM, removes the
> file from /tmp, then reboots.
> 3) a loop that starts/stops VM2 with 5 disks.
>
> As for panic timing: less RAM for Dom0 makes the panic come faster, and the
> panic message is the same. With 8GB RAM on my machine I have to wait approx.
> 10 hours for the panic, with 4GB RAM around 6 hours. On a resource-starved
> Dom0 with 2GB, even shorter, but the panic messages are more or less the
> same. I have crashed it just by omitting step 1, but it takes longer.
> Running just on/off for both VMs, the system does not crash. Thus it seems
> that some sort of ZFS load (VM internal or Dom0) is essential for the panic
> to occur. I have managed to crash it with a VM which has 2 HDDs, thus it
> does not seem to be due to a 4 HDD threshold for emulated/virtio disks. It's
> just that with 5, it takes less time till the crash. Usually the panic
> occurs when xbbd.$value, $value > 230.

Right, at some point you will run out of memory if resources are not
properly freed.

Can you try to boot with "boot_verbose=YES" in /boot/loader.conf and
see if that gives you more information? Otherwise I might have to
provide you with a patch to blkback in order to attempt to detect why
backends are not destroyed.

Since you stress the system quite badly, do you by any chance see 'xl'
processes getting terminated? Background xl processes being killed
will lead to backends not being properly shut down.

Regards, Roger.
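PS: to check whether xl processes are getting killed under load, a simple
watcher left running next to the stress loops should make it obvious
(untested sketch; adjust the interval and log path to taste):

# sample the running xl processes every few seconds; an `xl create` or
# `xl shutdown` that vanishes between samples without its domain being
# cleaned up is a good hint
while :; do
    echo "== $(date)"
    pgrep -fl '^xl ' || echo "(no xl processes)"
    sleep 5
done >> /tmp/xl-watch.log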