From nobody Thu Aug 03 17:20:00 2023
From: Mark Millard <marklmi@yahoo.com>
Subject: Re: armv7 kyua runs via chroot on aarch64: zfs tests leave behind processes from timed out tests
Date: Thu, 3 Aug 2023 10:20:00 -0700
To: Current FreeBSD, FreeBSD ARM List
In-Reply-To: <1BDD2369-BCC3-469B-8094-AEFE7FC3CE94@yahoo.com>
References: <1BDD2369-BCC3-469B-8094-AEFE7FC3CE94@yahoo.com>
List-Id: Porting FreeBSD to ARM processors
List-Archive: https://lists.freebsd.org/archives/freebsd-arm

On Aug 3, 2023, at 07:18, Mark Millard wrote:

> On Aug 3, 2023, at 00:19, Mark Millard wrote:
>
>> This is after the patch (leading whitespace might
>> not have been preserved in what you see):
>>
>> # git -C /usr/main-src/ diff sys/dev/md/
>> diff --git a/sys/dev/md/md.c b/sys/dev/md/md.c
>> index a719dccb1955..365296ec4276 100644
>> --- a/sys/dev/md/md.c
>> +++ b/sys/dev/md/md.c
>> @@ -147,8 +147,15 @@ struct md_ioctl32 {
>>  	int		md_fwsectors;
>>  	uint32_t	md_label;
>>  	int		md_pad[MDNPAD];
>> +#ifdef __aarch64__
>> +	uint32_t	md_pad0;
>> +#endif
>>  } __attribute__((__packed__));
>> +#ifdef __aarch64__
>> +CTASSERT((sizeof(struct md_ioctl32)) == 440);
>> +#else
>>  CTASSERT((sizeof(struct md_ioctl32)) == 436);
>> +#endif
>>
>>  #define MDIOCATTACH_32 _IOC_NEWTYPE(MDIOCATTACH, struct md_ioctl32)
>>  #define MDIOCDETACH_32 _IOC_NEWTYPE(MDIOCDETACH, struct md_ioctl32)
>>
>>
>> The kyua run is still in progress, but at this point there is
>> the following accumulation from the zfs testing timeouts:
>>
>> # ps -alxdww
>> UID   PID  PPID C PRI NI   VSZ   RSS MWCHAN   STAT TT     TIME COMMAND
>> . . .
>>   0 17491     1 6  20  0 36460 12324 -        T    -   0:24.71 |-- fsync_integrity /testdir2316/testfile2316
>>   0 17551     1 5  20  0 10600  7512 tx->tx_s D    -   0:00.00 |-- /sbin/zpool destroy -f testpool.2316
>>   0 17739     1 7  20  0 10600  7308 zfs tear D    -   0:00.00 |-- /sbin/zpool destroy -f testpool.2316
>>   0 17841     1 3  20  0 10600  7316 tx->tx_s D    -   0:00.00 |-- /sbin/zpool destroy -f testpool.2316
>>   0 17860     1 0  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 17888     1 3  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 17907     1 6  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 17928     1 7  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 17955     1 0  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 17976     1 4  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 17995     1 2  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18023     1 2  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18043     1 2  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18064     1 3  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18085     1 0  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18114     1 7  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18135     1 2  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18157     1 6  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18177     1 6  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18205     1 4  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18224     1 1  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18255     1 3  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18275     1 1  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18296     1 5  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18317     1 4  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18345     1 4  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18365     1 2  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18386     1 3  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18412     1 1  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18447     1 5  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18466     1 5  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18516     1 6  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18535     1 2  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>>   0 18632     1 0  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>
> It has added:
>
>   0 18656     1 7  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 18748     1 0  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 18767     1 4  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 18858     1 5  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 18877     1 0  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 18907     1 7  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 18926     1 5  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 18956     1 7  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 18975     1 7  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19005     1 4  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19026     1 4  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19298     1 6  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19317     1 6  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19408     1 7  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19427     1 2  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19518     1 4  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19537     1 4  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19635     1 5  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19654     1 5  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19746     1 6  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19767     1 6  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19854     1 6  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19873     1 0  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>   0 19960     1 1  20  0 10080  6956 spa_name D    -   0:00.00 |-- /sbin/zfs upgrade
>
>> Lots of these are from 300s timeouts but some are from 1200s or
>> 1800s or 3600s timeouts.
>>
>> For reference:
>>
>> sys/cddl/zfs/tests/txg_integrity/txg_integrity_test:fsync_integrity_001_pos -> broken: Test case body timed out [1800.053s]
>> sys/cddl/zfs/tests/txg_integrity/txg_integrity_test:txg_integrity_001_pos -> passed [63.702s]
>> sys/cddl/zfs/tests/userquota/userquota_test:groupspace_001_pos -> skipped: Required program 'runwattr' not found in PATH [0.003s]
>> sys/cddl/zfs/tests/userquota/userquota_test:groupspace_002_pos -> skipped: Required program 'runwattr' not found in PATH [0.002s]
>> sys/cddl/zfs/tests/userquota/userquota_test:userquota_001_pos -> skipped: Required program 'runwattr' not found in PATH [0.002s]
>> sys/cddl/zfs/tests/userquota/userquota_test:userquota_002_pos -> broken: Test case cleanup timed out [0.148s]
>> sys/cddl/zfs/tests/userquota/userquota_test:userquota_003_pos -> broken: Test case cleanup timed out [0.151s]
>> sys/cddl/zfs/tests/userquota/userquota_test:userquota_004_pos -> skipped: Required program 'runwattr' not found in PATH [0.002s]
>> sys/cddl/zfs/tests/userquota/userquota_test:userquota_005_neg -> broken: Test case body timed out [300.021s]
>> sys/cddl/zfs/tests/userquota/userquota_test:userquota_006_pos -> broken: Test case body timed out [300.080s]
>> sys/cddl/zfs/tests/userquota/userquota_test:userquota_007_pos -> skipped: Required program 'runwattr' not found in PATH [0.002s]
>> sys/cddl/zfs/tests/userquota/userquota_test:userquota_008_pos -> broken: Test case body timed out [300.034s]
>> sys/cddl/zfs/tests/userquota/userquota_test:userquota_009_pos -> broken: Test case body timed out [300.143s]
>> sys/cddl/zfs/tests/userquota/userquota_test:userquota_010_pos -> skipped: Required program 'runwattr' not found in PATH [0.002s]
>> sys/cddl/zfs/tests/userquota/userquota_test:userquota_011_pos -> broken: Test case body timed out [300.003s]
>> sys/cddl/zfs/tests/userquota/userquota_test:userquota_012_neg -> broken: Test case body timed out [300.019s]
>> sys/cddl/zfs/tests/userquota/userquota_test:userspace_001_pos -> skipped: Required program 'runwattr' not found in PATH [0.002s]
>> sys/cddl/zfs/tests/userquota/userquota_test:userspace_002_pos -> skipped: Required program 'runwattr' not found in PATH [0.002s]
>> sys/cddl/zfs/tests/utils_test/utils_test_test:utils_test_001_pos -> broken: Test case body timed out [300.052s]
>> sys/cddl/zfs/tests/utils_test/utils_test_test:utils_test_002_pos -> skipped: Required program 'labelit' not found in PATH [0.002s]
>> sys/cddl/zfs/tests/utils_test/utils_test_test:utils_test_003_pos -> broken: Test case body timed out [300.076s]
>> sys/cddl/zfs/tests/utils_test/utils_test_test:utils_test_004_pos -> broken: Test case body timed out [300.106s]
>> sys/cddl/zfs/tests/utils_test/utils_test_test:utils_test_005_pos -> skipped: Required program 'ff' not found in PATH [0.002s]
>> sys/cddl/zfs/tests/utils_test/utils_test_test:utils_test_006_pos -> broken: Test case body timed out [300.015s]
>> sys/cddl/zfs/tests/utils_test/utils_test_test:utils_test_007_pos -> broken: Test case body timed out [300.005s]
>> sys/cddl/zfs/tests/utils_test/utils_test_test:utils_test_008_pos -> skipped: Required program 'ncheck' not found in PATH [0.002s]
>> sys/cddl/zfs/tests/utils_test/utils_test_test:utils_test_009_pos -> broken: Test case body timed out [300.051s]
>> sys/cddl/zfs/tests/write_dirs/write_dirs_test:write_dirs_001_pos -> broken: Test case body timed out [1200.056s]
>> sys/cddl/zfs/tests/write_dirs/write_dirs_test:write_dirs_002_pos -> broken: Test case body timed out [1200.046s]
>> sys/cddl/zfs/tests/zfsd/zfsd_test:zfsd_autoreplace_001_neg -> broken: Test case body timed out [3600.055s]
>
> And added:
>
> sys/cddl/zfs/tests/zfsd/zfsd_test:zfsd_autoreplace_002_pos -> broken: Test case body timed out [3600.028s]
> sys/cddl/zfs/tests/zfsd/zfsd_test:zfsd_autoreplace_003_pos -> broken: Test case body timed out [3600.146s]
> sys/cddl/zfs/tests/zfsd/zfsd_test:zfsd_degrade_001_pos -> broken: Test case body timed out [600.067s]
> sys/cddl/zfs/tests/zfsd/zfsd_test:zfsd_degrade_002_pos -> broken: Test case body timed out [600.015s]
> sys/cddl/zfs/tests/zfsd/zfsd_test:zfsd_fault_001_pos -> broken: Test case body timed out [300.061s]
> sys/cddl/zfs/tests/zfsd/zfsd_test:zfsd_hotspare_001_pos -> broken: Test case body timed out [3600.042s]
> sys/cddl/zfs/tests/zfsd/zfsd_test:zfsd_hotspare_002_pos -> broken: Test case body timed out [3600.161s]
> sys/cddl/zfs/tests/zfsd/zfsd_test:zfsd_hotspare_003_pos -> broken: Test case body timed out [3600.033s]
> sys/cddl/zfs/tests/zfsd/zfsd_test:zfsd_hotspare_004_pos -> broken: Test case body timed out [3600.007s]
> sys/cddl/zfs/tests/zfsd/zfsd_test:zfsd_hotspare_005_pos -> broken: Test case body timed out [3600.065s]
> sys/cddl/zfs/tests/zfsd/zfsd_test:zfsd_hotspare_006_pos -> broken: Test case body timed out [3600.014s]
> sys/cddl/zfs/tests/zfsd/zfsd_test:zfsd_hotspare_007_pos -> broken: Test case body timed out [3600.066s]
>
>> Other timeouts not from zfs tests have not had an accumulation
>> of processes left behind. But the zfs tests may be the set of
>> tests that use ksh93 for scripting. I make no claim to know
>> whether zfs, ksh93, both, or something else is contributing.
>>
>> I'll note that the system was booted via a bectl BE environment
>> on the only FreeBSD media enabled, so this is a zfs-root boot
>> context.
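[As an aside for anyone triaging similar runs: the result lines above all have the form "name -> result [time]", so the broken/skipped/passed counts can be tallied with standard tools. A sketch, assuming the saved report text is in report.txt (a hypothetical file name):]

```shell
# Tally kyua-style outcome lines ("name -> result [time]") by result class.
# report.txt is a hypothetical file holding the saved report text.
grep -Eo -e '-> (broken|failed|skipped|passed)' report.txt |
    sort | uniq -c | sort -rn
```

[The -e keeps grep from treating the pattern's leading '-' as an option.]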
>>
>> For reference:
>>
>> # uname -apKU
>> FreeBSD CA78C-WDK23-ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT aarch64 1400093 #6 main-n264334-215bab7924f6-dirty: Wed Aug 2 14:12:14 PDT 2023 root@CA78C-WDK23-ZFS:/usr/obj/BUILDs/main-CA78C-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA78C arm64 aarch64 1400093 1400093
>>
>> I preload various modules (6 are commented out [not preloaded]
>> and some listed may actually be built into the kernel):
>>
>> # grep kldload ~/prekyua-kldloads.sh
>> kldload -v -n zfs.ko
>> kldload -v -n cryptodev.ko
>> kldload -v -n nullfs.ko
>> kldload -v -n fdescfs.ko
>> kldload -v -n filemon.ko
>> kldload -v -n nfsd.ko
>> kldload -v -n tarfs.ko
>> kldload -v -n xz.ko
>> kldload -v -n geom_concat.ko
>> kldload -v -n geom_eli.ko
>> kldload -v -n geom_nop.ko
>> kldload -v -n geom_gate.ko
>> kldload -v -n geom_mirror.ko
>> kldload -v -n geom_multipath.ko
>> kldload -v -n sdt.ko
>> kldload -v -n dtrace.ko
>> kldload -v -n opensolaris.ko
>> kldload -v -n geom_raid3.ko
>> kldload -v -n geom_shsec.ko
>> kldload -v -n geom_stripe.ko
>> kldload -v -n geom_uzip.ko
>> kldload -v -n if_epair.ko
>> kldload -v -n if_gif.ko
>> kldload -v -n if_tuntap.ko
>> kldload -v -n if_lagg.ko
>> kldload -v -n if_infiniband.ko
>> kldload -v -n if_wg.ko
>> kldload -v -n ng_socket.ko
>> kldload -v -n netgraph.ko
>> kldload -v -n ng_hub.ko
>> kldload -v -n ng_bridge.ko
>> kldload -v -n ng_ether.ko
>> kldload -v -n ng_vlan_rotate.ko
>> kldload -v -n ipdivert.ko
>> kldload -v -n pf.ko
>> kldload -v -n if_bridge.ko
>> kldload -v -n bridgestp.ko
>> kldload -v -n mqueuefs.ko
>> kldload -v -n tcpmd5.ko
>> kldload -v -n carp.ko
>> kldload -v -n sctp.ko
>> kldload -v -n if_stf.ko
>> kldload -v -n if_ovpn.ko
>> kldload -v -n ipsec.ko
>> #kldload -v -n ipfw.ko
>> #kldload -v -n pflog.ko
>> #kldload -v -n pfsync.ko
>> kldload -v -n dummynet.ko
>> #kldload -v -n mac_bsdextended.ko
>> #kldload -v -n mac_ipacl.ko
>> #kldload -v -n mac_portacl.ko
>>
>> armv7 ports built and installed in the armv7 chroot
>> area include:
>>
>> # more ~/origins/kyua-origins.txt
>> archivers/gtar
>> devel/gdb
>> devel/py-pytest
>> devel/py-pytest-twisted
>> devel/py-twisted
>> lang/perl5.32
>> lang/python
>> net/scapy
>> security/nist-kat
>> security/openvpn
>> security/sudo
>> shells/ksh93
>> shells/bash
>> sysutils/coreutils
>> sysutils/sg3_utils
>> textproc/jq
>>
>> (Those cause others to also be installed.)
>
> I tried gdb -p PID against a couple of the processes.
> Each got stuck, never reaching the gdb prompt. I also
> show a Control-T output for each:
>
> Attaching to process 17491
> load: 0.24 cmd: gdb131 19693 [uwait] 32.27r 0.02u 0.06s 0% 32152k
> #0 0xffff00000049fe20 at mi_switch+0xe0
> #1 0xffff0000004f3658 at sleepq_catch_signals+0x318
> #2 0xffff0000004f3318 at sleepq_wait_sig+0x8
> #3 0xffff00000049f410 at _sleep+0x1d0
> #4 0xffff0000004b52dc at umtxq_sleep+0x27c
> #5 0xffff0000004bab7c at do_wait+0x25c
> #6 0xffff0000004b8cdc at __umtx_op_wait_uint_private+0x5c
> #7 0xffff0000004b6e64 at sys__umtx_op+0x84
> #8 0xffff0000008267d4 at do_el0_sync+0x9b4
> #9 0xffff000000805910 at handle_el0_sync+0x44
>
> and:
>
> Attaching to process 17860
> load: 0.23 cmd: gdb131 19697 [uwait] 13.14r 0.06u 0.01s 0% 32184k
> #0 0xffff00000049fe20 at mi_switch+0xe0
> #1 0xffff0000004f3658 at sleepq_catch_signals+0x318
> #2 0xffff0000004f3318 at sleepq_wait_sig+0x8
> #3 0xffff00000049f410 at _sleep+0x1d0
> #4 0xffff0000004b52dc at umtxq_sleep+0x27c
> #5 0xffff0000004bab7c at do_wait+0x25c
> #6 0xffff0000004b8cdc at __umtx_op_wait_uint_private+0x5c
> #7 0xffff0000004b6e64 at sys__umtx_op+0x84
> #8 0xffff0000008267d4 at do_el0_sync+0x9b4
> #9 0xffff000000805910 at handle_el0_sync+0x44
>
> I was unable to Control-C the gdb's to regain control
> but was able to put them in the background (Control-Z,
> then bg).

Looks like I'm going to have to reboot instead of letting
the kyua run go to completion. The periodic daily run is
stuck as well.
  0 19064  1657 1  20  0 12980  2484 piperd   I    -   0:00.00 |   `-- cron: running job (cron)
  0 19066 19064 3  40  0 13436  2928 wait     Is   -   0:00.00 |     `-- /bin/sh - /usr/sbin/periodic daily
. . .
  0 19237 19235 0  68  0 13436  2936 wait     I    -   0:00.00 |     | | `-- /bin/sh - /etc/periodic/security/100.chksetuid
  0 19242 19237 6  68  0 21912 10292 zfs      D    -   0:10.21 |     | | |-- / /var/mail . . . /dev/null (find)
  0 19243 19237 7  68  0 13436  2932 wait     I    -   0:00.00 |     | | `-- /bin/sh - /etc/periodic/security/100.chksetuid
  0 19245 19243 1  68  0 15204  2212 piperd   I    -   0:00.00 |     | |   `-- cat

The cat is also stuck. So the problems are now not limited
to the kyua run.

===
Mark Millard
marklmi at yahoo.com