From nobody Wed Jan 19 01:08:07 2022
From: bugzilla-noreply@freebsd.org
To: freebsd-arm@FreeBSD.org
Subject: [Bug 261324] xhci1 resets on ROCKPro64 when reading from ZFS pool
Date: Wed, 19 Jan 2022 01:08:07 +0000
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=261324

            Bug ID: 261324
           Summary: xhci1 resets on ROCKPro64 when reading from ZFS pool
           Product: Base System
           Version: 13.0-STABLE
          Hardware: arm64
                OS: Any
            Status: New
          Severity: Affects Only Me
          Priority: ---
         Component: arm
          Assignee: freebsd-arm@FreeBSD.org
          Reporter: freebsd@toyingwithfate.com

I have a ROCKPro64 single-board computer with 4GB of RAM that I am attempting
to use as a small NAS. The storage consists of an encrypted (OpenZFS native,
not GELI) RAID-Z pool using three 8TB SATA spinning disks and one 512GB SATA
SSD (used as cache) in a simple 4-disk JBOD USB enclosure (the "OWC Mercury
Elite Pro Quad"). My understanding is that it contains a USB 3 hub with four
SATA-to-USB bridges in it.

The issue I run into is that after reading ~40GB of data from the ZFS
filesystem, disk I/O stops and I get the following messages in dmesg:

xhci1: Resetting controller
uhub4: at usbus5, port 1, addr 1 (disconnected)
ugen5.7: at usbus5 (disconnected)
uhub7: at uhub4, port 1, addr 6 (disconnected)
uhub7: detached
ugen5.2: at usbus5 (disconnected)
uhub6: at uhub4, port 2, addr 1 (disconnected)
ugen5.3: at usbus5 (disconnected)
(da0:umass-sim0:0:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5c f8 00 00 00 80 00 00
umass0: at uhub6, port 1, addr 2 (disconnected)
(da0:umass-sim0:0:0:0): CAM status: CCB request completed with an error
(da0:umass-sim0:0:0:0): Retrying command, 3 more tries remain
(da0:umass-sim0:0:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5c f8 00 00 00 80 00 00
(da0:umass-sim0:0:0:0): CAM status: CCB request completed with an error
(da0:umass-sim0:0:0:0): Retrying command, 2 more tries remain
(da0:umass-sim0:0:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5c f8 00 00 00 80 00 00
(da0:umass-sim0:0:0:0): CAM status: CCB request completed with an error
(da0:umass-sim0:0:0:0): Retrying command, 1 more tries remain
da0 at umass-sim0 bus 0 scbus0 target 0 lun 0
da0: s/n 123456789004 detached
(da1:umass-sim1:1:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5a 00 00 00 00 80 00 00
(da1:umass-sim1:1:0:0): CAM status: CCB request completed with an error
(da1:umass-sim1:1:0:0): Retrying command, 3 more tries remain
(da2:umass-sim2:2:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5d 78 00 00 00 80 00 00
(da2:umass-sim2:2:0:0): CAM status: CCB request completed with an error
(da2:umass-sim2:2:0:0): Retrying command, 3 more tries remain
(da3:umass-sim3:3:0:0): WRITE(10). CDB: 2a 00 2d 5d 61 31 00 01 00 00
(da3:umass-sim3:3:0:0): CAM status: CCB request completed with an error
(da3:umass-sim3:3:0:0): Retrying command, 3 more tries remain
(da1:umass-sim1:1:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5a 00 00 00 00 80 00 00
(da1:umass-sim1:1:0:0): CAM status: CCB request completed with an error
(da1:umass-sim1:1:0:0): Retrying command, 2 more tries remain
(da2:umass-sim2:2:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5d 78 00 00 00 80 00 00
(da2:umass-sim2:2:0:0): CAM status: CCB request completed with an error
(da2:umass-sim2:2:0:0): Retrying command, 2 more tries remain
(da3:umass-sim3:3:0:0): WRITE(10). CDB: 2a 00 2d 5d 61 31 00 01 00 00
(da3:umass-sim3:3:0:0): CAM status: CCB request completed with an error
(da3:umass-sim3:3:0:0): Retrying command, 2 more tries remain
(da1:umass-sim1:1:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5a 00 00 00 00 80 00 00
(da1:umass-sim1:1:0:0): CAM status: CCB request completed with an error
(da1:umass-sim1:1:0:0): Retrying command, 1 more tries remain
(da2:umass-sim2:2:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5d 78 00 00 00 80 00 00
(da2:umass-sim2:2:0:0): CAM status: CCB request completed with an error
(da2:umass-sim2:2:0:0): Retrying command, 1 more tries remain
(da3:umass-sim3:3:0:0): WRITE(10). CDB: 2a 00 2d 5d 61 31 00 01 00 00
(da3:umass-sim3:3:0:0): CAM status: CCB request completed with an error
(da3:umass-sim3:3:0:0): Retrying command, 1 more tries remain
(da1:umass-sim1:1:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5a 00 00 00 00 80 00 00
(da1:umass-sim1:1:0:0): CAM status: CCB request completed with an error
(da1:umass-sim1:1:0:0): Retrying command, 0 more tries remain
(da2:umass-sim2:2:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5d 78 00 00 00 80 00 00
(da2:umass-sim2:2:0:0): CAM status: CCB request completed with an error
(da2:umass-sim2:2:0:0): Retrying command, 0 more tries remain
(da3:umass-sim3:3:0:0): WRITE(10). CDB: 2a 00 2d 5d 61 31 00 01 00 00
(da3:umass-sim3:3:0:0): CAM status: CCB request completed with an error
(da3:umass-sim3:3:0:0): Retrying command, 0 more tries remain
(da1:umass-sim1:1:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5a 00 00 00 00 80 00 00
(da1:umass-sim1:1:0:0): CAM status: CCB request completed with an error
(da1:umass-sim1:1:0:0): Error 5, Retries exhausted
(da2:umass-sim2:2:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5d 78 00 00 00 80 00 00
(da2:umass-sim2:2:0:0): CAM status: CCB request completed with an error
(da2:umass-sim2:2:0:0): Error 5, Retries exhausted
(da3:umass-sim3:3:0:0): WRITE(10). CDB: 2a 00 2d 5d 61 31 00 01 00 00
(da3:umass-sim3:3:0:0): CAM status: CCB request completed with an error
(da3:umass-sim3:3:0:0): Error 5, Retries exhausted
(da1:umass-sim1:1:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5b 80 00 00 08 00 00 00
(da1:umass-sim1:1:0:0): CAM status: CCB request completed with an error
(da1:umass-sim1:1:0:0): Retrying command, 3 more tries remain
(da2:umass-sim2:2:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5e f8 00 00 01 80 00 00
(da2:umass-sim2:2:0:0): CAM status: CCB request completed with an error
(da2:umass-sim2:2:0:0): Retrying command, 3 more tries remain
(da3:umass-sim3:3:0:0): WRITE(10). CDB: 2a 00 2d 5d 63 31 00 01 00 00
(da3:umass-sim3:3:0:0): CAM status: CCB request completed with an error
(da3:umass-sim3:3:0:0): Retrying command, 3 more tries remain
(da1:umass-sim1:1:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5b 80 00 00 08 00 00 00
(da1:umass-sim1:1:0:0): CAM status: CCB request completed with an error
(da1:umass-sim1:1:0:0): Retrying command, 2 more tries remain
(da2:umass-sim2:2:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5e f8 00 00 01 80 00 00
(da2:umass-sim2:2:0:0): CAM status: CCB request completed with an error
(da2:umass-sim2:2:0:0): Retrying command, 2 more tries remain
(da3:umass-sim3:3:0:0): WRITE(10). CDB: 2a 00 2d 5d 63 31 00 01 00 00
(da3:umass-sim3:3:0:0): CAM status: CCB request completed with an error
(da3:umass-sim3:3:0:0): Retrying command, 2 more tries remain
(da1:umass-sim1:1:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5b 80 00 00 08 00 00 00
(da1:umass-sim1:1:0:0): CAM status: CCB request completed with an error
(da1:umass-sim1:1:0:0): Retrying command, 1 more tries remain
(da2:umass-sim2:2:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5e f8 00 00 01 80 00 00
(da2:umass-sim2:2:0:0): CAM status: CCB request completed with an error
(da2:umass-sim2:2:0:0): Retrying command, 1 more tries remain
(da3:umass-sim3:3:0:0): WRITE(10). CDB: 2a 00 2d 5d 63 31 00 01 00 00
(da3:umass-sim3:3:0:0): CAM status: CCB request completed with an error
(da3:umass-sim3:3:0:0): Retrying command, 1 more tries remain
(da1:umass-sim1:1:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5b 80 00 00 08 00 00 00
(da1:umass-sim1:1:0:0): CAM status: CCB request completed with an error
(da1:umass-sim1:1:0:0): Retrying command, 0 more tries remain
(da2:umass-sim2:2:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5e f8 00 00 01 80 00 00
(da2:umass-sim2:2:0:0): CAM status: CCB request completed with an error
(da2:umass-sim2:2:0:0): Retrying command, 0 more tries remain
(da3:umass-sim3:3:0:0): WRITE(10). CDB: 2a 00 2d 5d 63 31 00 01 00 00
(da3:umass-sim3:3:0:0): CAM status: CCB request completed with an error
(da3:umass-sim3:3:0:0): Retrying command, 0 more tries remain
(da1:umass-sim1:1:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5b 80 00 00 08 00 00 00
(da1:umass-sim1:1:0:0): CAM status: CCB request completed with an error
(da1:umass-sim1:1:0:0): Error 5, Retries exhausted
Jan 1 05:47:13 generic ZFS[5395]: vdev probe failure, zpool=zraid-v5 path=/dev/da1
Jan 1 05:47:13 generic ZFS[5399]: vdev probe failure, zpool=zraid-v5 path=/dev/da1
(da2:umass-sim2:2:0:0): READ(16). CDB: 88 00 00 00 00 01 01 89 5e f8 00 00 01 80 00 00
(da2:umass-sim2:2:0:0): CAM status: CCB request completed with an error
(da2:umass-sim2:2:0:0): Error 5, Retries exhausted
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Solaris: WARNING: Pool 'zraid-v5' has encountered an uncorrectable I/O failure and has been suspended.
Jan 1 05:47:14 generic ZFS[5423]: vdev probe failure, zpool=zraid-v5 path=/dev/da2
Jan 1 05:47:14 generic ZFS[5447]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5495]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5499]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5503]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5507]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5511]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5515]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5519]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5523]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5527]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5531]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5535]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5539]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5543]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5547]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5551]: catastrophic pool I/O failure, zpool=zraid-v5
Jan 1 05:47:15 generic ZFS[5555]: catastrophic pool I/O failure, zpool=zraid-v5
(da3:umass-sim3:3:0:0): WRITE(10). CDB: 2a 00 2d 5d 63 31 00 01 00 00
(da3:umass-sim3:3:0:0): CAM status: CCB request completed with an error
(da3:umass-sim3:3:0:0): Error 5, Retries exhausted

...and so on, including further complaints from Solaris, ZFS, and CAM. The
system is still operable, and I can SSH in, but the ZFS pool is unresponsive.
I can tell FreeBSD to shut down, and although it will start to do so
(disconnecting me from SSH), it will not complete powering off: SSH
connections are refused and it still responds to pings. If I unplug the SBC's
power, disconnect the enclosure's USB cable (since the ROCKPro64 won't boot
with it connected, another issue I'll report separately), then boot the SBC
and reconnect the USB, things will start working again. (The ZFS pool will
sometimes complain about errors that I need to clear, but if I scrub the pool
it confirms that no data was corrupted.)

Interestingly, I seem to be able to scrub the pool fine: despite the pool
having several TB of data, the issue hasn't appeared then. Additionally, I can
use dd to read from the individual block devices for many hours without
issue, and can even do so from all the disks simultaneously (averaging
~100MB/s per disk, so ~400MB/s total) without seeing the issue. But if I
mount the ZFS filesystem (I do so read-only during my tests to try to reduce
risk to the data) and start making a tarball of said filesystem (writing said
tar stream to /dev/null), it gets about 40GB in and then the issue appears.
The exact amount of data it's able to read (as measured by piping the stream
through pv) varies a bit: my last three tests have failed at 43.3, 40.6, and
54.2 GiB, respectively. The issue is 100% repeatable: although the amount of
data it's able to read before failing varies, it always fails.

I've tried both the FreeBSD-13.0-RELEASE-arm64-aarch64-ROCKPRO64.img.xz and
FreeBSD-13.0-STABLE-arm64-aarch64-ROCKPRO64-20211230-3684bb89d52-248759.img
images, but they both have the issue. I've tried Linux on the same SBC with
the same ZFS pool, and it had no issue. I tried a Raspberry Pi 4 running
FreeBSD 13.0-RELEASE with the same ZFS pool, and it had no issue. I should
note that both the Linux and Raspberry Pi tests lacked hardware crypto
support (Linux because Linux apparently doesn't support ARM crypto in ZFS
yet, and Raspberry Pi because those have no crypto hardware), so they both
relied on software crypto, which constrained performance to tens of MB/s
rather than the hundreds FreeBSD on the ROCKPro64 reaches in my problem case.
I also used the ZFS pool with an Intel system running FreeBSD 13, which has
working hardware crypto; it was able to work with the disks even faster than
the ROCKPro64, with no issues.

So my suspicion is that the enclosure is not the culprit, since it worked
fine on the Raspberry Pi and Intel systems. I also suspect the SBC itself is
not the culprit, since it worked fine with Linux, though it's possible the
issue only shows up when reading from the disks at a particular speed and
Linux's constrained crypto performance just didn't reach that speed. I'm also
able to work with other disks just fine with FreeBSD on the ROCKPro64: it's
only this multi-disk enclosure that has trouble. I can create an encrypted
ZFS pool on a USB 3 SSD connected directly to the SBC (no intermediary hub)
and it works fine, no trouble at all.
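For anyone trying to reproduce this, the failing read test is essentially the
following sketch. The mountpoint /zraid-v5 is a stand-in for my actual path,
and wc -c stands in for the pv(1) pipe I actually watched throughput with, so
the sketch needs nothing beyond base tools:

```shell
#!/bin/sh
# Sketch of the read test that triggers the xhci1 reset.
# SRC is a placeholder for the pool's (read-only) mountpoint.
SRC="${SRC:-/zraid-v5}"
[ -d "$SRC" ] || SRC=.   # fall back so the sketch runs anywhere

# Stream the filesystem as a tar archive, discard it, and count bytes read.
# On my setup this fails somewhere past ~40 GiB.
bytes=$(tar -cf - "$SRC" 2>/dev/null | wc -c | tr -d ' ')
echo "read $bytes bytes from $SRC"

# By contrast, the raw-device reads that ran for hours without trouble were
# along these lines (one per disk, run simultaneously; not executed here):
#   dd if=/dev/da0 of=/dev/null bs=1m
```

The tar stream is what reliably trips the reset; the raw dd reads never do,
even at comparable aggregate throughput.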
Initially I suspected that my SBC or enclosure might be at fault, but through
this testing I've come to suspect instead that there is some sort of issue in
the xhci driver for the ROCKPro64, perhaps somehow related to the enclosure
having a USB hub in front of the disks. But I am only speculating! Could
someone help me troubleshoot this issue further, to see why xhci1 is
resetting unexpectedly?

Thank you for your help!

-- 
You are receiving this mail because:
You are the assignee for the bug.