From: bugzilla-noreply@freebsd.org
To: bugs@FreeBSD.org
Subject: [Bug 229745] ahcich: CAM status: Command timeout
Date: Thu, 08 Feb 2024 17:43:27 +0000
X-Bugzilla-Product: Base System
X-Bugzilla-Component: kern
X-Bugzilla-Version: 11.2-STABLE
X-Bugzilla-Who: imp@FreeBSD.org
X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/
List-Id: Bug reports
List-Archive: https://lists.freebsd.org/archives/freebsd-bugs
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229745

--- Comment #76 from Warner Losh ---

(In reply to Kevin Zheng from comment #75)

> The issue that I'm writing about is the system behavior. It seemed that all
> I/O (or maybe just writes?) to the ZFS pool was stalled waiting for the disk
> to time out and reattach, despite the fact that I have other working mirror
> devices. It seems to me that a hardware issue with one disk shouldn't stall
> the whole pool.
>
> I'm not actually sure if this problem is happening at the ZFS level or in
> CAM or the SATA subsystem; if this happens again, what debugging steps would
> determine the cause?

Yes. There are a few things going on here. First, ZFS has ordering
requirements that it enforces by scheduling some I/O (especially writes) only
after the I/O it depends on has completed. This is how the ZFS code ensures
that its log is always in a consistent state. It also means that if some I/O
hangs for a "long" period of time (more than a second or five), then the I/O
that depends on its completion is delayed as well. This can cause processes
to hang waiting for that I/O to complete.

So while I'd agree that one misbehaving disk shouldn't hang the pool, I can
see how it might. How can ZFS know what to schedule, consistent with its goal
of keeping the log consistent, if any disk could suddenly stop writing? Now,
I'm not enough of a ZFS expert to know whether coping with this situation is
one of its design goals. I'd check with the ZFS developers to see if they'd
expect ZFS not to stall when one disk stalls for a long time. ZFS also tries
to pipeline its stream of I/Os as much as possible, and one stalling disk
interferes with that pipeline.

One way to mitigate this, however, is to set the timeout down from 30s to
something smaller, like 3-5s (SSD) or 8-12s (HDD).
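A sketch of what that timeout tuning could look like on FreeBSD, assuming the
disks attach via the ada(4) driver (verify the exact tunable names with
`sysctl -a | grep kern.cam` on your system first):

```shell
# Sketch only: lower the per-command timeout for ada(4) devices from the
# 30s default to 5s, roughly matching the SSD suggestion above.
sysctl kern.cam.ada.default_timeout=5

# To make the change persistent across reboots, set it as a loader
# tunable instead, e.g. in /boot/loader.conf:
#   kern.cam.ada.default_timeout="5"
```

SCSI-attached (da) devices have analogous `kern.cam.da.*` knobs; which family
applies depends on how the disks show up in `camcontrol devlist`.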
And set the number of retries down to 2 (it has to be greater than 1 for most
controllers due to deficiencies in their recovery protocols, which are kinda
hard to fix). That could cut the hangs from 90s down to more like 5-10s (SSD)
or 15-20s (HDD), which would be less noticeable in a wide range of workloads
(though certainly not all).

There may be ZFS-specific tunings that you could try if this happens often.
Maybe smaller (or, paradoxically, larger) I/Os, by creating the pools with a
smaller logical block size (ashift). This might help align the I/O to the
physical NAND blocks better (hence maybe bigger is what's needed). Also,
partition the drive such that it starts on a good LBA boundary: I often keep
1MB at the start of disks unused, because that's still smaller than the
physical block sizes but also a trivial amount of space (I expect to bump
this to 8MB or 16MB in the future). That might keep whatever bug or pathology
in the drive that leads to the hangs from occurring, though there's no
guarantee: maybe it's a firmware bug that's impossible to avoid.

-- 
You are receiving this mail because:
You are the assignee for the bug.
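For reference, the retry and alignment suggestions above could be sketched
like this, assuming a GPT-labeled disk on ada(4) and an OpenZFS-based FreeBSD;
device names (ada1, ada2) and the pool name are placeholders:

```shell
# Sketch only: reduce the ada(4) retry count to 2 as suggested above.
# Worst-case stall is roughly timeout * retries, so 5s * 2 = ~10s
# instead of 30s * 3 = ~90s.
sysctl kern.cam.ada.retry_count=2

# Leave the first 1MB of the disk unused and align the partition to
# 1MB boundaries (-b sets the start, -a the alignment):
gpart create -s gpt ada1
gpart add -t freebsd-zfs -b 1m -a 1m ada1

# Create the mirror with an explicit 4K (2^12) ashift rather than
# letting ZFS pick one from what the drive reports:
zpool create -o ashift=12 tank mirror ada1p1 ada2p1
```

Whether a larger or smaller ashift helps depends on the drive's actual NAND
geometry, which it usually doesn't report truthfully, so this is trial and
error.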