From: Dennis Clarke <dclarke@blastwave.org>
Date: Sat, 1 Feb 2025 09:10:25 -0500
To: freebsd-current@freebsd.org
Subject: Re: ZFS: Rescue FAULTED Pool
In-Reply-To: <20250201095656.1bdfbe5f@thor.sb211.local>
Organization: GENUNIX
>> The most useful thing to share right now would be the output of
>> `zpool import` (with no pool name) on the rebooted system.
>>
>> That will show where the issues are, and suggest how they might be
>> solved.
>
> Hello, this is exactly what happens when trying to import the pool.
> Prior to the loss, device da1p1 had been FAULTED, with counts in the
> "corrupted data" column; those counts are no longer shown now.
>
> ~# zpool import
>    pool: BUNKER00
>      id: XXXXXXXXXXXXXXXXXXXX
>   state: FAULTED
>  status: The pool metadata is corrupted.
>  action: The pool cannot be imported due to damaged devices or data.
>          The pool may be active on another system, but can be imported
>          using the '-f' flag.
>     see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72
>  config:
>
>          BUNKER00      FAULTED  corrupted data
>            raidz1-0    ONLINE
>              da2p1     ONLINE
>              da3p1     ONLINE
>              da4p1     ONLINE
>              da7p1     ONLINE
>              da6p1     ONLINE
>              da1p1     ONLINE
>              da5p1     ONLINE
>
> ~# zpool import -f BUNKER00
> cannot import 'BUNKER00': I/O error
>         Destroy and re-create the pool from
>         a backup source.
>
> ~# zpool import -F BUNKER00
> cannot import 'BUNKER00': one or more devices is currently unavailable

This is indeed a sad situation. You have a raidz1 pool with one or MORE
devices that seem to have left the stage. I suspect more than one.

I can only guess what you see from "camcontrol devlist", as well as the
data from "gpart show -l", where we would see the partition layout along
with any GPT labels, if in fact you used the GPT scheme. The devices all
say "p1" there, so I guess you made some sort of partition table. ZFS
does not need one, but it can be nice to have.

In any case, it really does look like you have _more_ than one failure
in there somewhere, and only dmesg and some separate tests on each
device will reveal the truth. A rough sketch of what I would run is in
the P.S. below.

-- 
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken
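
P.S. If this were my pool, I would check each disk on its own before
touching the pool again. A rough sketch, assuming a stock FreeBSD
userland (smartctl comes from the sysutils/smartmontools port, so it
may not be installed on your machine):

  ~# camcontrol devlist
  ~# gpart show -l
  ~# dmesg | grep -i "da[0-9]"
  ~# smartctl -a /dev/da1
  ~# dd if=/dev/da1 of=/dev/null bs=1m status=progress
  ~# zdb -l /dev/da1p1

Repeat the last three for each of da1 through da7. The dd run only
reads and writes nothing, so it is safe; any disk where dd or smartctl
complains, or where zdb cannot read all four ZFS labels, is a candidate
for the corruption.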
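
As for the import itself: once you know which disks still read cleanly,
a read-only rewind import may be worth a try before anything
destructive. No promises it works here, but with readonly=on nothing is
written, so it should not make matters worse:

  ~# zpool import -o readonly=on -f -F BUNKER00

and, as an absolute last resort, the extreme rewind variant:

  ~# zpool import -o readonly=on -f -FX BUNKER00

If either of those succeeds, copy the data off immediately and rebuild
the pool from a clean slate.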