From nobody Tue Mar 12 20:22:42 2024
Date: Tue, 12 Mar 2024 16:22:42 -0400
From: Dennis Clarke <dclarke@blastwave.org>
Organization: GENUNIX
To: Alan Somers
Cc: current@freebsd.org
Subject: Re: ZPool on iSCSI storage not available after a reboot
Message-ID: <0ac4f17f-21ab-42f7-91c6-7760f322f819@blastwave.org>
References: <8228ca0c-85a0-4436-aaf4-d2d987e0f5a4@blastwave.org>
List-Id: Discussions about the use of FreeBSD-current
List-Archive: https://lists.freebsd.org/archives/freebsd-current

On 3/12/24 15:41, Alan Somers wrote:
> On Tue, Mar 12, 2024 at 1:28 PM Dennis Clarke wrote:

. . . .

> Yes, this looks exactly like an ordering problem. zpools get imported
> early in the boot process, under the assumption that most of them are
> local. Networking comes up later, under the assumption that networking
> might require files that are mounted on ZFS. For you, I suggest
> setting proteus's cachefile to a non-default location and importing it
> from /etc/rc.local, like this:
>
>     zpool set cachefile=/var/cache/iscsi-zpools.cache proteus
>
> Then in /etc/rc.local:
>
>     zpool import -a -c /var/cache/iscsi-zpools.cache \
>         -o cachefile=/var/cache/iscsi-zpools.cache

That seems perfectly reasonable. I will give that a test right now. I was
messing with the previous zpool called proteus and destroyed it.
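For clarity, the finished /etc/rc.local amounts to this one import command (a minimal sketch; the comments are editorial additions, and the cachefile path is the one Alan suggested above):

```shell
# /etc/rc.local -- run late in boot, after networking (and thus the
# iSCSI initiator) is up, so the da device backing the pool exists.
# Import every pool recorded in the dedicated cachefile, and keep that
# same file as the pools' cachefile property going forward.
zpool import -a -c /var/cache/iscsi-zpools.cache \
    -o cachefile=/var/cache/iscsi-zpools.cache
```

Because /etc/rc.local is run by the rc(8) framework near the end of boot, no shebang or execute bit is required; a plain sh fragment is enough.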
Easy enough to re-create:

titan# gpart add -t freebsd-zfs /dev/da0
da0p1 added
titan#
titan# gpart show /dev/da0
=>          40  4294967216  da0  GPT  (2.0T)
            40           8       - free -  (4.0K)
            48  4294967200    1  freebsd-zfs  (2.0T)
    4294967248           8       - free -  (4.0K)
titan#
titan# zpool create -O compress=zstd -O checksum=sha512 -O atime=off \
          -o compatibility=openzfs-2.0-freebsd -o autoexpand=off \
          -o autoreplace=on -o failmode=continue -o listsnaps=off \
          -m none proteus /dev/da0p1
titan# zpool set cachefile=/var/cache/iscsi-zpools.cache proteus
titan#
titan# ls -lapb /etc/rc.local
ls: /etc/rc.local: No such file or directory
titan# ed /etc/rc.local
/etc/rc.local: No such file or directory
a
zpool import -a -c /var/cache/iscsi-zpools.cache -o cachefile=/var/cache/iscsi-zpools.cache
.
f /etc/rc.local
w
92
q
titan#

After reboot ... yes ... this seems to get the job done neatly!

root@titan:~ #
root@titan:~ # zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
iota     7.27T   321G  6.95T        -         -     0%     4%  1.00x    ONLINE  -
proteus  1.98T  1.03M  1.98T        -         -     0%     0%  1.00x    ONLINE  -
t0        444G  40.8G   403G        -         -     4%     9%  1.00x    ONLINE  -
root@titan:~ #
root@titan:~ # uptime
 8:21PM  up 3 mins, 1 user, load averages: 0.02, 0.04, 0.01
root@titan:~ #

Looks good. Thank you very much :)

--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken