Re: difficulties replacing a ZFS installer zroot pool with a new zroot pool on a new disk
Date: Wed, 30 Mar 2022 02:12:24 UTC
On 3/29/22 16:16, Russell L. Carter wrote:
> Greetings,
> After many hours, I am stuck trying to replace my spinning rust
> drive with a new SSD.
>
> Basically I have renamed the old drive pool 'zroot.old' and imported
> it so that it mounts to /mnt/zroot2:
>
> root@bruno> zfs list | grep zroot.old
> zroot.old                                        89.6G  523G    96K  /mnt/zroot2/mnt/zroot.old
> zroot.old/ROOT                                   37.6G  523G    96K  none
> zroot.old/ROOT/default                           37.6G  523G  37.6G  /mnt/zroot2
> zroot.old/export                                  264K  523G    88K  /mnt/zroot2/mnt/zroot.old/export
> zroot.old/export/packages                         176K  523G    88K  /mnt/zroot2/mnt/zroot.old/export/packages
> zroot.old/export/packages/stable-amd64-default     88K  523G    88K  /mnt/zroot2/mnt/zroot.old/export/packages/stable-amd64-default
> zroot.old/tmp                                     144K  523G   144K  /mnt/zroot2/tmp
> zroot.old/usr                                    37.8G  523G    96K  /mnt/zroot2/usr
> zroot.old/usr/home                                582M  523G   582M  /mnt/zroot2/usr/home
> zroot.old/usr/obj                                6.14G  523G  6.14G  /mnt/zroot2/usr/obj
> zroot.old/usr/ports                              27.8G  523G  27.8G  /mnt/zroot2/usr/ports
> zroot.old/usr/src                                3.27G  523G  3.27G  /mnt/zroot2/usr/src
> zroot.old/var                                    1.89M  523G    96K  /mnt/zroot2/var
> zroot.old/var/audit                                96K  523G    96K  /mnt/zroot2/var/audit
> zroot.old/var/crash                                96K  523G    96K  /mnt/zroot2/var/crash
> zroot.old/var/log                                1.32M  523G  1.32M  /mnt/zroot2/var/log
> zroot.old/var/mail                                120K  523G   120K  /mnt/zroot2/var/mail
> zroot.old/var/tmp                                 176K  523G   176K  /mnt/zroot2/var/tmp
> zroot.old/vm                                     14.1G  523G   615M  /mnt/zroot2/vm
> zroot.old/vm/debianv9base                        3.79G  523G   120K  /mnt/zroot2/vm/debianv9base
> zroot.old/vm/debianv9base/disk0                  3.79G  523G  3.57G  -
> zroot.old/vm/debianv9n2                          9.70G  523G   160K  /mnt/zroot2/vm/debianv9n2
> zroot.old/vm/debianv9n2/disk0                    9.70G  523G  11.3G  -
> root@bruno> zfs mount -a
> root@bruno>
>
> The problem is that /mnt/zroot2/usr/home, /mnt/zroot2/usr, and
> /mnt/zroot2/usr/src are all empty:
>
> root@bruno> ls /mnt/zroot.old/usr
> root@bruno>
>
> Even though I can look at the individual datasets and they're
> still using the same amount of data as the original. This is a bit
> unhelpful for migrating over the old configuration.
>
> The oddball mounting is just the result of several tens of attempts to
> import and mount so that a) the original zroot pool doesn't clobber the
> new one, and b) attempts to make the datasets visible.
>
> So can someone enlighten me on the proper way to do this, and possibly
> give a hint how I can get those original datasets visible? This is
> definitely a new wrinkle for a geezer who has been doing such things
> without (nontrivial) problems for 30 years now.
>
> Yeah yeah, this is also my backup drive and I should have replicated
> infra over to another system... I'm a gonna do that next.
>
> Thanks very much,
> Russell

I recall attempting to install two ZFS FreeBSD OS disks in the same
machine at the same time, and the results were very confusing. I suggest
that you install only one ZFS FreeBSD OS disk at any given time.

If you need to work on the FreeBSD OS disk without booting it, I would
boot FreeBSD installer media and use the live system / shell to access
the ZFS pools and datasets. I expect that you will want to set the
"altroot" property when you import any pools. I am unclear whether you
will need to export the ZFS boot pool ("bootpool") or the ZFS root pool
("zroot.old"?) again after importing them.
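
For example, from the installer's live shell something along these lines
should bring the old pool up under a temporary altroot without touching
the running system (untested here; the mount point and the read-only
import are only my suggestions, adjust as needed):

  # import the old pool under /mnt/zroot2 instead of /, without mounting anything yet
  zpool import -f -N -o altroot=/mnt/zroot2 -o readonly=on zroot.old
  # on a stock install the root dataset is usually canmount=noauto, so
  # mount it explicitly, then mount the remaining datasets
  zfs mount zroot.old/ROOT/default
  zfs mount -a
  # when finished, export the pool again
  zpool export zroot.old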

If the HDD and SSD both have the same interface (e.g. SAS or SATA), if
the SSD is the same size or larger than the HDD, and if you can revert
your changes to the HDD so that it is a working FreeBSD instance again,
you should be able to use a live distribution to clone the HDD to the
SSD using dd(1). Then power down, remove the HDD and the live media,
connect the SSD to the interface port the HDD was connected to, and boot
from the SSD. I would use a Linux live distribution without ZFS support,
to ensure that the live distribution does not interact with any ZFS
content on the HDD or SSD before or after the clone.

David
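
P.S. Purely as an illustration of the dd(1) step (untested; /dev/sda and
/dev/sdb are placeholders, so confirm which disk is which with lsblk
before running anything destructive):

  # identify the source HDD and the target SSD first
  lsblk -o NAME,SIZE,MODEL
  # block-for-block copy of the whole HDD onto the SSD (GNU dd on the live system)
  dd if=/dev/sda of=/dev/sdb bs=1M status=progress
  sync

Note that the SSD ends up with a copy of the HDD's partition table, so
any extra SSD capacity will sit unused until you later grow the relevant
partition and the pool.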