bsdinstall, zfs booting, gpt partition order suitable for volume expansion
Teske, Devin
Devin.Teske at fisglobal.com
Sun Dec 22 03:59:25 UTC 2013
On Dec 21, 2013, at 7:17 PM, Adam McDougall wrote:
> On 12/20/2013 17:26, Adam McDougall wrote:
>> On 12/19/2013 02:19, Teske, Devin wrote:
>>>
>>> On Dec 18, 2013, at 8:31 AM, Adam McDougall wrote:
>>>
>>>> [snip]
>>>> I have posted /tmp/bsdinstall_log at: http://p.bsd-unix.net/ps9qmfqc2
>>>>
>>>
>>> I think this logging stuff I put so much effort into is really paying dividends.
>>> I'm finding it really easy to debug issues that others have run into.
>>>
>>>
>>>> The corresponding procedure (a rough scripted sketch follows the list):
>>>>
>>>> Virtualbox, created VM with 4 2.0TB virtual hard disks
>>>> Install
>>>> Continue with default keymap
>>>> Hostname: test
>>>> Distribution Select: OK
>>>> Partitioning: ZFS
>>>> Pool Type/Disks: stripe, select ada0-3 and hit OK
>>>> Install
>>>> Last Chance! YES
>>>>
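>>>> The scripted equivalent, roughly -- an untested sketch that assumes
>>>> the ZFSBOOT_* variables of bsdinstall's scripted mode (see
>>>> bsdinstall(8)) behave as their names suggest:
>>>>
>>>>   # installerconfig -- run with: bsdinstall script installerconfig
>>>>   ZFSBOOT_VDEV_TYPE=stripe              # no redundancy, as above
>>>>   ZFSBOOT_DISKS="ada0 ada1 ada2 ada3"   # the four 2.0TB disks
>>>>   DISTRIBUTIONS="kernel.txz base.txz"
>>>>   #!/bin/sh
>>>>   # post-install setup commands would go here
>>>>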
>>>
>>> I've posted the following commits to 11.0-CURRENT:
>>>
>>> http://svnweb.freebsd.org/base?view=revision&revision=259597
>>> http://svnweb.freebsd.org/base?view=revision&revision=259598
>>>
>>> As soon as a new ISO is rolled, can you give the above another go?
>>> I rolled my own ISO with the above and tested cleanly.
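>>> (For anyone wanting to do the same, roughly -- assuming a /usr/src
>>> checked out at or past those revisions; see release(7) for the
>>> authoritative procedure:
>>>
>>>   cd /usr/src && make buildworld buildkernel
>>>   cd release && make cdrom    # produces disc1.iso
>>> )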
>>>
>>
>> I did some testing with 11.0-HEAD-r259612-JPSNAP: a 4-disk raidz
>> worked, a 4-disk mirror worked, and 1-3 disk stripes worked, but a
>> 4-disk stripe got "ZFS: i/o error - all block copies unavailable",
>> although the point during the loader where this happens varies.
>> Sometimes the loader would fault, sometimes it just can't load the
>> kernel, sometimes it prints some of the color text, and sometimes it
>> doesn't even get that far. It might depend on the install? Also, I did
>> not try exhaustive combinations such as 2-3 disks in a mirror, 4 in a
>> raidz2, or anything more than 4 disks. I'll try to test a 10.0 ISO
>> tomorrow if I can, either a fresh JPSNAP or RC3 if it is ready by the
>> time I am, maybe both.
>
> Good news: I believe this was a "hardware" error. VirtualBox (in SATA
> mode along with a virtual cdrom) and XenServer 6.0/6.2 appear to make a
> maximum of 3 virtual hard disks visible to the FreeBSD bootloader. This
> is easier to tell when booting from the CD since you can see it
> enumerate them, but if you are booting from disks, it may not get that
> far. Interestingly, when you tell VirtualBox to use SCSI disks, you max
> out at 4 bootable disks instead of 3. Installation then works on 4
> disks but not 5 (understandably). Thus the symptoms are consistent with
> that limit, and it is not a fault of the installer/installation. I've
> heard of similar issues on real hardware, but since this is a new
> install, nothing should be lost.
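>
> For anyone checking this on their own setup: escape to the loader
> prompt from the boot menu and run "lsdev" -- each disk the bootloader
> can see shows up as a diskN entry, so it's a quick way to count how
> many of the virtual disks made it that far:
>
>   OK lsdev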
>
> Thanks for making the improvements and bug fixes!
>
Thank you very much for testing! And I'm very happy it was a limitation
of VirtualBox. IMHO, the new module is doing a great job of making it
easier to test more combinations, learn these limitations, and share
them with others along the way.
> The below issue stands, but I'd say it is not urgent for 10.0.
>
Yeah, I'll have to do some testing to see the best way to deal with that
(I agree that avoiding the export/re-import entirely would be ideal --
I'll have to investigate).
--
Devin
>>
>> I also found another issue, not very dire: If you install to X disks
>> (creating the default "zroot" pool), then reinstall on X-1 or fewer
>> disks with the same pool name, the install fails with:
>> "cannot import 'zroot': more than one matching pool
>> import by numeric ID instead"
>> because it sees both the old and the new zroot (which makes sense,
>> since it should not be touching disks we didn't ask about):
>>
>> DEBUG: zfs_create_boot: Temporarily exporting ZFS pool(s)...
>> DEBUG: zfs_create_boot: zpool export "zroot"
>> DEBUG: zfs_create_boot: retval=0 <no output>
>> DEBUG: zfs_create_boot: gnop destroy "ada0p3.nop"
>> DEBUG: zfs_create_boot: retval=0 <no output>
>> DEBUG: zfs_create_boot: gnop destroy "ada1p3.nop"
>> DEBUG: zfs_create_boot: retval=0 <no output>
>> DEBUG: zfs_create_boot: gnop destroy "ada2p3.nop"
>> DEBUG: zfs_create_boot: retval=0 <no output>
>> DEBUG: zfs_create_boot: Re-importing ZFS pool(s)...
>> DEBUG: zfs_create_boot: zpool import -o altroot="/mnt" "zroot"
>> DEBUG: zfs_create_boot: retval=1 <output below>
>> cannot import 'zroot': more than one matching pool
>> import by numeric ID instead
>> DEBUG: f_dialog_max_size: dialog --print-maxsize = [MaxSize: 25, 80]
>> DEBUG: f_getvar: var=[height] value=[6] r=0
>> DEBUG: f_getvar: var=[width] value=[54] r=0
>>
>> Full log at: http://p.bsd-unix.net/p2juq9y25
>>
>> Workaround: use a different pool name, or use a shell to manually run
>> zpool labelclear on the partitions still carrying the old pool label
>> (an advanced-user operation).
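>> For example, from the installer's shell -- assuming the stale label is
>> on the leftover disk's p3 partition, as in the default layout (the
>> disk name here is illustrative):
>>
>>   zpool labelclear -f /dev/ada3p3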
>>
>> Suggested solution: avoid exporting and importing the pool? I don't
>> think you need to unload gnop; ZFS should be able to find the
>> underlying partitions fine on its own at the next boot, and the
>> install would go quicker without the export and import. Or were you
>> doing it for another reason, such as the cache file?
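>>
>> (For reference, the log above looks like the tail end of the usual
>> 4K-alignment trick: the pool is presumably created on gnop(8) devices
>> that advertise 4096-byte sectors, and the export/import is what lets
>> the .nop devices be destroyed underneath it; roughly:
>>
>>   gnop create -S 4096 ada0p3           # fake a 4K sector size
>>   zpool create zroot ada0p3.nop ...    # pool built on the .nop devices
>>   zpool export zroot
>>   gnop destroy ada0p3.nop              # and likewise for the others
>>   zpool import -o altroot=/mnt zroot   # re-import on the raw partitions
>> )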
>>
>> Alternative: would it be possible to determine the pool's numeric ID
>> before exporting, so that it can be used for the import? But that
>> would be adding complexity, as opposed to removing complexity by
>> eliminating the export/import if possible.
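>>
>> An untested sketch of that idea (parsing the plain "zpool get" output,
>> where the value sits in the third column of the second line):
>>
>>   guid=$(zpool get guid zroot | awk 'NR == 2 { print $3 }')
>>   zpool export zroot
>>   # ... gnop destroy, as before ...
>>   zpool import -o altroot=/mnt "$guid"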