ZFS: I/O error - blocks larger than 16777216 are not supported
Toomas Soome
tsoome at me.com
Thu Jun 21 07:48:46 UTC 2018
> On 21 Jun 2018, at 09:00, KIRIYAMA Kazuhiko <kiri at kx.openedu.org> wrote:
>
> At Wed, 20 Jun 2018 23:34:48 -0400,
> Allan Jude wrote:
>>
>> On 2018-06-20 21:36, KIRIYAMA Kazuhiko wrote:
>>> Hi all,
>>>
>>> I've previously reported a ZFS boot failure [1], and found
>>> that this issue arises from the RAID configuration [2]. So I
>>> rebuilt with RAID5 and re-installed 12.0-CURRENT
>>> (r333982), but it failed to boot with:
>>>
>>> ZFS: i/o error - all block copies unavailable
>>> ZFS: can't read MOS of pool zroot
>>> gptzfsboot: failed to mount default pool zroot
>>>
>>> FreeBSD/x86 boot
>>> ZFS: I/O error - blocks larger than 16777216 are not supported
>>> ZFS: can't find dataset u
>>> Default: zroot/<0x0>:
>>>
>>> In this case, the reason is "blocks larger than 16777216 are
>>> not supported", and I guess this means datasets with a
>>> recordsize larger than 16777216 bytes (16MB) are NOT supported
>>> by the FreeBSD boot loader (zpool-features(7)). Is that true?
>>>
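A quick way to check whether any dataset on the pool actually carries a non-default recordsize; a sketch against the pool as imported in the listing that follows, where -r recurses over all datasets and -s local restricts the output to locally set values:

# zfs get -r -s local recordsize zroot

With large_blocks merely enabled, nothing above the 128K default should show up.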
>>> My zpool features are as follows:
>>>
>>> # kldload zfs
>>> # zpool import
>>> pool: zroot
>>> id: 13407092850382881815
>>> state: ONLINE
>>> status: The pool was last accessed by another system.
>>> action: The pool can be imported using its name or numeric identifier and
>>> the '-f' flag.
>>> see: http://illumos.org/msg/ZFS-8000-EY
>>> config:
>>>
>>> zroot ONLINE
>>> mfid0p3 ONLINE
>>> # zpool import -fR /mnt zroot
>>> # zpool list
>>> NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
>>> zroot 19.9T 129G 19.7T - 0% 0% 1.00x ONLINE /mnt
>>> # zpool get all zroot
>>> NAME PROPERTY VALUE SOURCE
>>> zroot size 19.9T -
>>> zroot capacity 0% -
>>> zroot altroot /mnt local
>>> zroot health ONLINE -
>>> zroot guid 13407092850382881815 default
>>> zroot version - default
>>> zroot bootfs zroot/ROOT/default local
>>> zroot delegation on default
>>> zroot autoreplace off default
>>> zroot cachefile none local
>>> zroot failmode wait default
>>> zroot listsnapshots off default
>>> zroot autoexpand off default
>>> zroot dedupditto 0 default
>>> zroot dedupratio 1.00x -
>>> zroot free 19.7T -
>>> zroot allocated 129G -
>>> zroot readonly off -
>>> zroot comment - default
>>> zroot expandsize - -
>>> zroot freeing 0 default
>>> zroot fragmentation 0% -
>>> zroot leaked 0 default
>>> zroot feature@async_destroy enabled local
>>> zroot feature@empty_bpobj active local
>>> zroot feature@lz4_compress active local
>>> zroot feature@multi_vdev_crash_dump enabled local
>>> zroot feature@spacemap_histogram active local
>>> zroot feature@enabled_txg active local
>>> zroot feature@hole_birth active local
>>> zroot feature@extensible_dataset enabled local
>>> zroot feature@embedded_data active local
>>> zroot feature@bookmarks enabled local
>>> zroot feature@filesystem_limits enabled local
>>> zroot feature@large_blocks enabled local
>>> zroot feature@sha512 enabled local
>>> zroot feature@skein enabled local
>>> zroot unsupported@com.delphix:device_removal inactive local
>>> zroot unsupported@com.delphix:obsolete_counts inactive local
>>> zroot unsupported@com.delphix:zpool_checkpoint inactive local
>>> #
>>>
>>> Regards
>>>
>>> [1] https://lists.freebsd.org/pipermail/freebsd-current/2018-March/068886.html
>>> [2] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=151910
>>>
>>> ---
>>> KIRIYAMA Kazuhiko
>>>
>>
>> I am guessing it means something is corrupt, as 16MB is the maximum size
>> of a record in ZFS. Also, the 'large_blocks' feature is 'enabled', not
>> 'active', so this suggests you do not have any records larger than 128KB
>> on your pool.
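The feature state can be confirmed directly; a minimal check along these lines, with the pool and feature names taken from the listing above:

# zpool get feature@large_blocks zroot
NAME   PROPERTY              VALUE    SOURCE
zroot  feature@large_blocks  enabled  local

"enabled" means the feature is available but no block larger than 128KB has ever been written; it would read "active" otherwise.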
>
> As I mentioned above, [2] suggests that ZFS on RAID disks has
> serious bugs except in mirror configurations. Anyway, I have given
> up on using ZFS on RAID{5,6}* until Bug 151910 [2] is fixed.
>
If you boot from a USB stick (or CD), press Esc at the boot loader menu to get to the loader prompt and enter "lsdev -v". What sector and disk sizes are reported?
The issue in [2] is a mix of an ancient FreeBSD (v8.1 is mentioned there) and RAID LUNs with a 512B sector size and a 15TB total size. Are you really sure your BIOS can actually address a 15TB LUN with 512B sectors? Note that the problem with large disks can hide itself until the pool fills up enough that essential files end up stored above the addressable limit, meaning you may have a "perfectly working" setup until, at some point after an update, it suddenly stops working.
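To put rough numbers on it, a back-of-the-envelope sketch with bc(1), assuming the 15TB size and 512B sectors mentioned in [2]:

# echo '15 * 2^40 / 512' | bc
32212254720
# echo '2^32 * 512 / 2^40' | bc
2

That is over 32 billion sectors on the LUN, while any firmware path that truncates LBAs to 32 bits tops out at 2TB with 512B sectors.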
Note that for the boot loader we have only INT13h in the BIOS version, and it really is limited. The UEFI version uses the EFI_BLOCK_IO protocol, which usually handles large sectors and disk sizes better.
rgds,
toomas