ZFS: I/O error - blocks larger than 16777216 are not supported
Toomas Soome
tsoome at me.com
Thu Jun 21 05:38:25 UTC 2018
> On 21 Jun 2018, at 06:34, Allan Jude <allanjude at freebsd.org> wrote:
>
> On 2018-06-20 21:36, KIRIYAMA Kazuhiko wrote:
>> Hi all,
>>
>> I reported a problem with ZFS boot being disabled [1], and found
>> that the issue comes from the RAID configuration [2]. So I
>> rebuilt with RAID5 and re-installed 12.0-CURRENT
>> (r333982), but it failed to boot with:
>>
>> ZFS: i/o error - all block copies unavailable
>> ZFS: can't read MOS of pool zroot
>> gptzfsboot: failed to mount default pool zroot
>>
>> FreeBSD/x86 boot
>> ZFS: I/O error - blocks larger than 16777216 are not supported
>> ZFS: can't find dataset u
>> Default: zroot/<0x0>:
>>
>> In this case, the reason is "blocks larger than 16777216 are
>> not supported", and I guess this means that datasets with a
>> recordsize larger than 16777216 (16 MB) are NOT supported by the
>> FreeBSD boot loader (zpool-features(7)). Is that true?
>>
>> My zpool features are as follows:
>>
>> # kldload zfs
>> # zpool import
>> pool: zroot
>> id: 13407092850382881815
>> state: ONLINE
>> status: The pool was last accessed by another system.
>> action: The pool can be imported using its name or numeric identifier and
>> the '-f' flag.
>> see: http://illumos.org/msg/ZFS-8000-EY
>> config:
>>
>>     zroot        ONLINE
>>       mfid0p3    ONLINE
>> # zpool import -fR /mnt zroot
>> # zpool list
>> NAME    SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
>> zroot  19.9T   129G  19.7T         -    0%   0%  1.00x  ONLINE  /mnt
>> # zpool get all zroot
>> NAME PROPERTY VALUE SOURCE
>> zroot size 19.9T -
>> zroot capacity 0% -
>> zroot altroot /mnt local
>> zroot health ONLINE -
>> zroot guid 13407092850382881815 default
>> zroot version - default
>> zroot bootfs zroot/ROOT/default local
>> zroot delegation on default
>> zroot autoreplace off default
>> zroot cachefile none local
>> zroot failmode wait default
>> zroot listsnapshots off default
>> zroot autoexpand off default
>> zroot dedupditto 0 default
>> zroot dedupratio 1.00x -
>> zroot free 19.7T -
>> zroot allocated 129G -
>> zroot readonly off -
>> zroot comment - default
>> zroot expandsize - -
>> zroot freeing 0 default
>> zroot fragmentation 0% -
>> zroot leaked 0 default
>> zroot feature@async_destroy enabled local
>> zroot feature@empty_bpobj active local
>> zroot feature@lz4_compress active local
>> zroot feature@multi_vdev_crash_dump enabled local
>> zroot feature@spacemap_histogram active local
>> zroot feature@enabled_txg active local
>> zroot feature@hole_birth active local
>> zroot feature@extensible_dataset enabled local
>> zroot feature@embedded_data active local
>> zroot feature@bookmarks enabled local
>> zroot feature@filesystem_limits enabled local
>> zroot feature@large_blocks enabled local
>> zroot feature@sha512 enabled local
>> zroot feature@skein enabled local
>> zroot unsupported@com.delphix:device_removal inactive local
>> zroot unsupported@com.delphix:obsolete_counts inactive local
>> zroot unsupported@com.delphix:zpool_checkpoint inactive local
>> #
>>
>> Regards
>>
>> [1] https://lists.freebsd.org/pipermail/freebsd-current/2018-March/068886.html
>> [2] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=151910
>>
>> ---
>> KIRIYAMA Kazuhiko
>>
>
> I am guessing it means something is corrupt, as 16 MB is the maximum size
> of a record in ZFS. Also, the 'large_blocks' feature is 'enabled', not
> 'active', so this suggests you do not have any records larger than 128 KB
> on your pool.
>
>
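On the large_blocks point: an easy way to confirm that no dataset is configured for records above 128K is to check the recordsize of every dataset and the feature state. A minimal sketch, assuming the pool is still imported under /mnt as in the transcript above:

  # zfs get -r recordsize zroot
  # zpool get feature@large_blocks zroot

As long as every dataset reports 128K or less and feature@large_blocks stays 'enabled' rather than 'active', oversized records cannot be what the loader is tripping over.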
Yes indeed, the value printed is 1 << 24 (16 MB), which is the current limit. However, I would start by reinstalling gptzfsboot on the freebsd-boot partition; a rough sketch of what I mean is below.
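Something along these lines -- a sketch only, and the disk name and partition index are assumptions (the transcript shows the pool on mfid0p3, and freebsd-boot is usually index 1, so verify with gpart show first):

  # gpart show mfid0
  # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 mfid0

Use the /boot/gptzfsboot from the newly installed 12.0-CURRENT system (or its install media), so the boot code written to disk is at least as new as the pool it has to read.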
rgds,
toomas