ZFS: I/O error - blocks larger than 16777216 are not supported
KIRIYAMA Kazuhiko
kiri at kx.openedu.org
Tue Jun 26 02:08:41 UTC 2018
At Thu, 21 Jun 2018 10:48:28 +0300,
Toomas Soome wrote:
>
>
>
> > On 21 Jun 2018, at 09:00, KIRIYAMA Kazuhiko <kiri at kx.openedu.org> wrote:
> >
> > At Wed, 20 Jun 2018 23:34:48 -0400,
> > Allan Jude wrote:
> >>
> >> On 2018-06-20 21:36, KIRIYAMA Kazuhiko wrote:
> >>> Hi all,
> >>>
> >>> I reported a problem where ZFS boot is disabled [1], and found
> >>> that this issue comes from the RAID configuration [2]. So I
> >>> rebuilt the array with RAID5 and re-installed 12.0-CURRENT
> >>> (r333982), but it failed to boot with:
> >>>
> >>> ZFS: i/o error - all block copies unavailable
> >>> ZFS: can't read MOS of pool zroot
> >>> gptzfsboot: failed to mount default pool zroot
> >>>
> >>> FreeBSD/x86 boot
> >>> ZFS: I/O error - blocks larger than 16777216 are not supported
> >>> ZFS: can't find dataset u
> >>> Default: zroot/<0x0>:
> >>>
> >>> In this case, the reason is "blocks larger than 16777216 are
> >>> not supported", and I guess this means that datasets with a
> >>> recordsize greater than 8GB are NOT supported by the
> >>> FreeBSD boot loader (zpool-features(7)). Is that true?
> >>>
> >>> My zpool features are as follows:
> >>>
> >>> # kldload zfs
> >>> # zpool import
> >>> pool: zroot
> >>> id: 13407092850382881815
> >>> state: ONLINE
> >>> status: The pool was last accessed by another system.
> >>> action: The pool can be imported using its name or numeric identifier and
> >>> the '-f' flag.
> >>> see: http://illumos.org/msg/ZFS-8000-EY
> >>> config:
> >>>
> >>> zroot ONLINE
> >>> mfid0p3 ONLINE
> >>> # zpool import -fR /mnt zroot
> >>> # zpool list
> >>> NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
> >>> zroot 19.9T 129G 19.7T - 0% 0% 1.00x ONLINE /mnt
> >>> # zpool get all zroot
> >>> NAME PROPERTY VALUE SOURCE
> >>> zroot size 19.9T -
> >>> zroot capacity 0% -
> >>> zroot altroot /mnt local
> >>> zroot health ONLINE -
> >>> zroot guid 13407092850382881815 default
> >>> zroot version - default
> >>> zroot bootfs zroot/ROOT/default local
> >>> zroot delegation on default
> >>> zroot autoreplace off default
> >>> zroot cachefile none local
> >>> zroot failmode wait default
> >>> zroot listsnapshots off default
> >>> zroot autoexpand off default
> >>> zroot dedupditto 0 default
> >>> zroot dedupratio 1.00x -
> >>> zroot free 19.7T -
> >>> zroot allocated 129G -
> >>> zroot readonly off -
> >>> zroot comment - default
> >>> zroot expandsize - -
> >>> zroot freeing 0 default
> >>> zroot fragmentation 0% -
> >>> zroot leaked 0 default
> >>> zroot feature@async_destroy enabled local
> >>> zroot feature@empty_bpobj active local
> >>> zroot feature@lz4_compress active local
> >>> zroot feature@multi_vdev_crash_dump enabled local
> >>> zroot feature@spacemap_histogram active local
> >>> zroot feature@enabled_txg active local
> >>> zroot feature@hole_birth active local
> >>> zroot feature@extensible_dataset enabled local
> >>> zroot feature@embedded_data active local
> >>> zroot feature@bookmarks enabled local
> >>> zroot feature@filesystem_limits enabled local
> >>> zroot feature@large_blocks enabled local
> >>> zroot feature@sha512 enabled local
> >>> zroot feature@skein enabled local
> >>> zroot unsupported@com.delphix:device_removal inactive local
> >>> zroot unsupported@com.delphix:obsolete_counts inactive local
> >>> zroot unsupported@com.delphix:zpool_checkpoint inactive local
> >>> #
> >>>
> >>> Regards
> >>>
> >>> [1] https://lists.freebsd.org/pipermail/freebsd-current/2018-March/068886.html
> >>> [2] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=151910
> >>>
> >>> ---
> >>> KIRIYAMA Kazuhiko
> >>>
> >>
> >> I am guessing it means something is corrupt, as 16MB is the maximum size
> >> of a record in ZFS. Also, the 'large_blocks' feature is 'enabled', not
> >> 'active', so this suggests you do not have any records larger than 128KB
> >> on your pool.
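(For reference, since large_blocks is only "enabled" here, a check like the following should show whether any dataset has a recordsize above 128K, assuming the pool is imported under /mnt as above; this is just a note, not part of my transcript:)

# list the recordsize of every dataset in the pool
zfs get -r -o name,value recordsize zroot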
> >
> > As I mentioned above, [2] suggests that ZFS on RAID disks has
> > serious bugs except for mirror configurations. Anyway, I have given
> > up on using ZFS on RAID{5,6}* until Bug 151910 [2] is fixed.
> >
>
> If you boot from a USB stick (or CD), press Esc at the boot loader menu and enter lsdev -v. What sector and disk sizes are reported?
OK lsdev -v
disk devices:
disk0: BIOS drive C (31588352 X 512)
disk0p1: FreeBSD boot 512KB
disk0p2: FreeBSD UFS 13GB
disk0p3: FreeBSD swap 771MB
disk1: BIOS drive D (4294967295 X 512)
disk1p1: FreeBSD boot 512KB
disk1p2: FreeBSD swap 128GB
disk1p3: FreeBSD ZFS 19TB
OK
Does this mean that the whole disk size I can use is
2TB (4294967295 X 512)?
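(Doing the arithmetic, assuming 512-byte sectors:)

# 4294967295 sectors of 512 bytes is about 2 TiB
echo $((4294967295 * 512))     # 2199023255040 bytes
# and 4294967295 is exactly 2^32 - 1, the largest 32-bit sector count
echo $(((1 << 32) - 1))        # 4294967295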
>
> the issue [2] is a mix of ancient FreeBSD (v8.1 is mentioned there) and RAID LUNs with a 512B sector size and 15TB (!) total size. Are you really sure your BIOS can actually address a 15TB LUN (with a 512B sector size)? Note that the problem with large disks can hide itself until the pool is filled up enough that essential files are stored above the limit, meaning that you may have a "perfectly working" setup until, at some point in time after the next update, it suddenly stops working.
>
I see, so that is why it could work for a while.
> Note that for the BIOS version of the boot loader we have only INT13h, and it really is limited. The UEFI version uses the EFI_BLOCK_IO API, which usually can handle large sectors and disk sizes better.
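(If it is useful, I believe the installed system can report which firmware interface it was booted with via a sysctl like the one below; this is just a note, not from my transcript:)

# prints "BIOS" or "UEFI" depending on how the kernel was booted
sysctl -n machdep.bootmethod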
I re-installed the machine with UEFI boot:
# gpart show mfid0
=> 40 42965401520 mfid0 GPT (20T)
40 409600 1 efi (200M)
409640 2008 - free - (1.0M)
411648 268435456 2 freebsd-swap (128G)
268847104 42696552448 3 freebsd-zfs (20T)
42965399552 2008 - free - (1.0M)
# uname -a
FreeBSD vm.openedu.org 12.0-CURRENT FreeBSD 12.0-CURRENT #0 r335317: Mon Jun 18 16:21:17 UTC 2018 root@releng3.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64
# zpool get all zroot
NAME PROPERTY VALUE SOURCE
zroot size 19.9T -
zroot capacity 0% -
zroot altroot - default
zroot health ONLINE -
zroot guid 11079446129259852576 default
zroot version - default
zroot bootfs zroot/ROOT/default local
zroot delegation on default
zroot autoreplace off default
zroot cachefile - default
zroot failmode wait default
zroot listsnapshots off default
zroot autoexpand off default
zroot dedupditto 0 default
zroot dedupratio 1.00x -
zroot free 19.9T -
zroot allocated 1.67G -
zroot readonly off -
zroot comment - default
zroot expandsize - -
zroot freeing 0 default
zroot fragmentation 0% -
zroot leaked 0 default
zroot bootsize - default
zroot checkpoint - -
zroot feature@async_destroy enabled local
zroot feature@empty_bpobj active local
zroot feature@lz4_compress active local
zroot feature@multi_vdev_crash_dump enabled local
zroot feature@spacemap_histogram active local
zroot feature@enabled_txg active local
zroot feature@hole_birth active local
zroot feature@extensible_dataset enabled local
zroot feature@embedded_data active local
zroot feature@bookmarks enabled local
zroot feature@filesystem_limits enabled local
zroot feature@large_blocks enabled local
zroot feature@sha512 enabled local
zroot feature@skein enabled local
zroot feature@device_removal enabled local
zroot feature@obsolete_counts enabled local
zroot feature@zpool_checkpoint enabled local
#
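(For comparison with what the loader reports below, the sizes the kernel itself sees for the LUN could be checked with something like this; I have not included its output here:)

# report sector size and media size (in bytes and in sectors) for the LUN
diskinfo -v mfid0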
and checked 'lsdev -v' at the loader prompt:
OK lsdev -v
PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/VenHw(CF31FAC5-C24E-11D2-85F3-00A0C93EC93B,80)
disk0: 4294967295 X 512 blocks
disk0p1: EFI 200MB
disk0p2: FreeBSD swap 128GB
disk0p3: FreeBSD ZFS 19TB
net devices:
zfs devices:
pool: zroot
bootfs: zroot/ROOT/default
config:
NAME STATE
zroot ONLINE
mfid0p3 ONLINE
OK
but the disk size (4294967295 X 512) is still unchanged. Or does this
mean 4294967295 X 512 X 512 bytes?
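(Checking the arithmetic against the gpart output above, assuming 512-byte sectors:)

# neither reading matches the real ~20TB LUN:
echo $((4294967295 * 512))         # 2199023255040 bytes, about 2 TiB
echo $((4294967295 * 512 * 512))   # 1125899906580480 bytes, about 1 PiB
# the freebsd-zfs partition alone is 42696552448 sectors, which does not
# fit in 32 bits, so 4294967295 looks like a truncated sector count:
echo $((42696552448 > 4294967295)) # prints 1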
>
> rgds,
> toomas
>
Regards
---
KIRIYAMA Kazuhiko