zpool asize problem on 11.0
Steven Hartland
killing at multiplay.co.uk
Thu Jan 12 23:30:53 UTC 2017
On 12/01/2017 22:57, Stefan Bethke wrote:
> On 12.01.2017 at 23:29, Stefan Bethke <stb at lassitu.de> wrote:
>> I’ve just created two pools on a freshly partitioned disk, using 11.0 amd64, and the shift appears to be 9:
>>
>> # zpool status -v host
>>   pool: host
>>  state: ONLINE
>> status: One or more devices are configured to use a non-native block size.
>>         Expect reduced performance.
>> action: Replace affected devices with devices that support the
>>         configured block size, or migrate data to a properly configured
>>         pool.
>>   scan: none requested
>> config:
>>
>>         NAME         STATE     READ WRITE CKSUM
>>         host         ONLINE       0     0     0
>>           gpt/host0  ONLINE       0     0     0  block size: 512B configured, 4096B native
>>
>> errors: No known data errors
>>
>> # zdb host | grep ashift
>> ashift: 9
>> ashift: 9
>>
>> But:
>> # sysctl vfs.zfs.min_auto_ashift
>> vfs.zfs.min_auto_ashift: 12
>>
>> Of course, I’ve noticed this only after restoring all the backups, and getting ready to put the box back into production.
>>
>> Is this expected behaviour? I guess there’s no simple fix, and I have to start over from scratch?
> I had falsely assumed that vfs.zfs.min_auto_ashift would be 12 in all circumstances. It appears that when running FreeBSD 11.0p2 in VirtualBox, it can be 9. My target disk was attached to the host and mapped into the VM as a "native disk image", but the 4k native sector size apparently got lost in that abstraction.
>
> The output above is with the disk installed in the target system with a native AHCI connection, and the system booted from that disk.
>
> I’ve certainly learned to double check the ashift property on creating pools.
>
The default value for vfs.zfs.min_auto_ashift is 9, so unless you
specifically set it to 12 you will get the behaviour you described.
Regards
Steve
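
For reference, the sysctl has to be raised before the pool is created, since ashift is fixed per vdev at creation time. A minimal sketch of the workaround (the pool name "tank" is a placeholder; gpt/host0 is the partition from the output above):

```shell
# Raise the minimum ashift so newly created vdevs use 4 KiB sectors.
# This FreeBSD sysctl only affects pools created after it is set.
sysctl vfs.zfs.min_auto_ashift=12

# Persist the setting across reboots.
echo 'vfs.zfs.min_auto_ashift=12' >> /etc/sysctl.conf

# Recreate the pool. An existing ashift=9 pool cannot be converted in
# place; it must be destroyed and rebuilt (restore data from backup).
zpool create tank gpt/host0

# Verify before restoring any data; this should now report "ashift: 12".
zdb -C tank | grep ashift
```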