Instance drives in AWS coming up with the wrong size

From: Pete French <pete_at_twisted.org.uk>
Date: Tue, 22 Feb 2022 19:11:35 UTC
So, I have a number of machines in AWS. They are all of type r5a.xlarge, 
which is supposed to have 140 gig of instance storage on it. All these 
machines started life as clones of the same drive, so are running the 
same OS kernel, and they all have mysql on them. Some of them also have 
Apache and some other software, but I thought the config of the base OS 
was the same. Certainly /boot/loader.conf and /etc/sysctl.conf are 
identical, and it's the same kernel and userland running on all of them.

But on the mysql-only machines the instance drives have the wrong 
size, around a gig, and are not recognised properly. For example, here is 
what it is supposed to look like (from one of the Apache machines):

root@sydney01:/usr/home/webadmin # diskinfo -v nda2
nda2
	512         	# sectorsize
	150000000000	# mediasize in bytes (140G)
	292968750   	# mediasize in sectors
	0           	# stripesize
	0           	# stripeoffset
	Amazon EC2 NVMe Instance Storage	# Disk descr.
	AWSB7ABDF8FE8D0597AF	# Disk ident.
	nvme2       	# Attachment
	Yes         	# TRIM/UNMAP support
	0           	# Rotation rate in RPM

and here is one of the ones which is wrong...

root@serpentine-sydy:~ # diskinfo -v nda2
nda2
         512             # sectorsize
         886571008       # mediasize in bytes (846M)
         1731584         # mediasize in sectors
         131072          # stripesize
         0               # stripeoffset
         No              # TRIM/UNMAP support
         Unknown         # Rotation rate in RPM


But both machines, in dmesg, have lines which look like this:

nda2 at nvme2 bus 0 scbus2 target 0 lun 1
nda2: <Amazon EC2 NVMe Instance Storage 0 AWSB7ABDF8FE8D0597AF>
nda2: Serial Number AWSB7ABDF8FE8D0597AF
nda2: nvme version 1.0 x0 (max x0) lanes PCIe Gen0 (max Gen0) link
nda2: 143051MB (292968750 512 byte sectors)

So on both of them the detection says it's the right size.
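
I guess the next thing to try is to ask the NVMe layer directly and 
compare that against what nda reports - something along these lines, 
assuming nvme2ns1 is the namespace behind nda2 (given the "lun 1" in 
the dmesg above):

nvmecontrol devlist
nvmecontrol identify nvme2
nvmecontrol identify nvme2ns1

If the namespace identify data already shows the ~846M size then it is 
being reported wrongly below nda; if it shows the full 292968750 
sectors then something between the nvme and GEOM layers is mangling it.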

The oddest thing is that this is completely reproducible between data 
centres - the machines above are in Sydney, but I get precisely 
the same result from the machines in North Virginia. So it's something 
about the config, but what on earth could it be?
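
For what it is worth, the obvious things to diff next between a good 
and a bad machine would be something like:

kldstat
camcontrol devlist
geom disk list nda2
gpart show nda2

in case an extra kernel module, a GEOM class, or a stale partition 
table on the instance store is getting in the way somehow.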

I am very puzzled - I have a set of database-only machines in Frankfurt, 
and they behave fine! Possibly I should just clone those to Australia 
and the US, but I would like to find out what the magic difference is 
between them.

-pete.