Re: Instance drives in AWS coming up with the wrong size

From: Pete French <pete_at_twisted.org.uk>
Date: Fri, 25 Feb 2022 10:08:13 UTC

On 24/02/2022 21:18, Chuck Tuffli wrote:
> On Tue, Feb 22, 2022 at 1:16 PM Pete French <pete@twisted.org.uk> wrote:
> ...
>> root@serpentine-vgay:/usr/home/webadmin # nvmecontrol identify nda2
>> Size:                        292968750 blocks
>> Capacity:                    292968750 blocks
>> Utilization:                 292968750 blocks
> ...
>> LBA Format #00: Data Size:   512  Metadata Size:     0  Performance: Best
> 
> This says the capacity is 140GB which matches with your expectations
> if I'm understanding correctly. Can you run:
>      nvmecontrol identify nvme2 | grep "Serial Number"
> via both ssh and from the serial console?

Serial Number:               AWS26C9F8A45429C4403

in both cases
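
(As a sanity check on the nvmecontrol figures above: 292968750 blocks 
* 512 bytes/block = 150,000,000,000 bytes, i.e. 150 GB, or roughly 
139.7 GiB - so that does match the expected ~140G size.)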

But I found an easier way to reproduce the bug. Over ssh, if I do 'su -' 
instead of 'su' then the wrong value appears:


$ su
Password:
root@serpentine-vgay:/usr/home/webadmin # diskinfo nda2
nda2	512	150000000000	292968750	131072	0
root@serpentine-vgay:/usr/home/webadmin # exit
$ su -
Password:
root@serpentine-vgay:~ # diskinfo nda2
nda2	512	1072431104	2094592	131072	0
root@serpentine-vgay:~ # logout
$
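
(Decoding those diskinfo lines - the fields should be device name, 
sector size, media size in bytes, media size in sectors, stripe size 
and stripe offset - the plain 'su' run reports the correct 
150000000000 bytes / 292968750 sectors, while the 'su -' run reports 
1072431104 bytes / 2094592 sectors, i.e. only about 1 GB.)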

That makes sense - as 'su -' simulates a full login, which is what I am 
doing on the serial console. But I was under the impression that the 
main difference between the two was in the environment variables which 
get set, and those seem to be identical.
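
In case it is useful, this is roughly how I compared the two 
environments - just capturing 'env' from each kind of root shell into 
scratch files (the /tmp filenames are arbitrary) and diffing them:

$ su
Password:
# env | sort > /tmp/env.su
# exit
$ su -
Password:
# env | sort > /tmp/env.su-login
# logout
$ diff /tmp/env.su /tmp/env.su-login
$

An empty diff there is what leads me to say the environments seem to 
be identical.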