ZFS Root size keeps going down after upgrade to 13.2-release

From: Kaya Saman <kayasaman_at_optiplex-networks.com>
Date: Mon, 21 Aug 2023 11:46:18 UTC
Hi all,


I have just upgraded my system from 13.1 -> 13.2, and now the file 
system size reported by 'df' keeps going down, with hardly any space 
shown as left on the drive:


# df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default     37G     36G    720M      98%  /


Yesterday, while performing the upgrade, I noticed something odd after 
deleting over 20GB of 'preview' files for one of my webapps in 
/usr/local/www/, which is hosted by the apache2 port: the file system 
was at around 97%, yet after the 'rm -rf *' command it went to 100%.

Just as an FYI, it's the standard upgrade from the FAQ page:

freebsd-update -r 13.2-RELEASE upgrade
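
For completeness, the full sequence from that page was roughly (from 
memory):

# freebsd-update -r 13.2-RELEASE upgrade
# freebsd-update install
# shutdown -r now

and then, after the reboot:

# freebsd-update install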


The root system is mirrored between two disks:

     NAME        STATE     READ WRITE CKSUM
     zroot       ONLINE       0     0     0
       mirror-0  ONLINE       0     0     0
         ada0p3  ONLINE       0     0     0
         ada1p3  ONLINE       0     0     0
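
The pool itself reports healthy. In case the pool-level numbers matter 
here, I assume the figure to compare against 'df' comes from:

# zpool list zroot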


Reading up a little, I found a posting where someone had been trying to 
delete snapshots that were taking up huge amounts of space. I don't 
think it is the same problem, since the upgrade snapshots do not take 
up that much, and I have already removed the larger obsolete snapshots 
containing FreeBSD 12.x.
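
In case it helps, this is roughly how I have been checking per-snapshot 
usage (recursive, sorted by size):

# zfs list -r -t snapshot -o name,used,refer -s used zroot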


Here is the output from 'zfs list -o space':

NAME                                          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
zroot                                          720M   107G        0B   2.20G             0B       105G
zroot/ROOT                                     720M  88.5G        0B     88K             0B      88.5G
zroot/ROOT/13.1-RELEASE-p5_2023-08-20_220010   720M     8K        0B      8K             0B         0B
zroot/ROOT/13.2-RELEASE-p2_2023-08-20_224534   720M     8K        0B      8K             0B         0B



The link I found, mentioned above, is this one: 
https://forums.freebsd.org/threads/zfs-how-to-properly-remove-unnecessary-snapshots-and-not-damage-data.85436/


For reference, here is the output of 'bectl list' on my system:


# bectl list
BE                                Active Mountpoint Space Created
13.1-RELEASE-p5_2023-08-20_220010 -      -          34.5M 2023-08-20 22:00
13.2-RELEASE-p2_2023-08-20_224534 -      -          9.56M 2023-08-20 22:45
default                           NR     /          88.5G 2018-08-07 02:53
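
If the old boot environments are safe to remove, I assume they could be 
destroyed with something like:

# bectl destroy 13.1-RELEASE-p5_2023-08-20_220010

though at 34.5M and 9.56M they clearly would not account for the 
missing space.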


What I really do not understand is why the file system 'size' keeps 
getting smaller, and how I can regain the space. I have probably 
deleted around 40GB of user files already, yet nothing is showing up as 
freed. I wonder if the recent update has done something to the root 
pool?


Would anyone be able to help with this one? Basically, I just want to 
free up space on the drive again, as with only 720M shown as left I am 
currently unable to even fetch the @ports collection. Cleaning out 
@ports should have given me quite a few GB of spare space, but nothing 
is being freed at all.
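
For what it's worth, the next thing I plan to try is a full recursive 
space breakdown, in case some dataset outside zroot/ROOT is holding the 
space:

# zfs list -o space -r zroot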


Many thanks.


Kaya