Re: ZFS pool/filesystem space oddity
Date: Sun, 03 Mar 2024 12:33:02 UTC
On 3/3/24 04:58, Steve O'Hara-Smith wrote:
> Hi,
>
> My NAS runs a ZFS pool called archive striped over two mirrored
> pairs of 4TB drives. It all works fine but I had a space eater recently and
> so have been paying closer attention to space reports than normal - which
> is why I noticed the oddity that the filesystems in the pool are reported
> as having more free space than the pool.
>
> Why is this ? Which is right ?
>
> From zpool iostat -v
>
>                      capacity     operations     bandwidth
> pool               alloc   free   read  write   read  write
> -----------------  -----  -----  -----  -----  -----  -----
> archive            4.38T  2.87T     59     52   880K  1.14M
>
> One of the filesystems in the archive pool:
>
> ✓ steve@holdall ~ $ df -H /data
> Filesystem      Size    Used   Avail  Capacity  Mounted on
> archive/data    7.0T    4.0T    3.0T       57%  /data

ZFS filesystems do not report space the way df expects, so df shows
oddities such as a "Size" that appears to change over time: df's Size
column is simply Used + Avail, and Avail is the space currently
available to that dataset rather than part of a fixed filesystem size.
Space accounting is further complicated by snapshots, refreservation,
compression, etc. zpool also counts space, such as reserved space, that
zfs and other tools do not report as free, and it understands pool
redundancy, which is outside the scope of an individual filesystem.

I use `zfs list -ro space archive` a lot, though customized output is
easy to get, for example `zfs list -t snapshot -ro name,used -s used`.
These keep you looking at space as the filesystem sees it rather than
as the pool sees it, and they do so with more knowledge of how ZFS is
working.
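As a made-up example (the numbers below are invented, not taken from
your system), `zfs list -ro space archive` breaks usage down per
dataset, which makes it much easier to see where the space actually
went:

  $ zfs list -ro space archive
  NAME          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
  archive       2.78T  4.38T        0B     96K             0B      4.38T
  archive/data  2.78T  3.60T      180G   3.42T             0B         0B

USEDSNAP in particular is often where a "space eater" hides: data
deleted from the filesystem stays pinned by snapshots until those
snapshots are destroyed.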