ZFS Problem - full disk, can't recover space :(.
Dr Josef Karthauser
josef.karthauser at unitedlane.com
Sun Mar 27 08:13:10 UTC 2011
On 27 Mar 2011, at 08:58, Jeremy Chadwick wrote:
> On Sun, Mar 27, 2011 at 08:13:44AM +0100, Dr Josef Karthauser wrote:
>> On 26 Mar 2011, at 21:54, Alexander Leidinger wrote:
>>>> Any idea on where the 23G has gone, or how I persuade the zpool to
>>>> return it? Why is the filesystem referencing storage that isn't being
>>>> used?
>>>
>>> I suggest a
>>> zfs list -r -t all void/store
>>> to make really sure we are seeing everything there is to see.
>>>
>>> Could it be that an application still has the 23G open?
>>>
>>>> p.s. this is FreeBSD 8.2 with ZFS pool version 15.
>>>
>>> The default for showing snapshots changed at some point. As long as you
>>> haven't configured the pool to show them (zpool get listsnapshots <pool>),
>>> they are not shown by default.
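
(To spell out that check - "void" being the pool here - something like:

    zpool get listsnapshots void
    zfs list -t snapshot -r void

would show the pool setting and list any snapshots regardless of it.)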
>>
>> Definitely no snapshots:
>>
>> infinity# zfs list -tall
>> NAME                           USED  AVAIL  REFER  MOUNTPOINT
>> void                          99.1G  24.8G  2.60G  legacy
>> void/home                     33.5K  24.8G  33.5K  /home
>> void/j                        87.5G  24.8G    54K  /j
>> void/j/buttsby                 136M  9.87G  2.40M  /j/buttsby
>> void/j/buttsby/home           34.5K  9.87G  34.5K  /j/buttsby/home
>> void/j/buttsby/local           130M  9.87G   130M  /j/buttsby/local
>> void/j/buttsby/tmp             159K  9.87G   159K  /j/buttsby/tmp
>> void/j/buttsby/var            3.97M  9.87G   104K  /j/buttsby/var
>> void/j/buttsby/var/db         2.40M  9.87G  1.55M  /j/buttsby/var/db
>> void/j/buttsby/var/db/pkg      866K  9.87G   866K  /j/buttsby/var/db/pkg
>> void/j/buttsby/var/empty        21K  9.87G    21K  /j/buttsby/var/empty
>> void/j/buttsby/var/log         838K  9.87G   838K  /j/buttsby/var/log
>> void/j/buttsby/var/mail        592K  9.87G   592K  /j/buttsby/var/mail
>> void/j/buttsby/var/run        30.5K  9.87G  30.5K  /j/buttsby/var/run
>> void/j/buttsby/var/tmp          23K  9.87G    23K  /j/buttsby/var/tmp
>> void/j/legacy-alpha           56.6G  3.41G  56.6G  /j/legacy-alpha
>> void/j/legacy-brightstorm     29.2G  10.8G  29.2G  /j/legacy-brightstorm
>> void/j/legacy-obleo           1.29G  1.71G  1.29G  /j/legacy-obleo
>> void/j/mesh                    310M  3.70G  2.40M  /j/mesh
>> void/j/mesh/home                21K  3.70G    21K  /j/mesh/home
>> void/j/mesh/local              305M  3.70G   305M  /j/mesh/local
>> void/j/mesh/tmp                 26K  3.70G    26K  /j/mesh/tmp
>> void/j/mesh/var               2.91M  3.70G   104K  /j/mesh/var
>> void/j/mesh/var/db            2.63M  3.70G  1.56M  /j/mesh/var/db
>> void/j/mesh/var/db/pkg        1.07M  3.70G  1.07M  /j/mesh/var/db/pkg
>> void/j/mesh/var/empty           21K  3.70G    21K  /j/mesh/var/empty
>> void/j/mesh/var/log             85K  3.70G    85K  /j/mesh/var/log
>> void/j/mesh/var/mail            24K  3.70G    24K  /j/mesh/var/mail
>> void/j/mesh/var/run           28.5K  3.70G  28.5K  /j/mesh/var/run
>> void/j/mesh/var/tmp             23K  3.70G    23K  /j/mesh/var/tmp
>> void/local                     282M  1.72G   282M  /local
>> void/mysql                      22K    78K    22K  /mysql
>> void/tmp                        55K  2.00G    55K  /tmp
>> void/usr                      1.81G  2.19G   275M  /usr
>> void/usr/obj                   976M  2.19G   976M  /usr/obj
>> void/usr/ports                 289M  2.19G   234M  /usr/ports
>> void/usr/ports/distfiles      54.8M  2.19G  54.8M  /usr/ports/distfiles
>> void/usr/ports/packages         21K  2.19G    21K  /usr/ports/packages
>> void/usr/src                   311M  2.19G   311M  /usr/src
>> void/var                      6.86G  3.14G   130K  /var
>> void/var/crash                22.5K  3.14G  22.5K  /var/crash
>> void/var/db                   6.86G  3.14G  58.3M  /var/db
>> void/var/db/mysql             6.80G  3.14G  4.79G  /var/db/mysql
>> void/var/db/mysql/innodbdata  2.01G  3.14G  2.01G  /var/db/mysql/innodbdata
>> void/var/db/pkg               2.00M  3.14G  2.00M  /var/db/pkg
>> void/var/empty                  21K  3.14G    21K  /var/empty
>> void/var/log                   642K  3.14G   642K  /var/log
>> void/var/mail                  712K  3.14G   712K  /var/mail
>> void/var/run                  49.5K  3.14G  49.5K  /var/run
>> void/var/tmp                    27K  3.14G    27K  /var/tmp
>>
>> This is the problematic filesystem:
>>
>> void/j/legacy-alpha           56.6G  3.41G  56.6G  /j/legacy-alpha
>>
>> No chance that an application is holding any data - I rebooted and came up
>> in single-user mode to try and get this resolved, but no cookie.
>
> Are these filesystems using compression? Are any quota or reservation
> settings set on them?
>
> "zfs get all" might help, but it'll be a lot of data. We don't mind.
>
Ok, here you are. ( http://www.josef-k.net/misc/zfsall.txt.bz2 )
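
For a more targeted look at the properties Jeremy asked about (just a
sketch - the dataset name is taken from the listing above, and the
usedby* breakdown assumes the pool is new enough, version 13 or later,
to provide it):

    zfs get compression,quota,reservation,refreservation void/j/legacy-alpha
    zfs get usedbysnapshots,usedbydataset,usedbychildren,usedbyrefreservation void/j/legacy-alpha

The usedby* figures show whether the 56.6G is charged to the dataset's
own data rather than to snapshots, children, or a refreservation.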
I suspect that the problem is the same as the one reported here:
http://web.archiveorange.com/archive/v/Lmwutp4HZLFDEkQ1UlX5
namely that there was a bug in the handling of sparse files on ZFS. The file that triggered the problem here is a Bayes database from SpamAssassin.
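
If the sparse-file theory holds, the dataset should be referencing blocks
that no visible file accounts for. A rough cross-check (a sketch - du only
counts what the directory tree exposes):

    du -sxh /j/legacy-alpha
    zfs list -o name,used,refer void/j/legacy-alpha

A du total far below the 56.6G REFER figure would be consistent with the
leaked-space behaviour described in that thread.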
Joe