kern/154930: [zfs] cannot delete/unlink file from full volume -> ENOSPC
soralx at cydem.org
Sun Jun 19 14:20:12 UTC 2011
The following reply was made to PR kern/154930; it has been noted by GNATS.
From: <soralx at cydem.org>
To: <bug-followup at FreeBSD.org>, <mm at FreeBSD.org>
Cc: <mandree at FreeBSD.org>
Subject: Re: kern/154930: [zfs] cannot delete/unlink file from full volume -> ENOSPC
Date: Sun, 19 Jun 2011 06:54:26 -0700
All,
I encountered a similar snag, only worse, with no solution.
The server has a 7 TB ZFS volume consisting of 5 WD2001FASS drives
connected to a PERC 5/i in RAID5 (originally there were 4 disks, but the
RAID was expanded by adding another disk and rebuilding). On June 12th
(the date of an OS rebuild, both world and kernel; the previous update
was on June 6th), the pool was at v15 and had 12 GB free. By June 14th,
the pool already had 0 bytes free (found out from the periodic daily run
output e-mail). It is unknown whether the FS was accessed at all during
those two days; however, `find ./ -newerBt 2011-06-10 -print` returns
just one small file dated 2011-06-11, and `find ./ -newermt 2011-06-10`
returns ~20 files with a total size of <1 GB (BTW, these are backups,
and nobody could have touched them, so it's a mystery why they have a
recent modification time). So, the question is: where could 12 GB have
suddenly disappeared to?
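In case it helps to answer that, the obvious next step is probably to
break the usage down with the standard `usedby*` properties and check
the pool history (a sketch only, assuming those properties are available
at this pool version):
# zfs list -o space tst
# zfs get -r usedbydataset,usedbysnapshots,usedbychildren,usedbyrefreservation tst
# zpool history tst | tail -20
The first two should show whether the missing space is charged to the
dataset itself, to snapshots, or to a refreservation; `zpool history`
should show whether anything was done to the pool during those two days.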
Further, on June 18th, the pool was mounted when a `zpool upgrade tst`
command was issued. The upgrade to v28 succeeded, but then I found that
deleting files, large or small, was impossible:
# rm ./qemu0.raw
rm: ./qemu0.raw: No space left on device
# truncate -s0 ./qemu0.raw
truncate: ./qemu0.raw: No space left on device
# cat /dev/null > ./qemu0.raw
./qemu0.raw: No space left on device.
Snapshots, compression, and deduplication have never been used on this volume.
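(For completeness: this also means the usual ways out of a full-pool
ENOSPC, destroying a snapshot or releasing a reservation, do not apply
here. On a pool that did use them, something along these lines would be
the standard escape hatch; the dataset and snapshot names below are
hypothetical:
# zfs destroy tst/somedataset@somesnapshot
# zfs set reservation=none tst/somedataset
Neither is possible on this pool.)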
Also, these messages appear in dmesg:
Solaris: WARNING: metaslab_free_dva(): bad DVA 0:5978620460544
Solaris: WARNING: metaslab_free_dva(): bad DVA 0:5978620461568
Solaris: WARNING: metaslab_free_dva(): bad DVA 0:5993168926208
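If it would help to diagnose the bad-DVA warnings, a leak check with zdb
is presumably the next step (just a sketch of what could be tried; as far
as I know, zdb output is only reliable with the pool quiesced or
exported):
# zdb -b tst
# zdb -m tst
The first traverses all blocks and compares the total against the space
maps (reporting leaked space if they disagree); the second prints the
per-vdev metaslab information that those warnings relate to.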
Contrary to what the pool's name might suggest, this is not test storage,
but has valuable data on it. Help!
>Environment:
System: FreeBSD cydem.org 8.2-STABLE FreeBSD 8.2-STABLE #0: Sun Jun 12 07:55:32 PDT 2011 soralx at cydem.org:/usr/obj/usr/src/sys/CYDEM amd64
`df`:
Filesystem  1K-blocks       Used Avail Capacity  Mounted on
tst        7650115271 7650115271     0   100%    /stor1-tst
`zpool list`:
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tst   5.44T  5.36T  80.6G    98%  1.00x  ONLINE  -
`zfs list`:
NAME   USED  AVAIL  REFER  MOUNTPOINT
tst   7.12T      0  7.12T  /stor1-tst
`zpool status`:
  pool: tst
 state: ONLINE
  scan: scrub in progress since Sat Jun 18 06:32:37 2011
        1.82T scanned out of 5.36T at 350M/s, 2h56m to go
        0 repaired, 34.02% done
config:

        NAME        STATE     READ WRITE CKSUM
        tst         ONLINE       0     0     0
          mfid1     ONLINE       0     0     0

errors: No known data errors
[this completed with 'scrub repaired 0 in 5h47m with 0 errors' at 133% done,
i.e. it scrubbed all 7.12 TB].
--
[SorAlx] ridin' VN2000 Classic LT