ZFS - directory entry

Dirk-Willem van Gulik dirkx at webweaving.org
Wed Dec 14 17:17:17 UTC 2016


> On 14 Dec 2016, at 17:14, Alan Somers <asomers at freebsd.org> wrote:
> 
> On Wed, Dec 14, 2016 at 8:27 AM, Dirk-Willem van Gulik
> <dirkx at webweaving.org> wrote:
>> A rather odd directory entry (in /root, the home dir of root/toor) appeared on a bog-standard, lightly loaded FreeBSD 10.2 (p18) machine under ZFS during/after a backup:
>> 
>> $ ls -la /root | tail -q
>> ----------   1 root  wheel  9223372036854775807 Jan  1  1970 ?%+?kD?H???x,?5?Dh;*s!?h???jw??????\h?:????????``?13?@?????OA????????Puux????<T]???R??Qv?g???]??%?R?
>> 
>> OS and ZFS are installed with a bog-standard sysinstall. Neither ‘SMART’ nor smartd has reported anything; nothing in dmesg, syslog or the boot log either. Any suggestions on how to debug this or get to the root of it?
>> 
>> And in particular - what is the risk that a reboot (to get a kernel with debug, etc.) causes the issue to ‘go away’ - and hence stops the forensics?
>> 
>> Dw.
>> 
>> sudo zpool list -v
>> NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
>> tank        25.2T  9.27T  16.0T         -    17%    36%  1.53x  ONLINE  -
>>  raidz3    25.2T  9.27T  16.0T         -    17%    36%
>>    ada0p3      -      -      -         -      -      -
>>    ada1p3      -      -      -         -      -      -
>>    ada2p3      -      -      -         -      -      -
>>    ada3p3      -      -      -         -      -      -
>>    ada4p3      -      -      -         -      -      -
>>    ada5p3      -      -      -         -      -      -
>>    ada6p3      -      -      -         -      -      -

Most wonderful - I did not know about the inode/zdb magic. Thanks!

> A few things to try:
> 1) zpool scrub.  This will reveal any corrupt metadata objects.

Ok - some 300 hours to go :) So I am now trying to figure out why it is running at just 8 MB/s (prefetch_disable=1, vfs.zfs.scrub_delay=0).
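
For reference, this is roughly what I am watching and poking at; the sysctl names are the stock FreeBSD 10.x scrub knobs, but the values below are just what I am experimenting with, not a recommendation:

	# watch scrub progress and the estimated time to completion
	zpool status -v tank

	# scrub-related tunables on FreeBSD 10.x (values are experimental guesses)
	sysctl vfs.zfs.prefetch_disable=1     # already set, as mentioned above
	sysctl vfs.zfs.scrub_delay=0          # no artificial delay between scrub I/Os
	sysctl vfs.zfs.scan_idle=5            # consider the pool idle sooner
	sysctl vfs.zfs.top_maxinflight=128    # more concurrent scrub I/Os per top-level vdev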

> 2) Maybe the filename is created in an encoding not supported by your
> current terminal.  Try "LANG=en_US.UTF-8 ls -l"

No cookie (and not overly likely - this is a barebones install which is not visible to anything ‘modern’ but ssh et al.).
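
For anyone following along, a rough way to look at the raw bytes of the name and to reference the entry without having to type it (plain base-system tools, nothing ZFS-specific; the inode number is the one that shows up further below):

	# dump the raw bytes of the directory entries so the terminal cannot mangle them
	ls /root | od -c | less

	# list inode numbers next to the names; the odd entry stands out by its size
	ls -lai /root

	# refer to the entry by inode rather than by name
	find /root -maxdepth 1 -xdev -inum 7426414 -ls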

> 3) Use zdb to examine the file.  First, do "ls -li /root" to get the
> object id.  It's the same as the inode number.  Then, assuming /root
> is in the tank/root filesystem, do "zdb -ddddd tank/root <object id>".
> That might reveal some clues.

A:
	zdb -ddddd tank/root  7426414

gives, after a 50-second pause (pre/during the ‘zpool scrub’):

	Dataset tank/root [ZPL], ID 40, cr_txg 6, 902M, 14669 objects, rootbp DVA[0]=<0:4c000:4000> DVA[1]=<0:4c00004c000:4000> [L0 DMU objset] fletcher4 uncompressed LE contiguous unique double size=800L/800P birth=225L/225P fill=14669 cksum=9c7252c3b:ad096bfa68f:7b6298f1d2648:4235b444c02eba0

	    Object  lvl   iblk   dblk  dsize  lsize   %full  type

	zdb: dmu_bonus_hold(7426414) failed, errno 2

So I guess I should wait for the scrub to complete. I cannot recall a scrub ever being this slow.
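
(Errno 2 is ENOENT, so the other thing to rule out before blaming the pool is that /root is not actually backed by tank/root but by some other dataset; a quick sanity check, using only standard tools:)

	# confirm which dataset actually backs /root
	df /root
	zfs list -o name,mountpoint | awk '$2 == "/root"'

	# then point zdb at whatever dataset df reports, e.g.
	# zdb -ddddd tank/root 7426414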

Dw.




