[ZFS] ARC accounting bug ?
Ben RUBSON
ben.rubson at gmail.com
Sat Aug 27 16:15:22 UTC 2016
> On 27 Aug 2016, at 07:22, Shane Ambler <FreeBSD at ShaneWare.Biz> wrote:
>
> On 26/08/2016 19:09, Ben RUBSON wrote:
>> Hello,
>>
>> Before opening a bug report, I would like to know whether what I see
>> is "normal" or not, and why.
>>
>> ### Test :
>>
>> # zpool import mypool
>> # zfs set primarycache=metadata mypool
>
> Well, that sets primarycache for the pool and all datasets that
> inherit the property. Do any sub-filesystems have local settings?
No.
> zfs get -r primarycache mypool
>
> And mypool is the only zpool on the machine?
Yes.
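(A quick # zpool list confirms it, listing only mypool.)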
>> # while [ 1 ]; do find /mypool/ >/dev/null; done
>>
>> # zfs-mon -a
>>
>> ZFS real-time cache activity monitor
>> Seconds elapsed: 162
>>
>> Cache hits and misses:
>>                                  1s     10s     60s     tot
>> ARC hits:                     79228   76030   73865   74953
>> ARC misses:                   22510   22184   21647   21955
>> ARC demand data hits:             0       0       0       0
>> ARC demand data misses:           4       7       8       7
>> ARC demand metadata hits:     79230   76030   73865   74953
>> ARC demand metadata misses:   22506   22177   21639   21948
>> ZFETCH hits:                     47      29      32      31
>> ZFETCH misses:               101669   98138   95433   96830
>>
>> Cache efficiency percentage:
>>                          10s     60s     tot
>> ARC:                   77.41   77.34   77.34
>> ARC demand data:        0.00    0.00    0.00
>> ARC demand metadata:   77.42   77.34   77.35
>> ZFETCH:                 0.03    0.03    0.03
>>
>> ### Question :
>>
>> I don't understand why I have so many ARC misses. There is no other
>> activity on the server (as soon as I stop the find loop, there are no
>> more ARC hits). Once the first pass of the find loop has completed,
>> there is no more disk activity (according to zpool iostat -v 1), and
>> no read/write operations on mypool.
>> So I'm pretty sure all the metadata comes from the ARC.
>> So why are there so many ARC misses?
>
> Running zfs-mon on my desktop, I seem to get similar results.
Thank you for testing it, Shane.
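For reference, zfs-mon's numbers can be cross-checked against the raw
ARC counters; if I am not mistaken, they come from the arcstats kstat
sysctls. For example, on a FreeBSD system exposing
kstat.zfs.misc.arcstats:

# sysctl kstat.zfs.misc.arcstats.hits \
         kstat.zfs.misc.arcstats.misses \
         kstat.zfs.misc.arcstats.demand_metadata_hits \
         kstat.zfs.misc.arcstats.demand_metadata_misses

Sampling these counters twice, a few seconds apart, should give the
same hit/miss deltas that zfs-mon reports.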
> What I am seeing leads me to think that not all metadata is cached;
> maybe the filename isn't cached, which can be a large string.
>
> while [ 1 ]; do find /usr/ports > /dev/null; done
>
> will list the path to every file and I see about 2 hits to a miss, yet
>
> while [ 1 ]; do ls -lR /usr/ports > /dev/null; done
>
> lists every filename as well as its size, mod date, owner, and
> permissions, and it sits closer to 4 hits to every miss.
>
> And if the system disk cache contains the filenames that ZFS isn't caching, we won't need disk access to service the ZFS misses.
Playing with these commands:
# dtrace -n 'sdt:zfs::arc-hit {@[execname, stack()] = count();}'
# dtrace -n 'sdt:zfs::arc-miss {@[execname, stack()] = count();}'
We can see that it is readdir calls which produce the arc-misses, and that readdir calls also produce arc-hits.
It would be interesting to know why some lead to hits and some lead to misses.
(Note that ls -lR and rsync commands produce exactly the same dtrace results/numbers as the find command.)
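To keep watching both probes side by side while the find loop runs, a
rough sketch (assuming the same sdt:zfs probes as above) could be:

# dtrace -n '
sdt:zfs::arc-hit  { @["arc-hit"]  = count(); }
sdt:zfs::arc-miss { @["arc-miss"] = count(); }
tick-10s { printa(@); trunc(@); }'

This prints the hit/miss counts every 10 seconds and then resets the
aggregation, which makes it easy to compare against zfs-mon's 10s
column.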
Ben