ZFS L2ARC hit ratio
Wiktor Niesiobedzki
bsd at vink.pl
Tue Jun 21 20:22:32 UTC 2011
Hi,
I've recently migrated my 8.2 box to a recent stable:
FreeBSD kadlubek.vink.pl 8.2-STABLE FreeBSD 8.2-STABLE #22: Tue Jun 7
03:43:29 CEST 2011 root at kadlubek:/usr/obj/usr/src/sys/KADLUB i386
I also upgraded my ZFS/ZPOOL to the newest versions. Since then, my
monitoring has shown a decline in the L2ARC hit ratio (the server is
not busy, so it doesn't look that suspicious). I ran some tests today
and I suspect there might be a problem.
I did the following on a cold cache:
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.l2_hits \
    kstat.zfs.misc.arcstats.misses kstat.zfs.misc.arcstats.l2_misses && \
cat 4gb_file > /dev/null && \
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.l2_hits \
    kstat.zfs.misc.arcstats.misses kstat.zfs.misc.arcstats.l2_misses
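(The subtraction is done by hand afterwards; a small sketch along these
lines, using the same four arcstats OIDs and an example file path,
could automate it:)

#!/bin/sh
# Sketch: snapshot the four arcstats counters, read the file once,
# then print the per-counter delta.
OIDS="kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.l2_hits"
OIDS="$OIDS kstat.zfs.misc.arcstats.misses kstat.zfs.misc.arcstats.l2_misses"

before=$(sysctl -n $OIDS)     # one value per line, in OID order
cat 4gb_file > /dev/null      # the read being measured
after=$(sysctl -n $OIDS)

i=1
for oid in $OIDS; do
    b=$(echo "$before" | sed -n "${i}p")
    a=$(echo "$after" | sed -n "${i}p")
    echo "$oid $((a - b))"
    i=$((i + 1))
done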
After computing the differences, I got:
kstat.zfs.misc.arcstats.hits 1213775
kstat.zfs.misc.arcstats.l2_hits 21
kstat.zfs.misc.arcstats.misses 37364
kstat.zfs.misc.arcstats.l2_misses 37343
That's pretty normal. Afterwards I noticed that L2ARC usage had grown
by 4 GB, but when I ran the same operation again, the results were
worrying:
kstat.zfs.misc.arcstats.hits 1188662
kstat.zfs.misc.arcstats.l2_hits 305
kstat.zfs.misc.arcstats.misses 36933
kstat.zfs.misc.arcstats.l2_misses 36628
More or less the same as before.
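(To double-check that the data is actually resident on the cache
device after the second run, the L2ARC size counters could be
inspected; assuming these counter names exist on this ZFS version:

sysctl kstat.zfs.misc.arcstats.l2_size kstat.zfs.misc.arcstats.l2_hdr_size
)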
I did some gstat-ing during these tests and noticed around 2 reads per
second from my cache device, accounting for about 32 KB per second.
Not that much.
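(For reference, the cache device can be watched on its own with a
filter, e.g. something like the following, where the gptid is the
cache device from zpool status below:

gstat -I 1s -f 'gptid/7644bfda'
)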
My first guess is that, for some reason, the L2ARC record is being
treated as outdated and thus not used at all.
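(If that were the case, I'd expect it to show up in the L2ARC
error/eviction counters; the counter names below are from memory, so
treat this as a sketch:

sysctl kstat.zfs.misc.arcstats | grep -E 'l2_(cksum_bad|io_error|abort_lowmem|evict)'
)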
Any clues why L2ARC isn't kicking in at all in this situation? I do
notice some substantial (5-10%) hits from L2ARC during cron jobs, but
this simple scenario is just failing...
For the record, below are some other details:
%zfs get all tank
NAME  PROPERTY              VALUE                SOURCE
tank  type                  filesystem           -
tank  creation              Sat Dec 5 3:37 2009  -
tank  used                  572G                 -
tank  available             343G                 -
tank  referenced            441G                 -
tank  compressratio         1.00x                -
tank  mounted               yes                  -
tank  quota                 none                 default
tank  reservation           none                 default
tank  recordsize            128K                 default
tank  mountpoint            /tank                default
tank  sharenfs              off                  default
tank  checksum              on                   default
tank  compression           off                  default
tank  atime                 off                  local
tank  devices               on                   default
tank  exec                  on                   default
tank  setuid                on                   default
tank  readonly              off                  default
tank  jailed                off                  default
tank  snapdir               hidden               default
tank  aclinherit            restricted           default
tank  canmount              on                   default
tank  xattr                 off                  temporary
tank  copies                1                    default
tank  version               5                    -
tank  utf8only              off                  -
tank  normalization         none                 -
tank  casesensitivity       sensitive            -
tank  vscan                 off                  default
tank  nbmand                off                  default
tank  sharesmb              off                  default
tank  refquota              none                 default
tank  refreservation        none                 default
tank  primarycache          all                  default
tank  secondarycache        all                  default
tank  usedbysnapshots       0                    -
tank  usedbydataset         441G                 -
tank  usedbychildren        131G                 -
tank  usedbyrefreservation  0                    -
tank  logbias               latency              default
tank  dedup                 off                  default
tank  mlslabel              -
tank  sync                  standard             default
%zpool status tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 7h23m with 0 errors on Wed Jun 15 07:53:29 2011
config:

        NAME                                          STATE     READ WRITE CKSUM
        tank                                          ONLINE       0     0     0
          raidz1-0                                    ONLINE       0     0     0
            ad6.eli                                   ONLINE       0     0     0
            ad8.eli                                   ONLINE       0     0     0
            ad10.eli                                  ONLINE       0     0     0
        cache
          gptid/7644bfda-e141-11de-951e-004063f2d074  ONLINE       0     0     0

errors: No known data errors
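(As a cross-check, activity on the cache device should also show up in
zpool iostat, e.g.:

zpool iostat -v tank 1
)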
Cheers,
Wiktor Niesiobedzki