ZFS L2ARC hit ratio

Artem Belevich art at freebsd.org
Tue Jun 21 22:15:54 UTC 2011


On Tue, Jun 21, 2011 at 12:59 PM, Wiktor Niesiobedzki <bsd at vink.pl> wrote:
> Hi,
>
> I've recently migrated my 8.2 box to a recent -STABLE:
> FreeBSD kadlubek.vink.pl 8.2-STABLE FreeBSD 8.2-STABLE #22: Tue Jun  7
> 03:43:29 CEST 2011     root at kadlubek:/usr/obj/usr/src/sys/KADLUB  i386
>
> And I upgraded my ZFS/zpool versions to the newest ones. Through my
> monitoring I've noticed some decline in the L2ARC hit ratio (the server is
> not busy, so it doesn't look that suspicious). I ran some tests
> today and I suspect there might be a problem:
>
> I did the following on a cold cache:
> sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.l2_hits \
>     kstat.zfs.misc.arcstats.misses kstat.zfs.misc.arcstats.l2_misses &&
> cat 4gb_file > /dev/null &&
> sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.l2_hits \
>     kstat.zfs.misc.arcstats.misses kstat.zfs.misc.arcstats.l2_misses
>
> After computing the differences I got:
> kstat.zfs.misc.arcstats.hits    1213775
> kstat.zfs.misc.arcstats.l2_hits 21
> kstat.zfs.misc.arcstats.misses  37364
> kstat.zfs.misc.arcstats.l2_misses       37343
>
> That's pretty normal. After that, I noticed L2ARC usage grew by
> 4 GB, but when I do the same operation again, the results are
> worrying:
> kstat.zfs.misc.arcstats.hits    1188662
> kstat.zfs.misc.arcstats.l2_hits 305
> kstat.zfs.misc.arcstats.misses  36933
> kstat.zfs.misc.arcstats.l2_misses       36628
>
> More or less the same.
>
> I did some gstat'ing during these tests and noticed around 2
> reads per second from my cache device, accounting for about 32 kB per
> second. Not that much.
>
> My first guess is that for some reason we decide the L2ARC record is
> outdated and thus don't use it at all.
>
> Any clues why L2ARC isn't kicking in at all in this situation? I do see
> some substantial (like 5-10%) L2ARC hits during the cron jobs,
> but this simple scenario just fails...
>
> For the record, here are some other details:
> %zfs get all tank
> NAME  PROPERTY              VALUE                  SOURCE
> tank  type                  filesystem             -
> tank  creation              Sat Dec  5  3:37 2009  -
> tank  used                  572G                   -
> tank  available             343G                   -
> tank  referenced            441G                   -
> tank  compressratio         1.00x                  -
> tank  mounted               yes                    -
> tank  quota                 none                   default
> tank  reservation           none                   default
> tank  recordsize            128K                   default
> tank  mountpoint            /tank                  default
> tank  sharenfs              off                    default
> tank  checksum              on                     default
> tank  compression           off                    default
> tank  atime                 off                    local
> tank  devices               on                     default
> tank  exec                  on                     default
> tank  setuid                on                     default
> tank  readonly              off                    default
> tank  jailed                off                    default
> tank  snapdir               hidden                 default
> tank  aclinherit            restricted             default
> tank  canmount              on                     default
> tank  xattr                 off                    temporary
> tank  copies                1                      default
> tank  version               5                      -
> tank  utf8only              off                    -
> tank  normalization         none                   -
> tank  casesensitivity       sensitive              -
> tank  vscan                 off                    default
> tank  nbmand                off                    default
> tank  sharesmb              off                    default
> tank  refquota              none                   default
> tank  refreservation        none                   default
> tank  primarycache          all                    default
> tank  secondarycache        all                    default
> tank  usedbysnapshots       0                      -
> tank  usedbydataset         441G                   -
> tank  usedbychildren        131G                   -
> tank  usedbyrefreservation  0                      -
> tank  logbias               latency                default
> tank  dedup                 off                    default
> tank  mlslabel                                     -
> tank  sync                  standard               default
>
> %zpool status tank
>  pool: tank
>  state: ONLINE
>  scan: scrub repaired 0 in 7h23m with 0 errors on Wed Jun 15 07:53:29 2011
> config:
>
>  NAME                                          STATE     READ WRITE CKSUM
>  tank                                          ONLINE       0     0     0
>   raidz1-0                                    ONLINE       0     0     0
>     ad6.eli                                   ONLINE       0     0     0
>     ad8.eli                                   ONLINE       0     0     0
>     ad10.eli                                  ONLINE       0     0     0
>  cache
>   gptid/7644bfda-e141-11de-951e-004063f2d074  ONLINE       0     0     0
>
> errors: No known data errors
>
>
> Cheers,
>
> Wiktor Niesiobedzki

L2ARC is filled with items evicted from the ARC. The catch is that L2ARC
writes are intentionally throttled. When the L2ARC is empty, writes happen
at a higher rate, but it is still kept deliberately low so that the
read-optimized cache device does not wear out too soon. The bottom
line is that not all of the data spilled out of the ARC ends up in the
L2ARC on the first try. Re-run your experiment and you will probably see
some improvement in L2ARC hit rates.
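
A rough way to confirm this is to watch how much data actually lands on
the cache device between passes, e.g. via the arcstats counters you are
already sampling plus l2_size (the number of bytes currently in the
L2ARC); exact names may vary slightly between versions:

  # sample L2ARC size and hit counters, read the file, then sample again
  sysctl kstat.zfs.misc.arcstats.l2_size \
         kstat.zfs.misc.arcstats.l2_hits \
         kstat.zfs.misc.arcstats.l2_misses
  cat 4gb_file > /dev/null
  sysctl kstat.zfs.misc.arcstats.l2_size \
         kstat.zfs.misc.arcstats.l2_hits \
         kstat.zfs.misc.arcstats.l2_misses

If l2_size keeps growing on every pass, the write throttle is the likely
explanation: the data simply has not all made it onto the cache device yet.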

You can tune the following sysctls, which control the L2ARC write speed:
vfs.zfs.l2arc_write_boost: 8388608
vfs.zfs.l2arc_write_max: 8388608
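
If you decide to raise them, the idea is simply this (a sketch only;
whether they are writable at runtime depends on your ZFS version, and see
the caution below before changing anything):

  # allow up to 16 MB per feed interval instead of the 8 MB default;
  # write_boost is the extra allowance used while the L2ARC is still cold
  sysctl vfs.zfs.l2arc_write_max=16777216
  sysctl vfs.zfs.l2arc_write_boost=16777216

  # to keep the settings across reboots, put the same values
  # into /etc/sysctl.conf:
  #   vfs.zfs.l2arc_write_max=16777216
  #   vfs.zfs.l2arc_write_boost=16777216

Doubling them should be enough to tell whether the throttle is what you
are running into.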

Word of caution -- before you tweak these, check the total amount of
writes your SSD can handle and how long it would take L2ARC writes to
reach that figure. I recently discovered that on one of my boxes a
160GB X-25M (G2) reached its official write limit in about three
months.
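
Back-of-the-envelope, with a made-up endurance number just to show the
arithmetic (look up the actual total-write rating for your own SSD):

  worst case at the 8 MB cap, assuming the default one-second feed interval:
    8 MiB/s * 86400 s/day  =  675 GiB/day written to the cache device
  against, say, a 36 TiB lifetime write rating (hypothetical figure):
    36 TiB / 675 GiB/day  ~=  55 days

In practice the feed thread rarely writes the full amount every second,
but the numbers show why the default is deliberately conservative.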

--Artem

