r273165. ZFS ARC: possible memory leak to Inact
Dmitriy Makarov
supportme at ukr.net
Tue Nov 4 12:19:27 UTC 2014
Hi Current,
It seems there is a constant flow (leak) of memory from the ARC to Inact in FreeBSD 11.0-CURRENT #0 r273165.
Normally, our system (FreeBSD 11.0-CURRENT #5 r260625) keeps the ARC size very close to vfs.zfs.arc_max:
Mem: 16G Active, 324M Inact, 105G Wired, 1612M Cache, 3308M Buf, 1094M Free
ARC: 88G Total, 2100M MFU, 78G MRU, 39M Anon, 2283M Header, 6162M Other
But after an upgrade (to FreeBSD 11.0-CURRENT #0 r273165) we observe an enormous amount of Inact memory in top:
Mem: 21G Active, 45G Inact, 56G Wired, 357M Cache, 3308M Buf, 1654M Free
ARC: 42G Total, 6025M MFU, 30G MRU, 30M Anon, 819M Header, 5214M Other
The funny thing is that when we manually allocate and release memory using a simple Python script:
#!/usr/local/bin/python2.7
import sys

if len(sys.argv) != 2:
    print "usage: fillmem <number-of-megabytes>"
    sys.exit()

count = int(sys.argv[1])
# A tuple of 128K pointers is roughly 1 MiB on amd64 (8-byte pointers).
megabyte = (0,) * (1024 * 1024 / 8)
# Hold 'count' megabytes until the interpreter exits and frees them.
data = megabyte * count
invoked as:
# ./simple_script 10000
all those allocated megabytes 'migrate' from Inact to Free, and afterwards they are 'eaten' by the ARC with no problem, until Inact slowly grows back to the level it was at before we ran the script.
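For anyone who wants to reproduce the observation, here is a minimal sketch (assuming the stock vm.stats sysctls) of how we watch Inact and Free shrink and grow around a run of the script:

#!/usr/local/bin/python2.7
# Sketch: print the Inact and Free page counters in MiB.
# Run it before and after the fill script to see the pages migrate.
import subprocess

def sysctl(name):
    # sysctl -n prints just the value; the names below are stock FreeBSD.
    return int(subprocess.check_output(["sysctl", "-n", name]))

page = sysctl("hw.pagesize")
for name in ("vm.stats.vm.v_inactive_count", "vm.stats.vm.v_free_count"):
    print "%s: %d MiB" % (name, sysctl(name) * page / (1024 * 1024))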
Our current workaround is to invoke this Python script periodically from cron.
This is an ugly workaround, and we really don't like it on our production systems.
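For reference, the cron entry is essentially the following (the interval, path, and script name here are illustrative, not our exact values):

# /etc/crontab entry: run the fill script at the top of every hour
0 * * * * root /usr/local/bin/python2.7 /root/fillmem.py 10000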
To answer possible questions about ARC efficiency:
Cache efficiency drops dramatically with every GiB pushed off the ARC.
Before upgrade:
Cache Hit Ratio: 99.38%
After upgrade:
Cache Hit Ratio: 81.95%
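Those ratios come from our ARC stats tool; a sketch of computing the same number straight from the stock kstat.zfs.misc.arcstats counters, using the usual hits / (hits + misses) formula:

#!/usr/local/bin/python2.7
# Sketch: ARC hit ratio from the arcstats sysctls.
import subprocess

def sysctl(name):
    return int(subprocess.check_output(["sysctl", "-n", name]))

hits = sysctl("kstat.zfs.misc.arcstats.hits")
misses = sysctl("kstat.zfs.misc.arcstats.misses")
print "Cache Hit Ratio: %.2f%%" % (100.0 * hits / (hits + misses))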
We believe the ARC misbehaves, and we ask for your assistance.
----------------------------------
Some values from our configs:
HW: 128GB RAM, LSI HBA controller with 36 disks (stripe of mirrors).
In /boot/loader.conf :
vm.kmem_size="110G"
vfs.zfs.arc_max="90G"
vfs.zfs.arc_min="42G"
vfs.zfs.txg.timeout="10"
-----------------------------------
Thanks.
Regards,
Dmitriy