kern/187594: [zfs] [patch] ZFS ARC behavior problem and fix

Karl Denninger karl at denninger.net
Thu Mar 27 11:52:49 UTC 2014


On 3/27/2014 4:11 AM, mikej wrote:
> I've been running the latest patch now on r263711 and want to give it 
> a +1
>
> No ZFS knobs set and I must go out of my way to have my system swap.
>
> I hope this patch gets a much wider review and can be put into the
> tree permanently.
>
> Karl, thanks for working on this.
>
> Regards,
>
> Michael Jung
No problem; I was being driven insane by the stalls and related bad 
behavior... and there's that old saw about complaining about something 
without proposing a fix (I've done it!) being "less than optimum," 
so.... :-)

Hopefully wider review (and, if the general consensus is similar to what 
I've seen here and what you're reporting as well, inclusion in the 
codebase) will come.

On my sandbox system I have to get truly abusive before I can get the 
system to swap now, but that load is synthetic and we all know what 
sometimes happens when you try to extrapolate from synthetic loads to 
real production ones.

What really has my attention is the impact on systems running live 
production loads.

It has entirely changed the character of those machines, working 
equally well for both pure ZFS machines and mixed UFS/ZFS systems.  One 
of these systems gets pounded on pretty hard and has a moderately large 
configuration: ~10TB of storage, two quad-core Xeon processors and 24GB 
of RAM, serving a combination of internal Samba users, a decently large 
Postgres installation supporting an externally-facing web forum and 
blog application, email and similar things.  It has been completely 
transformed from being "frequently challenged" by its workload to 
literally loafing 90%+ of the day.  DBMS response times have seen their 
standard deviation drop by an order of magnitude, with best-case 
response times for one of the most common query sequences (~30 separate 
ops) down from ~180ms to ~140ms.

This particular machine has a separate pool for the system itself 
(root, usr and var), which was formerly UFS because it had to be in 
order to avoid the worst of the "stall" behavior.  It also has two 
other pools on it: one for nearly-read-only data sets made up of very 
large, almost archival files, and a second holding the system's 
"working set".  The latter has a separate intent log; I had a cache SSD 
on it as well, but recently dropped that because, with these changes, 
it no longer produces a material improvement in performance.  I'm 
frankly not sure the intent log is helping any more either, but I've 
yet to drop it and instrument the results -- it used to be *necessary* 
to avoid nasty problems during busy periods.
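
If anyone wants to repeat that experiment, it's just the usual zpool 
surgery; the pool and device names below are placeholders, not the 
real ones on this box:

    # Detach the L2ARC (cache) SSD that no longer earns its keep:
    zpool remove workpool gpt/l2arc0

    # Then watch per-vdev traffic for a while to judge whether the
    # separate intent log (SLOG) is still absorbing synchronous writes:
    zpool iostat -v workpool 5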

I now have that machine set up booting from ZFS, with the system on a 
mirrored pool dedicated to system images, with lz4 *and* dedup on (for 
that filesystem's root).  That allows me to clone it almost instantly, 
start a jail on the clone and then do a "make -j8 buildworld 
buildkernel" while only allocating storage for the actual changes.  
Dedup ratio on that mirror set is 1.4x and lz4 is showing a net 
compression ratio of 2.01x.  Even better, I cannot provoke misbehavior 
by doing this sort of thing in the middle of the day, where formerly 
that was just begging for trouble; the impact on user-perceptible 
performance during it is zero, although I can see the degradation (a 
modest increase in system latency) in the stats.
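
The workflow itself is nothing exotic -- roughly the following, with 
hypothetical dataset and jail names standing in for the real ones:

    # Snapshot the system image and clone it (allocates almost nothing):
    zfs snapshot zroot/images/base@ref
    zfs clone -o mountpoint=/images/build zroot/images/base@ref \
        zroot/images/build

    # Start a throwaway jail on the clone and build inside it:
    jail -c name=build path=/images/build mount.devfs persist
    jexec build sh -c "cd /usr/src && make -j8 buildworld buildkernel"
    jail -r build

    # Space accounting -- only the changed blocks get new allocations:
    zfs get compressratio zroot/images
    zpool get dedupratio zroot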

Oh, did I mention that everything except the boot/root/usr/var 
filesystems (including swap) is geli-encrypted on this machine as well, 
and that the nightly PC backup jobs bury the gigabit interface to which 
they're attached -- and sustain that throughput against the ZFS disks 
for the duration?  (The machine does have AESNI loaded....)
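
For anyone curious whether the hardware crypto is actually in play, 
the quick checks are along these lines:

    kldstat -n aesni.ko       # is the AESNI driver loaded?
    geli list | grep Crypto   # should report "Crypto: hardware" per provider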

Finally, swap allocation remains at zero throughout all of this.
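
That's easy to verify at any point during the day:

    swapinfo                        # zero blocks of swap in use
    vmstat -s | grep "swap pager"   # cumulative pager activity stays flat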

At present, coming off the overnight period (which has an activity 
spike for routine in-house backups from connected PCs but is otherwise 
the "low point" of activity), the machine shows 1GB of free memory and 
an "auto-tuned" ARC of 12.9GB (against a maximum size of 22.3GB), and 
inactive pages have remained stable.  Wired memory is almost 19GB, with 
Postgres using a sizable chunk of it.  Cache efficiency is claimed to 
be 98.9% (!)  That'll go down somewhat over the day, but during the 
busiest part of the day it remains well into the 90s, which I'm sure 
has a heck of a lot to do with the performance improvements....
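
For anyone who wants to watch the ARC side of this on their own box, 
the stock arcstats kstats are enough; something along these lines (no 
patch-specific knobs assumed):

    # ARC size and configured cap:
    sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max

    # Overall hit ratio from the cumulative counters:
    sysctl -n kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses | \
        awk 'NR==1 {h=$1} NR==2 {m=$1} \
             END {printf "ARC hit ratio: %.1f%%\n", 100*h/(h+m)}'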

Cross-posted over to -STABLE in the hope of expanding review and testing 
by others.

-- 
-- Karl
karl at denninger.net

