cvs commit: src/sys/ufs/ufs dirhash.h ufs_dirhash.c
David Malone
dwmalone at maths.tcd.ie
Fri Apr 11 12:59:57 UTC 2008
> Now, instead of freeing a recycled dirhash pointer in ip->i_dirhash, I lock
> it and re-create it. This makes various races simpler. The dirhash has
> to be minimally constructed for dirhash_free_locked to function correctly
> if we bail out early.
Ah - OK - I think that probably partially explains your accounting
problems, mentioned below, but it may not be worth fixing. It is
probably worth looking over to make sure that it isn't just hiding
some other problem. Dirhash has been quite good at finding bugs in
other bits of code.
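
The bail-out case you mention is roughly this shape, if I model it in
userspace C (pthread mutexes standing in for the kernel locks; the struct
layout and function names here are invented, not the actual ufs_dirhash.c
symbols):

#include <pthread.h>
#include <stdlib.h>

struct dirhash {
        pthread_mutex_t dh_lock;
        void           *dh_hash;        /* hash array; NULL until built */
};

/*
 * Minimal construction: just enough state that a later free-while-locked
 * works even if we bail out before the hash array is ever allocated.
 */
static struct dirhash *
dirhash_create_locked(void)
{
        struct dirhash *dh;

        dh = calloc(1, sizeof(*dh));
        if (dh == NULL)
                return (NULL);
        pthread_mutex_init(&dh->dh_lock, NULL);
        pthread_mutex_lock(&dh->dh_lock);       /* returned locked */
        return (dh);
}

static void
dirhash_free_locked(struct dirhash *dh)
{
        free(dh->dh_hash);      /* safe even when never allocated */
        pthread_mutex_unlock(&dh->dh_lock);
        pthread_mutex_destroy(&dh->dh_lock);
        free(dh);
}
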
> The conditions in ufsdirhash_build() which result in creating a new
> directory hash remain the same, except we are more aggressive about
> reclaiming when the sysctl is lowered to ease testing.
I suspected that, but was having trouble seeing it.
> In the normal ufs_lookup() path we enter 3 dirhash functions which can now
> assume the lock is held rather than locking/unlocking. And in lookup in
> particular we are no longer required to drop and reacquire the lock,
> potentially multiple times. I'm sure it's a net reduction in lock
> operations. That really wasn't the primary motivation, however. I mostly
> wanted to enable shared locking of directories. I doubt the number of mtx
> operations is dominating the performance of directory operations in any
> event.
Agreed.
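
To make the calling convention concrete: the shape of the change is
something like the following sketch (pthreads again; dh_lookup, dh_update
and dh_done are made-up stand-ins for the three dirhash functions on the
lookup path, not the real names):

#include <pthread.h>

struct dirhash {
        pthread_mutex_t dh_lock;
};

/* These now assume the caller already holds dh_lock. */
static int  dh_lookup(struct dirhash *dh) { (void)dh; return (0); }
static void dh_update(struct dirhash *dh) { (void)dh; }
static void dh_done(struct dirhash *dh)   { (void)dh; }

static int
lookup(struct dirhash *dh)
{
        int error;

        /*
         * One lock/unlock pair around all three calls, instead of each
         * helper locking and unlocking internally and the caller having
         * to drop and re-take the mutex between steps.
         */
        pthread_mutex_lock(&dh->dh_lock);
        error = dh_lookup(dh);
        if (error == 0)
                dh_update(dh);
        dh_done(dh);
        pthread_mutex_unlock(&dh->dh_lock);
        return (error);
}
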
> >
> > In the case of multiple directories, I guess we'll have two
> > mutex locks replaced by one mutex and one lockmanager
> > (shared) lock? This is probably the usual case, which isn't
> > too bad.
> I'm not sure I understand this.
I was probably thinking out loud - please feel free to ignore it ;-)
> > should actually be kept. Actually, this probably isn't
> > possible because the list lock is dropped while we're doing
> > the free?
> Because we drop the list lock we have to restart the processing from the
> beginning.
OK - understood.
> I guess it is a risk that we could have many locked dirhashes
> that we'll have to skip, causing the recycle loop to potentially take a
> long time. However, the old code was strange here as well. It'd only
> examine the head of the list and only recycle if the score reached 0.
That is what it was designed to do. It only checked the head of
the list because the objects that are candidates for recycling
are at the head of the list. The idea was to keep plucking things
from the head until we got far enough down that the objects looked
like they were still active.
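
For reference, the skip-and-restart loop we are talking about is roughly
the following (a schematic with invented names, not the committed code):
walk the list skipping entries someone else holds locked, and after each
free go back to the head, since the list may have changed while the list
lock was dropped.

#include <pthread.h>
#include <stdlib.h>
#include <sys/queue.h>

struct dirhash {
        pthread_mutex_t         dh_lock;
        TAILQ_ENTRY(dirhash)    dh_list;
};

static TAILQ_HEAD(, dirhash) dh_recycle_list =
    TAILQ_HEAD_INITIALIZER(dh_recycle_list);
static pthread_mutex_t dh_list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Tear down a dirhash; the entry's own lock is held on entry. */
static void
dirhash_free_locked(struct dirhash *dh)
{
        pthread_mutex_unlock(&dh->dh_lock);
        pthread_mutex_destroy(&dh->dh_lock);
        free(dh);
}

static void
dirhash_recycle(void)
{
        struct dirhash *dh;

restart:
        pthread_mutex_lock(&dh_list_lock);
        TAILQ_FOREACH(dh, &dh_recycle_list, dh_list) {
                /* Skip dirhashes currently locked by a user. */
                if (pthread_mutex_trylock(&dh->dh_lock) != 0)
                        continue;
                TAILQ_REMOVE(&dh_recycle_list, dh, dh_list);
                /*
                 * The free can't happen with the list lock held, and
                 * once it's dropped the list may change under us, so
                 * restart from the head afterwards.
                 */
                pthread_mutex_unlock(&dh_list_lock);
                dirhash_free_locked(dh);
                goto restart;
        }
        pthread_mutex_unlock(&dh_list_lock);
}
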
> Really we could just make this a simple LRU and be done with it. The
> score seems redundant.
If I remember correctly, that isn't a good idea. I think Ian found
that pure LRU didn't work that well for recycling, which is why the
hybrid LRU/LOU scheme is in place. LRU resulted in thrashing when
a large number of directories were being accessed at the same time.
Roughly what happened was you would access the directory, build the
whole hash, use it once and then discard it before another access.
Since building the whole hash is a little more costly than a single
linear access, this resulted in a net loss. By ensuring that
established hashes are used for slightly longer, you could avoid
this hit.
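
The policy can be written down in a few lines; something like the
following captures the idea (the constants and names here are
illustrative, not the actual ufs_dirhash.c ones):

#define DH_SCORE_INIT   8       /* credit a fresh hash starts with */
#define DH_SCORE_MAX    64      /* cap, so old use eventually ages out */

struct dirhash_policy {
        int dh_score;
};

/* A new hash gets enough credit to survive a few reclaim passes. */
static void
dirhash_init_score(struct dirhash_policy *dh)
{
        dh->dh_score = DH_SCORE_INIT;
}

/* Each successful use bumps the score, up to the cap. */
static void
dirhash_used(struct dirhash_policy *dh)
{
        if (dh->dh_score < DH_SCORE_MAX)
                dh->dh_score++;
}

/*
 * Called on the list head when memory is wanted; returns 1 when the
 * hash may be freed.  Pure LRU would return 1 unconditionally, which
 * is exactly the build-use-once-discard thrashing described above.
 */
static int
dirhash_may_recycle(struct dirhash_policy *dh)
{
        if (dh->dh_score > 0) {
                dh->dh_score--;
                return (0);
        }
        return (1);
}
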
David.