Performance constraints of HUGE directories on UFS2? Any O(F(N)) values?
Dan Nelson
dnelson at allantgroup.com
Mon May 19 08:20:06 PDT 2003
In the last episode (May 19), Gabriel Ambuehl said:
> Hi,
> I was wondering how bad the performance penalties of really large
> directories (say >20K entries) on UFS2 are. The reason I'm asking is
> that I'd like to know whether I need to split up such big directories
> (along the lines of $firstchar/$secondchar/$filename) or whether UFS2
> performs well enough that I don't have to care.
>
> I guess what I'm really after is an O(F(N)) value of some kind (I
> haven't yet decided which one would be good enough, though; I suppose
> I'd like to hear it's O(log(N)), in which case I don't need to bother
> splitting the dirs ;-).
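The $firstchar/$secondchar/$filename split described above can be sketched in portable shell; the filename used here is purely hypothetical:

```shell
#!/bin/sh
# Hypothetical sketch of the $firstchar/$secondchar/$filename scheme
# the poster describes; the filename is made up for illustration.
f="report.txt"
c1=$(printf '%s' "$f" | cut -c1)   # first character:  "r"
c2=$(printf '%s' "$f" | cut -c2)   # second character: "e"
dir="$c1/$c2"
mkdir -p "$dir"                    # create r/e/ if it doesn't exist
echo "$dir/$f"                     # where the file would live: r/e/report.txt
```

With 20K entries spread over a two-level split like this, each leaf directory stays small, so even a filesystem with linear directory lookup never scans many entries.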
I think "options UFS_DIRHASH" in your kernel config is what you want.
It builds an in-memory hash table for large directories, so lookups no
longer need a linear scan of the directory entries.
http://www.cnri.dit.ie/Downloads/fsopt.pdf has some benchmark results,
and in certain cases dirhash helps considerably.
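For reference, a sketch of enabling and inspecting dirhash, assuming a FreeBSD system of that era; the sysctl names below are the standard vfs.ufs.dirhash_* knobs, but check them against your release:

```shell
# In the kernel configuration file (e.g. /sys/i386/conf/MYKERNEL), add:
#   options UFS_DIRHASH
# then rebuild and reboot into the new kernel.
#
# On a running system, the hash behaviour can be inspected via sysctl:
sysctl vfs.ufs.dirhash_minsize   # smallest directory (bytes) that gets hashed
sysctl vfs.ufs.dirhash_maxmem    # memory cap for all directory hashes combined
sysctl vfs.ufs.dirhash_mem       # memory currently used by dirhash
```

Raising dirhash_maxmem is the usual tweak if many large directories are in active use at once, since hashes are discarded when the cap is hit.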
--
Dan Nelson
dnelson at allantgroup.com