kernel memory allocator: UMA or malloc?
Garrett Wollman
wollman at bimajority.org
Fri Mar 14 02:06:49 UTC 2014
<<On Thu, 13 Mar 2014 18:50:21 -0700, John-Mark Gurney <jmg at funkthat.com> said:
> So, this is where a UMA half alive object could be helpful... Say that
> you always need to allocate an iovec + 8 mbuf clusters to populate the
> iovec... What you can do is have a uma uminit function that allocates
> the memory for the iovec and 8 mbuf clusters, and populates the iovec
> w/ the correct addresses... Then when you call uma_zalloc, the iovec
> is already initialized, and you just go on your merry way instead of
> doing all that work... when you uma_zfree, you don't have to worry
> about losing the clusters as the next uma_zalloc might return the
> same object w/ the clusters already present... When the system gets
> low on memory, it will call your fini function which will need to
> free the clusters....
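For concreteness, here's a rough, untested sketch of the zone setup I
take you to mean (all names below are made up): a zone whose init
pre-attaches eight clusters to each item and whose fini only gives
them back when UMA reclaims the item, e.g. under memory pressure.

#include <sys/param.h>
#include <sys/errno.h>
#include <sys/kernel.h>
#include <sys/mbuf.h>
#include <sys/uio.h>
#include <vm/uma.h>

#define NFSIOV_NCL	8

struct nfsiov_cached {
	struct iovec	nc_iov[NFSIOV_NCL];
	struct mbuf	*nc_m[NFSIOV_NCL];
};

static uma_zone_t nfsiov_zone;

/* Runs once when UMA backs a new item with memory; attaches clusters. */
static int
nfsiov_init(void *mem, int size, int flags)
{
	struct nfsiov_cached *nc = mem;
	int i;

	for (i = 0; i < NFSIOV_NCL; i++) {
		nc->nc_m[i] = m_getcl(M_NOWAIT, MT_DATA, 0);
		if (nc->nc_m[i] == NULL) {
			while (--i >= 0)
				m_freem(nc->nc_m[i]);
			return (ENOMEM);
		}
		nc->nc_iov[i].iov_base = mtod(nc->nc_m[i], void *);
		nc->nc_iov[i].iov_len = MCLBYTES;
	}
	return (0);
}

/* Runs only when UMA reclaims the item; releases the cached clusters. */
static void
nfsiov_fini(void *mem, int size)
{
	struct nfsiov_cached *nc = mem;
	int i;

	for (i = 0; i < NFSIOV_NCL; i++)
		m_freem(nc->nc_m[i]);
}

/* Call from module init; uma_zalloc() then hands back a ready iovec. */
static void
nfsiov_zone_create(void)
{
	nfsiov_zone = uma_zcreate("nfsiov", sizeof(struct nfsiov_cached),
	    NULL, NULL, nfsiov_init, nfsiov_fini, UMA_ALIGN_PTR, 0);
}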
I thought about this, but I don't think it helps, because the mbufs
are going to get handed into the network stack and queued in TCP and
then in the interface for potentially a long period of time -- with no
callback to NFS that would tell it that the mbufs are now free --
whereas the iovec (and in my implementation, the uio) can get freed
immediately and recycled.
If we had the ability to get 64k chunks of direct-mapped physmem --
from a boot-time-reserved region of memory -- and use those as mbufs,
then it might be a win, because then you could cache a buffer, the
mbuf that points to it, the iovec that points to the mbuf, and the uio
that points to the iovec all in the same allocation, and get a
callback when the last reference to that buffer drops. I expect that
would significantly improve performance on high-end servers like the
ones I've built, but we'd need to arrange for Rick to get a decent
64-bit server for testing. On a 96-GiB server, I'd be perfectly
willing to reserve a couple of 1 GiB superpages for this purpose.
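To make the layout concrete, here's the sort of combined object I'm
picturing (again hypothetical names, nothing tested): one 64k chunk
from the reserved region carrying its own mbuf, iovec, and uio.  The
mbuf would reference the data area as external storage (m_extadd(9))
so a free routine fires when the last reference drops and can return
the whole thing to the cache; that hookup is omitted below.

#include <sys/param.h>
#include <sys/mbuf.h>
#include <sys/uio.h>

#define NFSBUF_SIZE	(64 * 1024)

/* One object per 64k chunk of the boot-time-reserved region. */
struct nfsbuf {
	char		nb_data[NFSBUF_SIZE];	/* direct-mapped payload */
	struct mbuf	nb_mbuf;	/* to reference nb_data as external
					   storage; m_extadd(9) call not
					   shown */
	struct iovec	nb_iov;		/* points at nb_data */
	struct uio	nb_uio;		/* points at nb_iov */
};

/* Wire up the iovec and uio for 'len' valid bytes starting at 'off'. */
static void
nfsbuf_setup(struct nfsbuf *nb, size_t len, off_t off)
{
	nb->nb_iov.iov_base = nb->nb_data;
	nb->nb_iov.iov_len = len;

	nb->nb_uio.uio_iov = &nb->nb_iov;
	nb->nb_uio.uio_iovcnt = 1;
	nb->nb_uio.uio_offset = off;
	nb->nb_uio.uio_resid = len;
	nb->nb_uio.uio_segflg = UIO_SYSSPACE;
	nb->nb_uio.uio_rw = UIO_READ;
	nb->nb_uio.uio_td = NULL;
}

Each 1 GiB superpage would hold on the order of 16k such 64k chunks,
so two of them would go a long way on the kind of box I'm describing.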
-GAWollman