Big physically contiguous mbuf clusters
Konstantin Belousov
kostikbel at gmail.com
Thu Jan 30 13:45:30 UTC 2014
On Wed, Jan 29, 2014 at 03:11:21PM -0800, Navdeep Parhar wrote:
> On Wed, Jan 29, 2014 at 02:21:21PM -0800, Adrian Chadd wrote:
> > Hi,
> >
> > On 29 January 2014 10:54, Garrett Wollman <wollman at csail.mit.edu> wrote:
> > > Resolved: that mbuf clusters longer than one page ought not be
> > > supported. There is too much physical-memory fragmentation for them
> > > to be of use on a moderately active server. 9k mbufs are especially
> > > bad, since in the fragmented case they waste 3k per allocation.
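[Editorial note: the 3k figure quoted above can be checked with a quick userspace sketch, on the reading that a 9k (9216-byte, i.e. MJUM9BYTES) cluster ends up consuming three full 4k pages. This is illustrative arithmetic only, not kernel code:]

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE  4096
#define MJUM9BYTES 9216   /* stock FreeBSD 9k cluster size */

/* Bytes of slack when a cluster must be carved out of whole pages. */
static size_t
page_waste(size_t cluster_size)
{
	size_t pages = (cluster_size + PAGE_SIZE - 1) / PAGE_SIZE;

	return (pages * PAGE_SIZE - cluster_size);
}
```

A 9216-byte cluster needs three pages (12288 bytes), leaving 3072 bytes (3k) of slack per allocation, matching the figure above.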
> >
> > I've been wondering whether it'd be feasible to teach the physical
> > memory allocator about >page sized allocations and to create zones of
> > slightly more physically contiguous memory.
>
> I think this would be very useful. For example, a zone_jumbo32 would
> hit a sweet spot -- enough to fit 3 jumbo frames and some loose change
> for metadata. I'd like to see us improve our allocators and VM system
> to work better with larger contiguous allocations, rather than
> deprecating the larger zones. It seems backwards to push towards
> smaller allocation units when installed physical memory in a typical
> system continues to rise.
>
> Allocating 3 x 4K instead of 1 x 9K for a jumbo means 3x the number of
> vtophys translations, 3x the phys_addr/len traffic on the PCIe bus
> (scatter list has to be fed to the chip and now it's 3x what it has to
> be), 3x the number of "wrapper" mbuf allocations (one for each 4K
> cluster) which will then be stitched together to form a frame, etc. etc.
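[Editorial note: the 3x factor follows directly from cluster arithmetic. A sketch, assuming a 9018-byte jumbo frame (9000-byte MTU plus Ethernet header and FCS) and physically contiguous clusters:]

```c
#include <assert.h>
#include <stddef.h>

/*
 * Clusters needed to hold one frame.  Each cluster costs one wrapper
 * mbuf, one vtophys translation, and one scatter/gather entry fed to
 * the NIC over PCIe, so this count is the multiplier Navdeep describes.
 */
static size_t
clusters_needed(size_t frame_len, size_t cluster_size)
{
	return ((frame_len + cluster_size - 1) / cluster_size);
}
```

With 4K clusters a 9018-byte frame takes three clusters (three mbufs, three scatter entries); with a single contiguous 9K cluster it takes one.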
If the platform supports an IOMMU, the physical contiguity of the pages
can be ignored: with a properly configured busdma tag, the VT-d driver
allocates contiguous bus address space for the device-view mapping.
Of course, this is moot right now, since drivers have no idea whether an
IOMMU is present, and the IOMMU busdma backend is both disabled by
default and carries a non-trivial setup cost.
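[Editorial note: a toy userspace model of the point being made here. The function names and page-table representation are invented for illustration; the real mechanism is the VT-d page tables programmed through the busdma layer. The idea is that three scattered physical pages can be presented to the device as one contiguous bus-address (IOVA) range, so the device sees a single segment:]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PGSZ 4096

/*
 * Toy IOMMU: map n scattered physical pages to a contiguous run of
 * bus addresses starting at iova_base.  In hardware this would be a
 * set of page-table entries translating each IOVA page to phys[i].
 */
static void
toy_iommu_map_contig(const uint64_t *phys, size_t n, uint64_t iova_base,
    uint64_t *iova_out)
{
	for (size_t i = 0; i < n; i++)
		iova_out[i] = iova_base + i * PGSZ;
}

/* Count the contiguous segments in a list of page addresses. */
static size_t
count_segments(const uint64_t *pages, size_t n)
{
	size_t segs = (n > 0) ? 1 : 0;

	for (size_t i = 1; i < n; i++)
		if (pages[i] != pages[i - 1] + PGSZ)
			segs++;
	return (segs);
}
```

Three non-adjacent physical pages are three DMA segments without an IOMMU, but collapse to one segment in the device's bus-address view after remapping.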
>
> Regards,
> Navdeep
>
> >
> > For servers with lots of memory we could then keep these around and
> > only dip into them for temporary allocations (eg not VM pages that may
> > be held for some unknown amount of time.)
> >
> > Question is - can we enforce that kind of behaviour?
> >
> >
> >
> > -a
> > _______________________________________________
> > freebsd-net at freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-net
> > To unsubscribe, send any mail to "freebsd-net-unsubscribe at freebsd.org"