svn commit: r306661 - in stable/11/sys/dev/cxgbe: . tom
John Baldwin
jhb at freebsd.org
Mon Oct 10 18:25:43 UTC 2016
On Monday, October 10, 2016 02:09:01 PM Slawa Olhovchenkov wrote:
> On Mon, Oct 03, 2016 at 11:15:44PM +0000, John Baldwin wrote:
>
> > Author: jhb
> > Date: Mon Oct 3 23:15:44 2016
> > New Revision: 306661
> > URL: https://svnweb.freebsd.org/changeset/base/306661
> >
> > Log:
> > MFC 303405: Add support for zero-copy aio_write() on TOE sockets.
> >
> > AIO write requests for a TOE socket on a Chelsio T4+ adapter can now
> > DMA directly from the user-supplied buffer. This is implemented by
> > wiring the pages backing the user-supplied buffer and queueing special
> > mbufs backed by raw VM pages to the socket buffer. The TOE code
> > recognizes these special mbufs and builds a sglist from the VM page
> > array associated with the mbuf when queueing a work request to the TOE.
> >
> > Because these mbufs do not have an associated virtual address, m_data
> > is not valid. Thus, the AIO handler does not invoke sosend() directly
> > for these mbufs but instead inlines portions of sosend_generic() and
> > tcp_usr_send().
> >
> > An aiotx_buffer structure is used to describe the user buffer (e.g.
> > it holds the array of VM pages and a reference to the AIO job). The
> > special mbufs reference this structure via m_ext. Note that a single
> > job might be split across multiple mbufs (e.g. if it is larger than
> > the socket buffer size). The 'ext_arg2' member of each mbuf gives an
> > offset relative to the backing aiotx_buffer. The AIO job associated
> > with an aiotx_buffer structure is completed when the last reference to
> > the structure is released.
> >
> > Zero-copy aio_write()'s for connections associated with a given
> > adapter can be enabled/disabled at runtime via the
> > 'dev.t[45]nex.N.toe.tx_zcopy' sysctl.
> >
> > Sponsored by: Chelsio Communications
>
> Do you have any publicly available application patches that support this?
> Maybe nginx?
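[As a usage note on the commit log quoted above: the feature is toggled through the 'dev.t[45]nex.N.toe.tx_zcopy' sysctl it describes. A hypothetical invocation might look like the following -- the 't5nex' prefix and instance number 0 are assumptions; the actual node name depends on the adapter generation and probe order on your system:]

```shell
# Enable zero-copy aio_write() for TOE connections on the first T5 adapter
# (assumed to be instance 0 -- check your boot messages for the real name):
sysctl dev.t5nex.0.toe.tx_zcopy=1

# Disable it again at runtime:
sysctl dev.t5nex.0.toe.tx_zcopy=0
```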
Applications need to use aio_read(), ideally with at least two buffers (so
queue two reads, then when a read completes, consume the data and issue the
next read). I'm not sure nginx will find this all that useful, as web
servers tend to send a lot more data than they receive. The only software I
have patched explicitly for this is netperf.
--
John Baldwin
More information about the svn-src-all mailing list