virtio_net / netmap RX dropping frames
Vincenzo Maffione
v.maffione at gmail.com
Thu Oct 26 15:12:25 UTC 2017
So you are using netmap only in the guest (and not in the host).
And you are running a sender and a receiver inside the VM, both on the VM
interface.
Something like this?

# pkt-gen -i eth1 -f rx
# pkt-gen -i eth1 -f tx
What happens if you use pkt-gen rather than your application?
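If latency is your real concern, pkt-gen also has a ping/pong mode that
measures round-trip times over netmap (a quick sketch, again assuming the
VM interface is eth1; untested on your setup). On one end run the
reflector:

# pkt-gen -i eth1 -f pong

and on the other end the probe sender, which reports round-trip times:

# pkt-gen -i eth1 -f ping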
2017-10-26 16:31 GMT+02:00 Joe Buehler <aspam at cox.net>:
> Vincenzo Maffione wrote:
> > I guess you are using a FreeBSD guest. Is this the case? If you have the
>
> Sorry, I am using Linux (Ubuntu 16.04 LTS) for both host and VM. I am
> posting here at the standing request of the netmap driver author.
>
> The host has 24 CPUs @ 2.5 GHz and 128 GB of memory and is *idle*, so I am
> a bit disappointed.
>
> > chance, try a Linux guest to check if virtio-net works better there
> > (I've used netmap on the netmap-patched virtio-net in Linux guests,
> > never tried on FreeBSD).
> > The netmap ring size is just the NIC ring size. If you change the
> > virtio-net NIC ring size (via a sysctl on FreeBSD, I guess), the netmap
> > ring size will change accordingly.
>
> OK, I'll look into that. I increased the ring size on the host ixgbe but
> that had no effect, so I guess it must be virtio_net.
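> I'll first check inside the guest, though I am not sure the virtio_net
> driver even supports resizing its rings (untested, assuming the VM
> interface is eth1):
>
> # ethtool -g eth1                    (show current ring sizes)
> # ethtool -G eth1 rx 1024 tx 1024    (try to enlarge them)
>
> For reference, the virtio-net queue sizes can apparently also be set from
> the host side via QEMU (assuming QEMU 2.10 or newer; netdev name here is
> just an example):
>
> -device virtio-net-pci,netdev=net0,rx_queue_size=1024,tx_queue_size=1024
>
> but of course that would require host access.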
>
> > Anyway, for your specific use-case (VM accessing the physical 10G NIC)
> > there is a way better solution, which is the netmap passthrough.
>
> Unfortunately I don't have control of the host, just the VM, so pt
> netmap is not an option.
>
> My initial query was about frame drops, but the latency is also pretty
> bad. The Linux ping utility inside the VM reports 0.2 ms consistently
> without netmap in use. My app sees that value for almost all frames, but
> it spikes (up to 0.6 ms!) for a few frames, which is not acceptable for
> this application. I was expecting much better given the network stack
> bypass. And this is at just 100 frames/sec...
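> (For reference, the non-netmap baseline comes from something like the
> following, where 10.1.1.2 is a placeholder for the peer's address and
> -i 0.01 gives the 100 probes/sec mentioned above:
>
> # ping -i 0.01 -c 1000 10.1.1.2
> )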
>
> Joe Buehler
>
--
Vincenzo Maffione