NFS poor performance in ipfw_nat
KIRIYAMA Kazuhiko
kiri at kx.openedu.org
Thu Sep 20 06:15:52 UTC 2018
At Wed, 19 Sep 2018 13:57:04 +0000,
Rick Macklem wrote:
>
> KIRIYAMA Kazuhiko wrote:
> [good stuff snipped]
> >
> > Thanks for your advice. Adding '-lro' and '-tso' to ifconfig brought the
> > transfer rate up to almost native NIC speed:
> >
> > # dd if=/dev/zero of=/.dake/tmp/foo.img bs=1k count=1m
> > 1048576+0 records in
> > 1048576+0 records out
> > 1073741824 bytes transferred in 10.688162 secs (100460852 bytes/sec)
> > #
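> >
> > (For reference, the commands were along these lines; em0 here is a
> > placeholder for the actual interface name:)
> >
> > # ifconfig em0 -lro -tso
> >
> > and, to keep the setting across reboots, something like this in
> > /etc/rc.conf:
> >
> > ifconfig_em0="DHCP -lro -tso"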
> >
> > BTW, in a VM on bhyve, the transfer rate to an NFS mount of the VM
> > server (the bhyve host) is appreciably lower:
> >
> > # dd if=/dev/zero of=/.dake/tmp/foo.img bs=1k count=1m
> > 1048576+0 records in
> > 1048576+0 records out
> > 1073741824 bytes transferred in 32.094448 secs (33455687 bytes/sec)
> >
> > This was limited by disk transfer speed:
> >
> > # dd if=/dev/zero of=/var/tmp/foo.img bs=1k count=1m
> > 1048576+0 records in
> > 1048576+0 records out
> > 1073741824 bytes transferred in 21.692358 secs (49498623 bytes/sec)
> > #
> It sounds like this is resolved, thanks to Andrey.
I was surprised that the disk transfer speed is slower than the net
transfer speed. Incidentally, for the eMMC in my laptop PC:
# dd if=/dev/zero of=/var/tmp/foo.img bs=1k count=1m
1048576+0 records in
1048576+0 records out
1073741824 bytes transferred in 30.276720 secs (35464271 bytes/sec)
#
and for the RAID-Z3 pool on my bhyve hypervisor:
# dd if=/dev/zero of=/var/tmp/foo.img bs=1k count=1m
1048576+0 records in
1048576+0 records out
1073741824 bytes transferred in 24.832563 secs (43239267 bytes/sec)
#
The HDD slightly prevailed over the eMMC ;-p
>
> If you have more problems like this, another thing to try is reducing the I/O
> size with mount options at the client.
> For example, you might try adding "rsize=4096,wsize=4096" to your mount and
> then increase the size by powers of 2 (8192, 16384, 32768) and see which size
> works best. (This is another way to work around TSO problems. It also helps
> when a net interface or packet filter can't keep up with a burst of 40+ ethernet
> packets, which is what is generated when 64K I/O is used.)
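>
> As an illustration, a client mount along these lines (server name and
> paths here are placeholders):
>
> # mount -t nfs -o rsize=4096,wsize=4096 server:/export /mnt
>
> or the equivalent /etc/fstab entry:
>
> server:/export /mnt nfs rw,rsize=4096,wsize=4096 0 0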
>
> Btw, doing "nfsstat -m" on the client will show you what mount options are
> actually being used. This can be useful information.
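>
> For example (illustrative output; the exact flags vary by system and
> mount options):
>
> # nfsstat -m
> server:/export on /mnt
> nfsv3,tcp,resvport,hard,cto,sec=sys,...,rsize=4096,wsize=4096,...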
>
> Good to hear it has been resolved, rick
> [more stuff snipped]
>
---
KIRIYAMA Kazuhiko