Re: VirtIO/ipfw/natd throughput problem in hosted VM
- Reply: Jim Long : "Re: VirtIO/ipfw/natd throughput problem in hosted VM"
- In reply to: Jim Long : "VirtIO/ipfw/natd throughput problem in hosted VM"
Date: Mon, 29 Jan 2024 17:54:49 UTC
On Mon, Jan 29, 2024 at 12:47 PM Jim Long <freebsd-questions@umpquanet.com> wrote:

> I'm running FreeBSD 14.0-RELEASE in a quad-core, 12G VM commercially
> hosted under KVM (I'm told). It was installed from the main disc1.iso
> image, not any of the VM-centric ISOs.
>
> # grep -i network /var/run/dmesg.boot
> virtio_pci0: <VirtIO PCI (legacy) Network adapter> port 0xc000-0xc03f mem
> 0xfebd1000-0xfebd1fff,0xfe000000-0xfe003fff irq 11 at device 3.0 on pci0
> vtnet0: <VirtIO Networking Adapter> on virtio_pci0
> # ifconfig public
> public: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
>         options=4c079b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,TXCSUM_IPV6>
>         ether fa:16:3e:ca:b5:9c
>         inet 10.1.170.27 netmask 0xffffff00 broadcast 10.1.170.255
>         media: Ethernet autoselect (10Gbase-T <full-duplex>)
>         status: active
>         nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
>
> (10.1.170.27 is my obfuscated routable public IP.)
>
> Using ipfw *without* any "divert" rule, I get good network speed.
> Transferring two larger files, one time apiece:
>
> # ipfw show
> 65000 2966704 2831806570 allow ip from any to any
> 65535     135      35585 deny ip from any to any
>
> # 128MB @ > 94MB/s:
> # rm -f random-data-test-128M
> # time rsync -Ppv example.com:random-data-test-128M .
> random-data-test-128M
>     134,217,728 100%   94.26MB/s    0:00:01 (xfr#1, to-chk=0/1)
>
> sent 43 bytes  received 134,250,588 bytes  53,700,252.40 bytes/sec
> total size is 134,217,728  speedup is 1.00
>
> real    0m1.645s
> user    0m0.826s
> sys     0m0.788s
>
> # 1024MB @ > 105MB/s:
> # rm -f random-data-test-1G
> # time rsync -Ppv example.com:random-data-test-1G .
> random-data-test-1G
>   1,073,741,824 100%  105.98MB/s    0:00:09 (xfr#1, to-chk=0/1)
>
> sent 43 bytes  received 1,074,004,060 bytes  102,286,105.05 bytes/sec
> total size is 1,073,741,824  speedup is 1.00
>
> real    0m9.943s
> user    0m4.701s
> sys     0m5.769s
>
>
> But with an "ipfw divert" rule in place (and natd running as 'natd -n
> public'), across 5 transfers of a 2M file of /dev/random, I get very
> poor transfer speeds:
>
> # ipfw add 65000 divert natd all from any to any via public
> # ipfw show
> 60000       3        292 divert 8668 ip from any to any via public
> 65000 2950208 2817524670 allow ip from any to any
> 65535     135      35585 deny ip from any to any
>
> Test 1 of 5, < 180kB/s:
>
> # rm -f random-data-test-2M
> # time rsync -Ppv example.com:random-data-test-2M .
> random-data-test-2M
>       2,097,152 100%  179.08kB/s    0:00:11 (xfr#1, to-chk=0/1)
>
> sent 43 bytes  received 2,097,752 bytes  167,823.60 bytes/sec
> total size is 2,097,152  speedup is 1.00
>
> real    0m12.199s
> user    0m0.085s
> sys     0m0.027s
>
> Test 2 of 5, < 115kB/s:
>
> # rm -f random-data-test-2M
> # rsync -Ppv example.com:random-data-test-2M .
> random-data-test-2M
>       2,097,152 100%  114.40kB/s    0:00:17 (xfr#1, to-chk=0/1)
>
> sent 43 bytes  received 2,097,752 bytes  107,579.23 bytes/sec
> total size is 2,097,152  speedup is 1.00
>
> real    0m19.300s
> user    0m0.072s
> sys     0m0.051s
>
> Test 3 of 5, < 37kB/s (almost 57s elapsed time):
>
> # rm -f random-data-test-2M
> # time rsync -Ppv example.com:random-data-test-2M .
> random-data-test-2M
>       2,097,152 100%   36.49kB/s    0:00:56 (xfr#1, to-chk=0/1)
>
> sent 43 bytes  received 2,097,752 bytes  36,483.39 bytes/sec
> total size is 2,097,152  speedup is 1.00
>
> real    0m56.868s
> user    0m0.080s
> sys     0m0.023s
>
> Test 4 of 5, < 112kB/s:
>
> # rm -f random-data-test-2M
> # time rsync -Ppv example.com:random-data-test-2M .
> random-data-test-2M
>       2,097,152 100%  111.89kB/s    0:00:18 (xfr#1, to-chk=0/1)
>
> sent 43 bytes  received 2,097,752 bytes  102,331.46 bytes/sec
> total size is 2,097,152  speedup is 1.00
>
> real    0m19.544s
> user    0m0.095s
> sys     0m0.015s
>
> Test 5 of 5, 130kB/s:
>
> # rm -f random-data-test-2M
> # time rsync -Ppv example.com:random-data-test-2M .
> random-data-test-2M
>       2,097,152 100%  130.21kB/s    0:00:15 (xfr#1, to-chk=0/1)
>
> sent 43 bytes  received 2,097,752 bytes  127,139.09 bytes/sec
> total size is 2,097,152  speedup is 1.00
>
> real    0m16.583s
> user    0m0.072s
> sys     0m0.035s
>
>
> How can I tweak my network stack to get reasonable throughput from natd?
> I'm happy to respond to requests for additional details.
>
> Thank you!

The most glaringly obvious thing to me is to use in-kernel nat instead of
natd. Packets won't have to leave the kernel at that point. It's detailed
in ipfw(8); a rough sketch of what that can look like is below.

~Paul

--
__________________
:(){ :|:& };:
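
A minimal sketch of that in-kernel setup, assuming the outside interface is
still named "public" and reusing rule number 60000 from the ipfw show output
quoted above (both purely illustrative; see the NAT section of ipfw(8) for
the full syntax and options):

# kldload ipfw_nat                        # libalias-based in-kernel NAT for ipfw
# sysctl net.inet.ip.fw.one_pass=0        # keep evaluating rules after the nat action,
                                          # mirroring the current divert + allow flow
# ipfw nat 1 config if public same_ports  # NAT instance 1, bound to the outside interface
# ipfw delete 60000                       # remove the divert-to-natd rule
# ipfw add 60000 nat 1 ip from any to any via public

With the default net.inet.ip.fw.one_pass=1 the packet is instead accepted as
soon as the nat rule matches, which should also be fine here given the blanket
allow that follows. For boot time, rc.conf(5) has firewall_nat_enable and
related knobs, or the nat configuration can go into whatever script loads the
ruleset; natd can be stopped once the in-kernel rule is in place.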