[Bug 268490] [igb] [lagg] [vlan]: Intel i210 performance severely degraded

From: <bugzilla-noreply_at_freebsd.org>
Date: Mon, 17 Apr 2023 22:03:27 UTC
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=268490

--- Comment #52 from Daniel Duerr <duerrd561@gmail.com> ---
(In reply to Santiago Martinez from comment #51)
Hi Santiago,

Thanks for the follow-up, apologies for the delayed response.

I've recreated the original problem on a clean 12.4-RELEASE-p1 kernel build
from source:

[root@nfs ~]# cd /usr/src
[root@nfs src]# uname -a
FreeBSD nfs.tidepool.cloud 12.4-RELEASE-p1 FreeBSD 12.4-RELEASE-p1
releng/12.4-n235813-52442e904dfc GENERIC-NODEBUG  amd64
[root@nfs src]# ifconfig igb0 | grep mtu
igb0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
[root@nfs src]# ifconfig igb1 | grep mtu
igb1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
[root@nfs src]# ifconfig lagg0 | grep mtu
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
[root@nfs src]# ifconfig lagg0.8 | grep mtu
lagg0.8: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
[root@nfs src]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  1] local 172.27.6.135 port 5001 connected with 172.27.6.129 port 26020
[  2] local 172.27.6.135 port 5001 connected with 172.27.6.129 port 29025
[  3] local 172.27.6.135 port 5001 connected with 172.27.6.129 port 33549
recv failed: Connection reset by peer
[ ID] Interval       Transfer     Bandwidth
[  1] 0.00-79.37 sec  60.0 Bytes  6.05 bits/sec
recv failed: Connection reset by peer
[  2] 0.00-79.38 sec  60.0 Bytes  6.05 bits/sec
recv failed: Connection reset by peer
[  3] 0.00-79.38 sec  60.0 Bytes  6.05 bits/sec
[SUM] 0.00-122.10 sec   180 Bytes  11.8 bits/sec

You can see the performance is dismal, as expected. Now, I've tried your MTU
workaround by reducing the MTU on the logical (vlan) interface as you suggested:

[root@nfs src]# ifconfig lagg0.8 mtu 8974
[root@nfs src]# ifconfig igb0 | grep mtu
igb0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
[root@nfs src]# ifconfig igb1 | grep mtu
igb1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
[root@nfs src]# ifconfig lagg0 | grep mtu
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
[root@nfs src]# ifconfig lagg0.8 | grep mtu
lagg0.8: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 8974
[root@nfs src]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  1] local 172.27.6.135 port 5001 connected with 172.27.6.129 port 56026
[ ID] Interval       Transfer     Bandwidth
[  1] 0.00-10.00 sec  1.14 GBytes   981 Mbits/sec
[  2] local 172.27.6.135 port 5001 connected with 172.27.6.129 port 61809
[ ID] Interval       Transfer     Bandwidth
[  2] 0.00-10.00 sec   977 MBytes   819 Mbits/sec
[  3] local 172.27.6.135 port 5001 connected with 172.27.6.129 port 28069
[ ID] Interval       Transfer     Bandwidth
[  3] 0.00-10.00 sec  1.12 GBytes   963 Mbits/sec

Your workaround works for me as well, and the speeds are back to normal (great)
again. It sounds like you expected this based on your knowledge of the other
MTU-related bug. Should I chime in on that bug and provide feedback there?
And should I make this MTU reduction on the logical interface permanent in my
rc.conf for the time being?
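
In case it helps (and so you can sanity-check it), here's a rough sketch of what
I'm thinking of putting in /etc/rc.conf to make the change permanent. The
interface names, VLAN tag, and IP come from my setup above; the laggproto and
the /24 netmask are just assumptions for illustration, so adjust as needed:

# Physical ports: keep jumbo MTU on the igb NICs
ifconfig_igb0="up mtu 9000"
ifconfig_igb1="up mtu 9000"

# lagg0 over both ports (assuming LACP; use whatever laggproto is already in place)
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 mtu 9000 up"

# VLAN 8 on top of lagg0, with the reduced MTU from the workaround
vlans_lagg0="8"
ifconfig_lagg0_8="inet 172.27.6.135/24 mtu 8974"

Does that look reasonable, or would you set it up differently?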

-- 
You are receiving this mail because:
You are the assignee for the bug.