Re: Chelsio Forwarding performance and RELENG_13 vs RELENG_12 (solved)
- In reply to: mike tancsa : "Re: Chelsio Forwarding performance and RELENG_13 vs RELENG_12 (solved)"
Date: Wed, 23 Nov 2022 19:40:31 UTC
On 11/3/2022 2:20 PM, mike tancsa wrote:
>>> Yes, I think 4 queues are enough for 10G.
>>
>> Sadly, no luck. Still about the same rate of overflows :(
>
> FYI, I worked around the issue by using two 520-CR NICs instead of the
> one 540-CR NIC and performance is solid again with no dropped packets

Another configuration point on this. Moving to RELENG_13 brings some different defaults with respect to power/performance trade-offs for my motherboard and CPU (SuperMicro X11SCH-F, Xeon(R) E-2226G). On RELENG_13, hwpstate_intel attaches by default and is used to scale the CPU frequency up and down. I am guessing that, due to the somewhat bursty nature of the load, a CPU scaled down to 800MHz could not scale back up fast enough to deal with a sudden burst of traffic going from, say, 300Mb/s to 800Mb/s, and some packets would overflow the NIC's buffers. Printing out the CPU frequency once per second, it would constantly float up and down between 900 and 4300MHz. At first I couldn't quite get my head around the fact that the most lost packets happened during the lowest pps periods. Once I started to graph the CPU frequency, CPU temperature, pps, and Mb/s together, the pattern really stood out.

Sure enough, setting dev.hwpstate_intel.0.epp=0 on the cores, instead of the default of 50 (see HWPSTATE_INTEL(4)), made the difference.

# sysctl -a dev.cpufreq.0.freq_driver
dev.cpufreq.0.freq_driver: hwpstate_intel0
#

    ---Mike
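For anyone wanting to try the same workaround, a minimal sketch of what was described above, assuming a FreeBSD 13 box where each core exposes a dev.hwpstate_intel.N.epp node and hw.ncpu reflects the core numbering (run as root; this is a sketch, not a polished rc script):

```shell
#!/bin/sh
# Pin the Intel Speed Shift energy/performance preference (EPP) to 0
# ("favor performance") on every core, replacing the default of 50.
# See HWPSTATE_INTEL(4).

ncpu=$(sysctl -n hw.ncpu)
i=0
while [ "$i" -lt "$ncpu" ]; do
    # Each core gets its own hwpstate_intel instance on this hardware;
    # if a node is missing, sysctl just reports an error and we move on.
    sysctl "dev.hwpstate_intel.${i}.epp=0"
    i=$((i + 1))
done
```

To watch the effect, something like `while :; do sysctl -n dev.cpufreq.0.freq; sleep 1; done` prints the core 0 frequency once per second, which is roughly how the frequency graphing above was done. To make the setting survive a reboot, the same dev.hwpstate_intel.N.epp=0 lines can go in /etc/sysctl.conf.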