reno cwnd growth while app limited...

Scheffenegger, Richard Richard.Scheffenegger at netapp.com
Wed Sep 11 12:04:23 UTC 2019


Hi,

I was just looking at some graph data from running two parallel DCTCP flows against a CUBIC receiver (some internal validation) with traditional ECN feedback.

[inline graph: snd_cwnd over time for the two DCTCP flows]


Now, in the beginning, a single flow cannot over-utilize the link capacity and never runs into any loss or mark... but snd_cwnd grows unbounded (since DCTCP uses the newreno "cc_ack_received" mechanism).

However, newreno_ack_received is only supposed to grow snd_cwnd when CCF_CWND_LIMITED is set, which remains set as long as snd_cwnd < snd_wnd (the receiver-signaled receive window).
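To make that gate explicit, here is a simplified sketch of what the handler effectively does (this is not the actual cc_newreno code; the struct and field names are only illustrative, and the "cwnd limited" test just mirrors the behaviour described above):

struct cc_state {
	unsigned long cwnd;      /* congestion window, bytes */
	unsigned long snd_wnd;   /* receiver-advertised window, bytes */
	unsigned long ssthresh;  /* slow start threshold, bytes */
	unsigned int  maxseg;    /* sender MSS, bytes */
};

/*
 * Simplified sketch of a newreno-style ack_received handler.  The
 * "cwnd limited" flag stays set as long as cwnd is still below the
 * receiver's window, regardless of how much data the application
 * actually has in flight.
 */
static void
ack_received_sketch(struct cc_state *cc, unsigned long bytes_acked)
{
	int cwnd_limited = (cc->cwnd < cc->snd_wnd);

	if (!cwnd_limited)
		return;		/* receiver window is the bound; no growth */

	if (cc->cwnd < cc->ssthresh) {
		/* slow start: grow by bytes acked, capped at one MSS (RFC 5681) */
		cc->cwnd += (bytes_acked < cc->maxseg) ? bytes_acked : cc->maxseg;
	} else {
		/* congestion avoidance: roughly one MSS per RTT */
		unsigned long incr = (unsigned long)cc->maxseg * cc->maxseg / cc->cwnd;
		cc->cwnd += (incr > 0) ? incr : 1;
	}
}

So an application-limited flow keeps passing this check as long as the receiver keeps advertising a larger window.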

But is this still* the correct behavior?

Say the data flow rate is application limited (every n milliseconds, a few kB), and the receiver has signalled a large window - cwnd will grow until it matches the receiver's window. If the application then chooses to no longer restrict itself, it could burst out significantly more data than the queuing of the path can handle...

So, shouldn't there be a second condition for cwnd growth, e.g. that pipe (flightsize) is close to cwnd (factor 0.5 during slow start, and say 0.85 during congestion avoidance), to prevent sudden large bursts when a flow comes out of being application limited? The intention here would be to restrict the worst-case burst that could be sent out (which is dealt with differently in other stacks), so that it ideally still fits into the path's queues... A rough sketch of such a gate follows below.
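As a sketch of what I mean (hypothetical code, not anything existing in the tree), with the factors from above:

/*
 * Hypothetical gate for the proposal above.  Growth is only permitted
 * while the data in flight ("pipe") covers a large enough fraction of
 * the current cwnd, so cwnd can never run far ahead of what the flow
 * has actually been sending.
 */
static int
cwnd_growth_allowed(unsigned long pipe, unsigned long cwnd,
    unsigned long ssthresh)
{
	/* 0.5 during slow start, 0.85 during congestion avoidance,
	 * expressed in percent to stay in integer arithmetic */
	unsigned long pct = (cwnd < ssthresh) ? 50 : 85;

	return (pipe * 100 >= cwnd * pct);
}

In the ack_received path this would simply sit next to the existing CCF_CWND_LIMITED check, i.e. cwnd only grows when both conditions hold.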

RFC 5681 is silent on application-limited flows, though (but one could think of application-limiting a flow as another form of congestion, during which cwnd shouldn't grow...)

In the example above, growing cwnd up to about 500 kB and then remaining there would be approximately the expected setting - with the two competing flows each hovering at around 200-250 kB, a single flow should settle at roughly twice that...

*) I'm referring to the much higher likelihood nowadays that the application itself, through its pacing and transfer volume, violates the design principle of TCP, where the implicit assumption was that the sender has unlimited data to send, with the timing controlled at the full discretion of TCP.


Richard Scheffenegger
Consulting Solution Architect
NAS & Networking

NetApp
+43 1 3676 811 3157 Direct Phone
+43 664 8866 1857 Mobile Phone
Richard.Scheffenegger at netapp.com<mailto:Richard.Scheffenegger at netapp.com>




