Arg. TCP slow start killing me.
Jason Wolfe
nitroboost at gmail.com
Sun Nov 13 21:54:37 UTC 2011
Erich,
I forgot to mention that net.inet.tcp.delayed_ack can be a detriment on
high-latency paths; you might try setting it to 0 to see if it improves
your throughput.
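For example (standard sysctl usage, nothing pfSense-specific assumed):

# apply on the fly
sysctl net.inet.tcp.delayed_ack=0
# persist it across reboots if it helps
echo 'net.inet.tcp.delayed_ack=0' >> /etc/sysctl.conf

and just set it back to 1 if you see no difference.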
Jason Wolfe
On Sun, Nov 13, 2011 at 2:48 PM, Jason Wolfe <nitroboost at gmail.com> wrote:
> Erich,
>
> Slow start is actually just the initial ramp-up, limited by RFC 3390 being
> enabled by default (an initial window of 3-4 packets), but this only
> matters for the first few seconds of the stream. You can effectively speed
> that up with something like this, though:
>
> net.inet.tcp.rfc3390=0
> net.inet.tcp.slowstart_flightsize=10
> net.inet.tcp.sendspace=262144
> net.inet.tcp.recvspace=262144
>
> The first two disable RFC 3390's automatic sizing and allow 10 packets to
> be sent before an ACK is required, and the second two just bump up the
> starting socket buffer sizes. With your memory and the massive maximums
> you've set, there's no reason to force connections to slowly step up from
> such a low initial size. It looks like the numbers you used for the
> initial sizes are actually the default increment/step size of the window
> growth.
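>
> As a rough sanity check on the sizing (back-of-the-envelope; the 80 ms RTT
> here is just an assumed figure, plug in your real path latency): a single
> stream needs a window of roughly bandwidth x RTT to stay full, so
>
> 1 Gb/s x 0.080 s = 80,000,000 bits ~= 10 MB
>
> which means your 16 MB recvbuf_max leaves headroom for one fast flow,
> while an 8 KB starting recvspace is about three orders of magnitude below
> what such a flow eventually needs.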
>
> Also, since you mentioned latency being a factor here, try the sysctl
> below. If overruns are an issue you'll likely see a bit of an increase in
> retransmits, but it could have a sizable positive impact on the sawtooth.
>
> net.inet.tcp.inflight.enable=0
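>
> One easy way to judge the effect (plain netstat, comparing counters before
> and after the change):
>
> # snapshot the TCP retransmit counters, repeat after a few minutes of load
> netstat -s -p tcp | grep -i retrans
>
> If retransmits climb sharply with no throughput gain, set it back to 1.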
>
> Is it possible to upgrade to 8.2-STABLE? Cubic has shown some really
> great improvement on my high-latency paths, a steady 10% overall increase
> in some cases.
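>
> If you do move to a build with the modular congestion control framework,
> switching algorithms is just a module load plus a sysctl (I'm assuming
> cc_cubic ships with whatever version you land on; check cc.available
> first):
>
> kldload cc_cubic
> sysctl net.inet.tcp.cc.available        # confirm cubic is listed
> sysctl net.inet.tcp.cc.algorithm=cubic
>
> Only new connections pick up the change; established ones keep the
> algorithm they started with.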
>
> Jason Wolfe
>
> On Sun, Nov 13, 2011 at 2:16 PM, Erich Weiler <weiler at soe.ucsc.edu> wrote:
>
>> So, I have a FreeBSD 8.1 box that I'm using as a firewall (pfSense 2.0
>> really, which uses 8.1 as a base), and I'm filtering packets inbound and
>> I'm seeing a typical sawtooth pattern where I get high bandwidth, then a
>> packet drops somewhere, and the TCP connections back off a *lot*, then
>> slowly speed up again, then back off, and so on. These are all
>> higher-latency WAN connections.
>>
>> I get an average of 1.5 - 2.0 Gb/s incoming, but I see it spike to around
>> 3Gb/s every once in a while, then drop again. I'm trying to sustain that
>> 3Gb/s for as long as possible between drops.
>>
>> Given that 8.1 does not have the more advanced TCP congestion algorithms
>> like cubic and H-TCP that might help with that to some degree, I'm trying
>> to "fake it". ;)
>>
>> My box has 24GB RAM in it. Is there some tunable I can set that would
>> effectively buffer incoming packets, even though the buffers would
>> eventually fill up, just to delay the dropped-packet signal that tells the
>> hosts on the internet to back off? Could I, say, effectively buffer 10GB
>> of packets in the queue before the backoff signal is sent? Would setting
>> kern.ipc.nmbclusters or something similar help?
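>>
>> (Doing the arithmetic on my current setting: a standard mbuf cluster is
>> 2048 bytes, so kern.ipc.nmbclusters=262144 caps cluster memory at about
>> 262144 x 2048 B ~= 512 MB, nowhere near 10GB, so presumably I'd have to
>> raise it enormously.)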
>>
>> Right now I have:
>>
>> loader.conf.local:
>>
>> vm.kmem_size_max=12G
>> vm.kmem_size=10G
>>
>> sysctl.conf:
>>
>> kern.ipc.maxsockbuf=16777216
>> kern.ipc.nmbclusters=262144
>> net.inet.tcp.recvbuf_max=16777216
>> net.inet.tcp.recvspace=8192
>> net.inet.tcp.sendbuf_max=16777216
>> net.inet.tcp.sendspace=16384
>>
>> I guess the goal is to keep the bandwidth high, without dropoffs, for as
>> long as possible and without so many TCP resets on the streams.
>>
>> Any help much appreciated! I'm probably missing a key point, but that's
>> why I'm posting to the list. ;)
>>
>> cheers,
>> erich
>>