ipfw bandwidth shaping problems (intermittent latency)
rmkml
rmkml at wanadoo.fr
Wed Jun 25 08:34:15 PDT 2003
Hi Luigi,

Here is my test:

The gateway is a P4 2GHz with two onboard Intel gigabit NICs and 1GB of RAM.
It is connected to two hosts on a 100Mbit network:
host A on the internal interface,
and host B on the external interface.
These three hosts are dedicated to the test; there is no other traffic.
On FreeBSD 4.7-RELEASE (no patches, no cvsup),
I configure ipfw with a single pipe of bw 2400Kbit/s, no other rules, pipes
or queues (see the sketch below),
and send 20 pings from A to B.
The 10th ping shows about +-10ms,
the 20th ping about +-10ms,
and all the other pings are around +-1ms.
On the same gateway with FreeBSD 4.8-RELEASE (no patches, no cvsup)
there is no problem (+-1ms everywhere).
In the kernel config I added IPFIREWALL, DUMMYNET and HZ=1000 (not IPFW2),
and no other processes were running on the system.
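
Roughly, the configuration looks like this (the rule number, pipe number and
host name are just examples):

    # kernel config additions
    options IPFIREWALL
    options DUMMYNET
    options HZ=1000

    # on the gateway: a single pipe, all IP traffic through it
    ipfw add 100 pipe 1 ip from any to any
    ipfw pipe 1 config bw 2400Kbit/s

    # then, from host A
    ping -c 20 hostB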
Can you confirm this test on FreeBSD 4.7?
Regards.
PS: sorry for my bad English.
Luigi Rizzo wrote:
> OK, then it is very clear what is going on.
>
> You have some connection which occasionally sends a burst of
> data (more than 15 pkts because you have drops, but perhaps
> less than 50 as you don't see drops with a queue of 50), which
> in turn causes the long delay.
>
> 123ms at 2.4Mbit/s are approx 300Kbits or 32Kbytes of data + headers,
> or 22-23 full-sized packets.
>
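A quick back-of-the-envelope check of those figures, assuming 1500-byte
full-sized packets (the packet size is an assumption, it is not given above):

    # drain time of a full 50-slot queue of 1500-byte packets at 2400 Kbit/s
    echo "50 * 1500 * 8 / 2400" | bc            # -> 250 (ms)
    # packets that must be queued ahead of the probe to explain a 123 ms delay
    echo "0.123 * 2400000 / 8 / 1500" | bc -l   # -> ~24.6

So 123 ms corresponds to roughly two dozen queued full-sized packets, and a
completely full 50-slot queue would add up to about 250 ms of delay.
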
> Given the above numbers, I suspect (indeed, I would bet money on
> this!) that the offender is a TCP connection which has the window
> fully open and sends data intermittently -- e.g. a persistent http
> connection, or even the typical ssh connection in response to a
> command that causes a large burst of data.
>
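One way to catch such a burst in the act is to watch for runs of full-sized
TCP segments on the shaped interface while the latency spikes; the interface
name below is only a placeholder:

    # show full-sized TCP packets ("greater" matches on total packet length)
    tcpdump -ni em0 -c 50 'tcp and greater 1400'
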
> cheers
> luigi
>
> On Wed, Jun 25, 2003 at 03:12:20PM +0100, Andy Coates wrote:
> > Luigi Rizzo (rizzo at icir.org) wrote:
> > > On Wed, Jun 25, 2003 at 11:28:01AM +0100, Andy Coates wrote:
> > > > Andy Coates (andy at bribed.net) wrote:
> > > ...
> > > > > However, recently under a more utilised link we start to see every third
> > > > > packet or so has a higher latency than the rest. For example, an icmp
> > > ...
> > > > > ping to the host would show 10ms, 10ms, and then 190ms, then back to 10ms
> > > > > for another 2 pings and up to 190ms again.
> > >
> > > well your numbers below are actually quite different from
> > > your description above -- anyway, please post the output of
> > >
> > > ipfw pipe show
> >
> > Sorry, the numbers were just examples - the ones below are the current ones.
> > It also depends on how much of the link is being utilised: when we're pushing
> > 1500Kbit/s it can go a lot higher and vary more.
> >
> > With just the straight "pipe 1 config bw 2400Kbit/s":
> >
> > 00001: 2.400 Mbit/s 0 ms 50 sl. 1 queues (1 buckets) droptail
> > mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> >
> > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
> > 0 tcp xx.xx.xx.xx/2103 xx.xx.xx.xx/1238 424 44493 0 0 0
> >
> > 00002: 2.400 Mbit/s 0 ms 50 sl. 1 queues (1 buckets) droptail
> > mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> >
> > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
> > 0 tcp xx.xx.xx.xx/22 xx.xx.xx.xx/33231 477 360621 0 0 0
> >
> > Bear in mind I've been tweaking the values, so the figures keep resetting.
> >
> > When I add the "queue X" option, the slots change, as below:
> >
> >
> > 00001: 2.400 Mbit/s 0 ms 15 sl. 1 queues (1 buckets) droptail
> > mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> >
> > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
> > 0 tcp xx.xx.xx.xx/32855 xx.xx.xx.xx/22 1967 219937 0 0 0
> >
> > 00002: 2.400 Mbit/s 0 ms 15 sl. 1 queues (1 buckets) droptail
> > mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
> >
> > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
> > 0 tcp xx.xx.xx.xx/22 xx.xx.xx.xx/33231 2571 2236734 0 0 18
> >
> > That was just for 15 slots; I've been trying different numbers to find a level
> > at which they don't drop - but the latency still remains.
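For reference, the worst-case queueing delay grows linearly with the slot
count, so each setting implies a different latency ceiling. The figures below
assume 1500-byte packets at 2.4 Mbit/s and are illustrative, not measurements
from this thread:

    # max queueing delay = slots * 1500 bytes * 8 / 2400 Kbit/s
    pipe 1 config bw 2400Kbit/s queue 50   # up to ~250 ms
    pipe 1 config bw 2400Kbit/s queue 15   # up to  ~75 ms
    pipe 1 config bw 2400Kbit/s queue 5    # up to  ~25 ms, but drops sooner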
> >
> >
> > > to see what is going on in the pipe. My impression is still that
> > > you have a fair amount of (bursty) traffic going through the
> > > pipe which causes queues to build up.
> >
> > I wouldn't call it bursty to the point where it's varying by 50% or more.
> > And this happens at all levels of traffic (the numbers just vary, but the
> > odd packet still peaks).
> >
> > I'm going to try and upgrade to 4.8-STABLE (it'll be a while since it's a
> > production server and will have to be scheduled), but "rmkml" has been
> > helping me off-list to try different versions, and there might be something
> > different between 4.7 and 4.8.
> >
> >
> > Andy.
> >
> >
> >
> >
> > > cheers
> > > luigi
> > >
> > > > > I decided to play with the queue settings, and tried:
> > > > >
> > > > > pipe 1 config bw 2400Kbit/s queue 15
> > > > >
> > > > > This brought that third ping down to 70ms, so an improvement. This
> > > > > still isn't acceptable, however: the amount of bandwidth being used
> > > > > is only 1000Kbit/s, so I can't see where the problem is.
> > > > >
> > > > > Is there anything else I can change to improve the response/latency? Or
> > > > > is this some type of bug?
> > > >
> > > > Just to clarify what I mean:
> > > >
> > > > 64 bytes from x.x.x.x: icmp_seq=0 ttl=248 time=70.803 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=1 ttl=248 time=3.850 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=2 ttl=248 time=3.551 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=3 ttl=248 time=123.844 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=4 ttl=248 time=3.759 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=5 ttl=248 time=3.600 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=6 ttl=248 time=3.507 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=7 ttl=248 time=3.687 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=8 ttl=248 time=3.594 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=9 ttl=248 time=3.527 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=10 ttl=248 time=23.543 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=11 ttl=248 time=123.615 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=12 ttl=248 time=3.637 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=13 ttl=248 time=3.661 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=14 ttl=248 time=103.323 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=15 ttl=248 time=13.101 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=16 ttl=248 time=3.569 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=17 ttl=248 time=23.151 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=18 ttl=248 time=92.962 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=19 ttl=248 time=3.555 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=20 ttl=248 time=43.122 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=21 ttl=248 time=72.781 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=22 ttl=248 time=3.547 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=23 ttl=248 time=122.583 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=24 ttl=248 time=42.509 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=25 ttl=248 time=3.540 ms
> > > > 64 bytes from x.x.x.x: icmp_seq=26 ttl=248 time=52.660 ms
> > > >
> > > >
> > > > That's just plain odd to me.
> > > >
> > > > Andy.
> >
> > --
> > n: Andy Coates e: andy at bribed.net