[Bug 225535] Delays in TCP connection over Gigabit Ethernet connections; Regression from 6.3-RELEASE
bugzilla-noreply at freebsd.org
Tue Jan 30 13:06:10 UTC 2018
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=225535
--- Comment #7 from Aleksander Derevianko <aeder at list.ru> ---
I am now running tests on a Moxa DA-820 with kern.eventtimer.periodic=1.
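(For reference: this knob can also be flipped programmatically. Below is a
minimal sketch using FreeBSD's sysctlbyname(3), equivalent to running
'sysctl kern.eventtimer.periodic=1'; it is not part of the test program
itself:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	int old = 0, new = 1;
	size_t oldlen = sizeof(old);

	/* Read the current timer mode (0 = one-shot, 1 = periodic). */
	if (sysctlbyname("kern.eventtimer.periodic", &old, &oldlen,
	    NULL, 0) == -1)
		err(1, "sysctlbyname(read)");
	printf("kern.eventtimer.periodic was %d\n", old);

	/* Switch the event timer to periodic mode (needs root). */
	if (sysctlbyname("kern.eventtimer.periodic", NULL, NULL,
	    &new, sizeof(new)) == -1)
		err(1, "sysctlbyname(write)");
	return (0);
}
)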
One interesting thing: if you look at the tests with 6.3-RELEASE, about 94% of
samples are
1115322 send_sync 0 0 0 0
= send(small data) recv(small data) send(40Kb) recv(40Kb)
- so every send() and recv() call executes in <1ms.
About 6% of samples have
73629 send_sync 0 1 0 0
- so the first recv() takes 1ms, but that is fine, because it is the first
recv() after a nanosleep(~200ms) - and we can't expect nanosleep() to behave
exactly the same on two different computers. Moreover, the nanosleep() period
is computed from two local clock_gettime(CLOCK_MONOTONIC, ...) calls and a
subtraction.
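For clarity, the measurement pattern looks roughly like this (my own sketch
reconstructed from the description above, not the actual test source; socket
setup and error handling omitted):

#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>
#include <time.h>

/* Whole milliseconds elapsed between two CLOCK_MONOTONIC readings. */
static long
ms_between(const struct timespec *a, const struct timespec *b)
{
	return ((b->tv_sec - a->tv_sec) * 1000 +
	    (b->tv_nsec - a->tv_nsec) / 1000000);
}

/* One test cycle on an already-connected TCP socket s. */
static void
one_cycle(int s)
{
	static char sync_buf[4], data_buf[40 * 1024];
	struct timespec t0, t1, t2, t3, t4;
	struct timespec pause = { 0, 200 * 1000 * 1000 };	/* ~200ms */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	(void)send(s, sync_buf, sizeof(sync_buf), 0);	/* small sync */
	clock_gettime(CLOCK_MONOTONIC, &t1);
	(void)recv(s, sync_buf, sizeof(sync_buf), MSG_WAITALL);
	clock_gettime(CLOCK_MONOTONIC, &t2);
	(void)send(s, data_buf, sizeof(data_buf), 0);	/* 40Kb payload */
	clock_gettime(CLOCK_MONOTONIC, &t3);
	(void)recv(s, data_buf, sizeof(data_buf), MSG_WAITALL);
	clock_gettime(CLOCK_MONOTONIC, &t4);

	printf("send_sync %ld recv_sync %ld send_data %ld recv_data %ld\n",
	    ms_between(&t0, &t1), ms_between(&t1, &t2),
	    ms_between(&t2, &t3), ms_between(&t3, &t4));

	nanosleep(&pause, NULL);
}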
But for 10.3-RELEASE the samples divide approximately evenly between:
155042 send_sync 0 0 0 0
122890 send_sync 0 0 0 1
while the case with a delay on the first recv() is only about 0.05% of samples:
147 send_sync 0 1 0 0
The trailing 1 in the second pattern is on the second recv() - and in that
case the two computers have already been synchronized by the first small
recv().
P.S. The test with kern.eventtimer.periodic=1 produces the following results
over a short run:
root at fspa2:~/clock/new_res # grep times periodic.txt | awk '{print $3 " " $4 " " $6 " " $8 " " $10;}' | sort | uniq -c
1484 send_sync 0 0 0 0
1314 send_sync 0 0 0 1
1 send_sync 0 0 0 230
4 send_sync 0 1 0 0
root at fspa1:~/clock/new_res # grep times periodic.txt | awk '{print $3 " " $4 " " $6 " " $8 " " $10;}' | sort | uniq -c
1698 send_sync 0 0 0 0
1134 send_sync 0 0 0 1
1 send_sync 0 0 0 229
11 send_sync 0 1 0 0
So even a very short run produces a very large delay. One detail stands out:
fspa1:
times 2550: send_sync 0 recv_sync 0 send_data 0 recv_data 1 eval 52 sleep 247
times 2551: send_sync 0 recv_sync 0 send_data 0 recv_data 0 eval 52 sleep 248
times 2552: send_sync 0 recv_sync 0 send_data 0 recv_data 1 eval 52 sleep 247
times 2553: send_sync 0 recv_sync 0 send_data 0 recv_data 0 eval 52 sleep 247
times 2554: send_sync 0 recv_sync 0 send_data 0 recv_data 229 eval 52 sleep 19
times 2555: send_sync 0 recv_sync 0 send_data 0 recv_data 0 eval 52 sleep 248
times 2556: send_sync 0 recv_sync 0 send_data 0 recv_data 0 eval 52 sleep 247
times 2557: send_sync 0 recv_sync 0 send_data 0 recv_data 0 eval 52 sleep 247
times 2558: send_sync 0 recv_sync 0 send_data 0 recv_data 0 eval 52 sleep 248
times 2559: send_sync 0 recv_sync 0 send_data 0 recv_data 0 eval 52 sleep 247
fspa2:
times 2550: send_sync 0 recv_sync 0 send_data 0 recv_data 0 eval 52 sleep 248
times 2551: send_sync 0 recv_sync 0 send_data 0 recv_data 1 eval 52 sleep 247
times 2552: send_sync 0 recv_sync 0 send_data 0 recv_data 0 eval 52 sleep 248
times 2553: send_sync 0 recv_sync 0 send_data 0 recv_data 1 eval 52 sleep 247
times 2554: send_sync 0 recv_sync 0 send_data 0 recv_data 230 eval 52 sleep 18
times 2555: send_sync 0 recv_sync 0 send_data 0 recv_data 1 eval 52 sleep 247
times 2556: send_sync 0 recv_sync 0 send_data 0 recv_data 0 eval 52 sleep 247
times 2557: send_sync 0 recv_sync 0 send_data 0 recv_data 0 eval 52 sleep 248
times 2558: send_sync 0 recv_sync 0 send_data 0 recv_data 1 eval 52 sleep 247
times 2559: send_sync 0 recv_sync 0 send_data 0 recv_data 0 eval 52 sleep 247
The problem arises in the SAME cycle on both computers!
How is that possible? It looks as if, on one of the computers, both send and
receive were blocked (buffered) at some level.