em(4) patch for test
Michael Vince
mv at roq.com
Sun Oct 23 00:24:08 PDT 2005
Here is the second round of my non-scientific benchmarking and tests; I
hope this is useful.
I have been having fun benchmarking these machines, though I am starting
to get sick of it as well :). Still, I find it important to know that
things will work right when they are launched to do their real work.
The final results look good: after patching, the ab tests produced no
errors in the netstat -i output, even when grilling the server-C machine
to a rather high load.
####################
Test 1 (Non patched)
####################
netperf test on the unpatched machines. Load results below are from
top -S; no Ierrs or Oerrs.
A> /usr/local/netperf/netperf -l 60 -H server-C -t TCP_STREAM -i 10,2 -I 99,5 -- -m 4096 -s 57344 -S 57344
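For reference, my reading of the netperf options used here (worth
double-checking against the manual page for your netperf version):

# -l 60          run the test for 60 seconds
# -H server-C    remote host running netserver
# -t TCP_STREAM  bulk TCP throughput test
# -i 10,2        run at most 10 and at least 2 iterations
# -I 99,5        require 99% confidence that the result is within a 5% interval
# options after -- are test-specific:
# -m 4096        send 4096-byte messages
# -s 57344       local socket buffer size in bytes
# -S 57344       remote socket buffer size in bytes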
Server-C load snapshot:
last pid: 1644;  load averages: 0.94, 0.48, 0.23  up 0+06:30:41  12:29:59
239 processes: 7 running, 130 sleeping, 102 waiting
CPU states: 0.5% user, 0.0% nice, 2.3% system, 9.4% interrupt, 87.9% idle
Mem: 125M Active, 1160M Inact, 83M Wired, 208K Cache, 112M Buf, 1893M Free
Swap: 4096M Total, 4096M Free
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
13 root 1 171 52 0K 8K CPU1 0 0:00 99.02% idle: cpu1
14 root 1 171 52 0K 8K RUN 0 386:01 98.97% idle: cpu0
12 root 1 171 52 0K 8K CPU2 2 389:02 97.75% idle: cpu2
11 root 1 171 52 0K 8K RUN 3 385:10 61.87% idle: cpu3
62 root 1 -68 -187 0K 8K WAIT 3 1:03 14.16% irq64: em0
112 root 1 -44 -163 0K 8K CPU2 3 1:35 11.23% swi1: net
1644 root 1 4 0 1640K 1016K sbwait 3 0:09 7.25% netserver
30 root 1 -64 -183 0K 8K CPU3 3 0:19 2.15% irq16: uhci0
Server-A load snapshot:
last pid: 1550;  load averages: 0.34, 0.32, 0.21  up 0+07:28:38  12:41:33
134 processes: 3 running, 52 sleeping, 78 waiting, 1 lock
CPU states: 0.8% user, 0.0% nice, 10.2% system, 42.1% interrupt, 47.0% idle
Mem: 13M Active, 27M Inact, 70M Wired, 24K Cache, 213M Buf, 1810M Free
Swap: 4096M Total, 4096M Free
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
11 root 1 171 52 0K 16K RUN 438:50 54.98% idle
83 root 1 -44 -163 0K 16K WAIT 2:41 17.63% swi1: net
59 root 1 -68 -187 0K 16K RUN 1:48 11.23% irq64: em2
1547 root 1 4 0 3812K 1356K sbwait 0:17 8.84% netperf
27 root 1 -68 -187 0K 16K *Giant 0:51 2.78% irq16: em0 uhci0
####################
Test 2 (Non patched)
####################
On the Apache beat-up test with: A> 'ab -k -n 25500 -c 900 http://server-c/338kbyte.file'
you can see some errors in the netstat -i output on both machines:
C> netstat -i | egrep 'Name|em0.*Link'
Name Mtu  Network  Address              Ipkts Ierrs    Opkts Oerrs Coll
em0  1500 <Link#1> 00:14:22:12:4c:03 85133828  2079 63248162     0    0
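For reference, the ab flags used throughout these runs (standard Apache
ab options):

# -k        enable HTTP keep-alive
# -n 25500  total number of requests to make
# -c 900    number of requests to run concurrently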
top snapshot at highest load:
last pid: 2170;  load averages: 35.56, 16.54, 8.52  up 0+07:28:47  13:28:05
1182 processes: 125 running, 954 sleeping, 103 waiting
CPU states: 5.4% user, 0.0% nice, 37.2% system, 32.4% interrupt, 25.0% idle
Mem: 372M Active, 1161M Inact, 131M Wired, 208K Cache, 112M Buf, 1595M Free
Swap: 4096M Total, 4096M Free
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
13 root 1 171 52 0K 8K CPU1 0 0:00 100.00% idle: cpu1
62 root 1 -68 -187 0K 8K WAIT 3 5:57 89.79% irq64: em0
30 root 1 -64 -183 0K 8K CPU0 0 2:11 36.08% irq16: uhci0
12 root 1 171 52 0K 8K RUN 2 439:16 2.98% idle: cpu2
14 root 1 171 52 0K 8K RUN 0 435:36 2.98% idle: cpu0
11 root 1 171 52 0K 8K RUN 3 430:35 2.98% idle: cpu3
2146 root 1 -8 0 4060K 2088K piperd 2 0:01 0.15% rotatelogs
129 root 1 20 0 0K 8K syncer 0 0:08 0.05% syncer
112 root 1 -44 -163 0K 8K WAIT 0 4:50 0.00% swi1: net
110 root 1 -32 -151 0K 8K WAIT 3 1:05 0.00% swi4: clock sio
2149 www 66 113 0 44476K 31276K RUN 3 0:08 0.00% httpd
Server-A netstat and highest top snapshot (non-patched):
A> netstat -i | egrep 'em2.*Link|Name'
Name Mtu  Network  Address              Ipkts Ierrs    Opkts Oerrs Coll
em2  1500 <Link#3> 00:14:22:15:ff:8e 61005124   690 84620697     0    0
last pid: 1698;  load averages: 0.65, 0.29, 0.13  up 0+08:15:10  13:28:05
136 processes: 6 running, 53 sleeping, 76 waiting, 1 lock
CPU states: 3.4% user, 0.0% nice, 58.6% system, 32.0% interrupt, 6.0% idle
Mem: 21M Active, 27M Inact, 71M Wired, 36K Cache, 213M Buf, 1793M Free
Swap: 4096M Total, 4096M Free
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
1698 root 1 112 0 24800K 9136K RUN 0:39 61.61% ab
83 root 1 -44 -163 0K 16K RUN 4:07 13.38% swi1: net
59 root 1 -68 -187 0K 16K RUN 2:44 11.04% irq64: em2
11 root 1 171 52 0K 16K RUN 479:02 8.79% idle
27 root 1 -68 -187 0K 16K *Giant 1:16 3.03% irq16: em0 uhci0
84 root 1 -32 -151 0K 16K RUN 0:22 0.00% swi4: clock sio
While we are at it, here are the Apache 2 server-status results:
Server uptime: 1 minute
Total accesses: 17179 - Total Traffic: 5.4 GB
CPU Usage: u17.2734 s80.8672 cu0 cs0 - 164% CPU load
286 requests/sec - 92.2 MB/second - 329.9 kB/request
901 requests currently being processed, 379 idle workers
Looking at those Apache results, its serving ability must be partly just
bandwidth-limited at around the 92 Mbytes/sec mark. I should probably
test with smaller files to exercise its ability to handle more requests
at the same time.
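As a rough sanity check on the bandwidth-limited theory, my own
back-of-the-envelope conversion (assuming standard 1500-byte ethernet
frames; not measured on the wire):

# 92.2 MB/s * 8 = 737.6 Mbit/s of HTTP payload
# a ~1448-byte TCP payload rides in a ~1538-byte on-wire frame
# (TCP/IP/ethernet headers, FCS, preamble, inter-frame gap), so wire
# utilization is roughly 737.6 * 1538 / 1448 = ~783 Mbit/s of a
# 1000 Mbit/s link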
It appears (though this is really just a guess) that at least some of
the errors from netstat -i show up when Apache is not configured
properly and is failing to hand out requests; on the client side, ab
spits back these: apr_connect(): Invalid argument (22).
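To catch the counters actually ticking over during a run, rather than
only checking afterwards, a simple watch loop does the job (a sketch;
substitute whichever interface you care about):

A> while true; do netstat -i | egrep 'Name|em2.*Link'; sleep 1; done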
###
Patched results: both machines, A (client) and C (server), had the
patched em driver.
####################
Test 1 (Patched)
####################
A> /usr/local/netperf/netperf -l 360 -H server-C -t TCP_STREAM -i 10,2 -I 99,5 -- -m 4096 -s 57344 -S 57344
Server-C load snapshot:
last pid: 1905;  load averages: 0.63, 0.43, 0.20  up 0+08:49:40  23:11:02
239 processes: 6 running, 130 sleeping, 103 waiting
CPU states: 0.1% user, 0.0% nice, 3.0% system, 5.3% interrupt, 91.6% idle
Mem: 121M Active, 25M Inact, 80M Wired, 960K Cache, 68M Buf, 3033M Free
Swap: 4096M Total, 4096M Free
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
13 root 1 171 52 0K 8K CPU1 0 0:00 99.02% idle: cpu1
14 root 1 171 52 0K 8K RUN 0 524:16 98.29% idle: cpu0
12 root 1 171 52 0K 8K RUN 2 528:10 98.10% idle: cpu2
11 root 1 171 52 0K 8K RUN 3 526:49 57.42% idle: cpu3
62 root 1 -68 -187 0K 8K WAIT 3 0:40 14.70% irq64: em0
112 root 1 -44 -163 0K 8K WAIT 2 0:51 12.35% swi1: net
1885 root 1 4 0 1520K 960K sbwait 2 0:26 9.28% netserver
30 root 1 -64 -183 0K 8K RUN 3 0:11 2.73% irq16: uhci0
Server-A load snapshot:
last pid: 1690;  load averages: 0.43, 0.31, 0.14  up 0+08:49:30  23:11:06
132 processes: 3 running, 50 sleeping, 78 waiting, 1 lock
CPU states: 0.0% user, 0.0% nice, 7.1% system, 35.3% interrupt, 57.5% idle
Mem: 10M Active, 6896K Inact, 24M Wired, 4K Cache, 9072K Buf, 1930M Free
Swap: 4096M Total, 4096M Free
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
11 root 1 171 52 0K 16K RUN 525:59 54.15% idle
83 root 1 -44 -163 0K 16K WAIT 0:59 17.04% swi1: net
59 root 1 -68 -187 0K 16K RUN 0:34 11.33% irq64: em2
1674 root 1 4 0 3812K 1356K sbwait 0:21 9.03% netperf
27 root 1 -68 -187 0K 16K *Giant 0:15 3.81% irq16: em0 uhci0
The netperf tests created no errors in netstat -i.
C> netstat -i | egrep 'Name|em0.*Link'
Name Mtu  Network  Address              Ipkts Ierrs    Opkts Oerrs Coll
em0  1500 <Link#1> 00:14:22:12:4c:03 23889010     0 14542263     0    0
####################
Test 2 (Patched)
####################
Apache ab test (with: A> 'ab -k -n 25500 -c 900 http://server-c/338kbyte.file')
Server-C load snapshot:
last pid: 2004;  load averages: 35.26, 10.84, 4.23  up 0+08:59:39  23:21:01
1184 processes: 137 running, 946 sleeping, 101 waiting
CPU states: 7.9% user, 0.0% nice, 37.8% system, 29.3% interrupt, 25.0% idle
Mem: 418M Active, 25M Inact, 106M Wired, 960K Cache, 71M Buf, 2711M Free
Swap: 4096M Total, 4096M Free
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
13 root 1 171 52 0K 8K CPU1 0 0:00 99.02% idle: cpu1
112 root 1 -44 -163 0K 8K CPU3 3 2:08 62.26% swi1: net
62 root 1 -68 -187 0K 8K WAIT 3 1:36 42.53% irq64: em0
30 root 1 -64 -183 0K 8K CPU0 3 0:24 5.96% irq16: uhci0
11 root 1 171 52 0K 8K RUN 3 534:13 3.17% idle: cpu3
12 root 1 171 52 0K 8K RUN 2 536:57 3.03% idle: cpu2
14 root 1 171 52 0K 8K RUN 0 532:58 3.03% idle: cpu0
1980 root 1 -8 0 4060K 2088K piperd 2 0:01 0.30% rotatelogs
Server-A load snapshot:
last pid: 1700;  load averages: 0.66, 0.30, 0.20  up 0+08:59:30  23:21:06
132 processes: 5 running, 49 sleeping, 77 waiting, 1 lock
CPU states: 3.8% user, 0.0% nice, 61.7% system, 34.6% interrupt, 0.0% idle
Mem: 17M Active, 6932K Inact, 24M Wired, 4K Cache, 9440K Buf, 1921M Free
Swap: 4096M Total, 4096M Free
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
1700 root 1 118 0 25020K 9172K RUN 0:42 62.70% ab
83 root 1 -44 -163 0K 16K RUN 1:54 16.06% swi1: net
59 root 1 -68 -187 0K 16K RUN 1:07 12.79% irq64: em2
11 root 1 171 52 0K 16K RUN 533:11 3.56% idle
27 root 1 -68 -187 0K 16K *Giant 0:29 3.03% irq16: em0 uhci0
Interestingly, there were no errors this time from the ab tests:
A> netstat -i | egrep 'em2.*Link|Name'
Name Mtu  Network  Address              Ipkts Ierrs    Opkts Oerrs Coll
em2  1500 <Link#3> 00:14:22:15:ff:8e 24573749     0 35135895     0    0
C> netstat -i | egrep 'Name|em0.*Link'
Name Mtu  Network  Address              Ipkts Ierrs    Opkts Oerrs Coll
em0  1500 <Link#1> 00:14:22:12:4c:03 35135932     0 24572541     0    0
Maybe it was another luck-of-the-draw test for the patched em driver,
but I got higher speeds according to the Apache server-status and 'ab'
stats.
Also note that in some of the tests I posted earlier this week I was
running Apache 2.0.54 in worker mode; I am now on 2.0.55 in worker mode
and appear to be getting faster speeds from this version. Whether my
Apache just needed a restart or the new build is genuinely faster, I
don't know (the downside of not doing lab-strict benchmarking).
Snapshot of 'server-status':
Restart Time: Saturday, 22-Oct-2005 23:19:55 PDT
Parent Server Generation: 0
Server uptime: 1 minute 1 second
Total accesses: 20668 - Total Traffic: 6.5 GB
CPU Usage: u21.0469 s87.8828 cu0 cs0 - 179% CPU load
339 requests/sec - 109.2 MB/second - 330.0 kB/request
901 requests currently being processed, 379 idle workers
It pushed 109 Mbytes/sec, 6.5 GB of traffic in about a minute (6.5 GB /
61 s is roughly 109 MB/s, or about 875 Mbit/s).
Here are the ab client-side results:
Concurrency Level: 900
Time taken for tests: 75.783501 seconds
Complete requests: 25500
Failed requests: 0
Write errors: 0
Keep-Alive requests: 25500
Total transferred: 8784117972 bytes
HTML transferred: 8776728432 bytes
Requests per second: 336.48 [#/sec] (mean)
Time per request: 2674.712 [ms] (mean)
Time per request: 2.972 [ms] (mean, across all concurrent requests)
Transfer rate: 113194.03 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 13 199.1 0 6207
Processing: 52 2601 1512.4 2313 20428
Waiting: 1 173 632.8 22 7090
Total: 52 2615 1543.0 2315 20428
I watched the pf state table on the gateway (B) and ran the ab test
both with and without pf running; I saw no difference in results with
pf running stateful rules. ab's time per request stayed low and the
transfer rates stayed high. Most of the time the total state count was
exactly 900 (plus 1 for the ssh session), which makes sense given the
900 keep-alive concurrency level of the ab test; see the pfctl sketch
after the pftop output below.
pftop output:
RULE ACTION DIR LOG Q IF  PR  K PKTS     BYTES    STATES MAX INFO
0    Pass   In      Q em2 tcp M 37362067 1856847K 901        inet from any to server-c port = http
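For anyone wanting to reproduce the state count without pftop, a
one-liner on the gateway should show the same thing (a sketch; pfctl
prints one line per state entry by default):

B> pfctl -s states | wc -l

While the test runs this should hover around 901: the 900 keep-alive
connections plus the ssh session.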
Out of frustration at not getting any errors, I ran ab for longer,
making 50000 requests to push server-c to an even higher load (45.68),
but I still didn't get any errors out of netstat -i:
ab -k -n 50000 -c 900 http://server-c/338kbyte.file
Server-C load snapshot:
last pid: 2274;  load averages: 45.68, 27.33, 13.88  up 0+09:36:38  23:58:00
1202 processes: 151 running, 951 sleeping, 100 waiting
CPU states: 7.1% user, 0.0% nice, 37.2% system, 30.7% interrupt, 25.0% idle
Mem: 417M Active, 25M Inact, 118M Wired, 960K Cache, 83M Buf, 2699M Free
Swap: 4096M Total, 4096M Free
PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
13 root 1 171 52 0K 8K CPU1 0 0:00 99.02% idle: cpu1
112 root 1 -44 -163 0K 8K CPU3 3 5:25 63.77% swi1: net
62 root 1 -68 -187 0K 8K CPU3 3 3:40 45.12% irq64: em0
30 root 1 -64 -183 0K 8K CPU0 3 0:52 6.15% irq16: uhci0
110 root 1 -32 -151 0K 8K RUN 0 0:46 1.81% swi4: clock sio
2253 root 1 -8 0 4060K 2088K piperd 2 0:02 0.59% rotatelogs
2257 www 69 20 0 44476K 31292K kserel 2 0:15 0.05% httpd
12 root 1 171 52 0K 8K RUN 2 568:59 0.00% idle: cpu2
11 root 1 171 52 0K 8K RUN 3 566:19 0.00% idle: cpu3
14 root 1 171 52 0K 8K RUN 0 564:44 0.00% idle: cpu0
Server-A load snapshot:
last pid: 1789;  load averages: 0.90, 0.52, 0.25  up 0+09:36:30  23:58:06
132 processes: 7 running, 48 sleeping, 76 waiting, 1 lock
CPU states: 5.2% user, 0.0% nice, 58.8% system, 36.0% interrupt, 0.0% idle
Mem: 18M Active, 6924K Inact, 24M Wired, 4K Cache, 9472K Buf, 1908M Free
Swap: 4096M Total, 4096M Free
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
1789 root 1 118 0 25980K 10312K RUN 1:28 62.95% ab
83 root 1 -44 -163 0K 16K RUN 2:44 16.46% swi1: net
59 root 1 -68 -187 0K 16K RUN 1:45 13.13% irq64: em2
27 root 1 -68 -187 0K 16K *Giant 0:42 3.22% irq16: em0 uhci0
11 root 1 171 52 0K 16K RUN 565:16 0.00% idle
84 root 1 -32 -151 0K 16K RUN 0:25 0.00% swi4: clock sio
A> netstat -i | egrep 'em2.*Link|Name'
Name Mtu  Network  Address              Ipkts Ierrs    Opkts Oerrs Coll
em2  1500 <Link#3> 00:14:22:15:ff:8e 48321027     0 53902829     0    0
C> netstat -i | egrep 'Name|em0.*Link'
Name Mtu  Network  Address              Ipkts Ierrs    Opkts Oerrs Coll
em0  1500 <Link#1> 00:14:22:12:4c:03 53903066     0 48320427     0    0
####################
Detailed netstat -s output (Patched)
####################
This output was taken after the first patched ab test.
C> netstat -s
tcp:
25376005 packets sent
6513686 data packets (207546682 bytes)
709236 data packets (1023477672 bytes) retransmitted
528 data packets unnecessarily retransmitted
0 resends initiated by MTU discovery
14624527 ack-only packets (405291 delayed)
0 URG only packets
0 window probe packets
4233326 window update packets
1722 control packets
35938560 packets received
4028982 acks (for 207454007 bytes)
1240 duplicate acks
0 acks for unsent data
30734610 packets (943374026 bytes) received in-sequence
219 completely duplicate packets (27559 bytes)
0 old duplicate packets
0 packets with some dup. data (0 bytes duped)
0 out-of-order packets (0 bytes)
0 packets (0 bytes) of data after window
0 window probes
1182576 window update packets
0 packets received after close
0 discarded for bad checksums
0 discarded for bad header offset fields
0 discarded because packet too short
573 connection requests
1496 connection accepts
0 bad connection attempts
0 listen queue overflows
0 ignored RSTs in the windows
2066 connections established (including accepts)
2078 connections closed (including 873 drops)
892 connections updated cached RTT on close
919 connections updated cached RTT variance on close
913 connections updated cached ssthresh on close
3 embryonic connections dropped
4028982 segments updated rtt (of 3716704 attempts)
82407 retransmit timeouts
0 connections dropped by rexmit timeout
0 persist timeouts
0 connections dropped by persist timeout
0 keepalive timeouts
0 keepalive probes sent
0 connections dropped by keepalive
332121 correct ACK header predictions
30711908 correct data packet header predictions
1496 syncache entries added
0 retransmitted
0 dupsyn
1 dropped
1496 completed
0 bucket overflow
0 cache overflow
0 reset
0 stale
0 aborted
0 badack
0 unreach
0 zone failures
0 cookies sent
0 cookies received
10 SACK recovery episodes
0 segment rexmits in SACK recovery episodes
0 byte rexmits in SACK recovery episodes
84 SACK options (SACK blocks) received
0 SACK options (SACK blocks) sent
0 SACK scoreboard overflow
udp:
28 datagrams received
0 with incomplete header
0 with bad data length field
0 with bad checksum
0 with no checksum
0 dropped due to no socket
0 broadcast/multicast datagrams dropped due to no socket
0 dropped due to full socket buffers
0 not for hashed pcb
28 delivered
30 datagrams output
ip:
35938589 total packets received
0 bad header checksums
0 with size smaller than minimum
0 with data size < data length
0 with ip length > max ip packet size
0 with header length < data size
0 with data length < header length
0 with bad options
0 with incorrect version number
0 fragments received
0 fragments dropped (dup or out of space)
0 fragments dropped after timeout
0 packets reassembled ok
35938588 packets for this host
0 packets for unknown/unsupported protocol
0 packets forwarded (0 packets fast forwarded)
1 packet not forwardable
0 packets received for unknown multicast group
0 redirects sent
26082806 packets sent from this host
0 packets sent with fabricated ip header
706706 output packets dropped due to no bufs, etc.
0 output packets discarded due to no route
0 output datagrams fragmented
0 fragments created
0 datagrams that can't be fragmented
0 tunneling packets that can't find gif
0 datagrams with bad address in header
(Non-patched)
netstat -s output from Server-C after the ab benchmarking.
C> netstat -s
tcp:
80819322 packets sent
38131626 data packets (2094911993 bytes)
3391261 data packets (596376915 bytes) retransmitted
2466 data packets unnecessarily retransmitted
0 resends initiated by MTU discovery
32726002 ack-only packets (471792 delayed)
0 URG only packets
0 window probe packets
9363807 window update packets
577867 control packets
99932094 packets received
22930997 acks (for 2088935510 bytes)
166038 duplicate acks
0 acks for unsent data
68050890 packets (2507301359 bytes) received in-sequence
10977 completely duplicate packets (83343 bytes)
3 old duplicate packets
4 packets with some dup. data (393 bytes duped)
178 out-of-order packets (185678 bytes)
13973 packets (1584 bytes) of data after window
0 window probes
8540928 window update packets
10 packets received after close
0 discarded for bad checksums
0 discarded for bad header offset fields
0 discarded because packet too short
239473 connection requests
233007 connection accepts
96 bad connection attempts
356 listen queue overflows
0 ignored RSTs in the windows
345945 connections established (including accepts)
472549 connections closed (including 6743 drops)
7009 connections updated cached RTT on close
7052 connections updated cached RTT variance on close
6223 connections updated cached ssthresh on close
126535 embryonic connections dropped
22879742 segments updated rtt (of 19373996 attempts)
617076 retransmit timeouts
1 connection dropped by rexmit timeout
0 persist timeouts
0 connections dropped by persist timeout
2076 keepalive timeouts
8 keepalive probes sent
2068 connections dropped by keepalive
258255 correct ACK header predictions
67282248 correct data packet header predictions
233228 syncache entries added
438 retransmitted
2 dupsyn
224 dropped
233007 completed
220 bucket overflow
0 cache overflow
7 reset
1 stale
356 aborted
0 badack
0 unreach
0 zone failures
0 cookies sent
363 cookies received
385 SACK recovery episodes
305 segment rexmits in SACK recovery episodes
432188 byte rexmits in SACK recovery episodes
3004 SACK options (SACK blocks) received
245 SACK options (SACK blocks) sent
0 SACK scoreboard overflow
udp:
78 datagrams received
0 with incomplete header
0 with bad data length field
0 with bad checksum
0 with no checksum
3 dropped due to no socket
0 broadcast/multicast datagrams dropped due to no socket
0 dropped due to full socket buffers
0 not for hashed pcb
75 delivered
112 datagrams output
ip:
99583810 total packets received
0 bad header checksums
0 with size smaller than minimum
0 with data size < data length
0 with ip length > max ip packet size
0 with header length < data size
0 with data length < header length
0 with bad options
0 with incorrect version number
0 fragments received
0 fragments dropped (dup or out of space)
0 fragments dropped after timeout
0 packets reassembled ok
99583796 packets for this host
0 packets for unknown/unsupported protocol
0 packets forwarded (0 packets fast forwarded)
4 packets not forwardable
0 packets received for unknown multicast group
0 redirects sent
83968104 packets sent from this host
0 packets sent with fabricated ip header
3372686 output packets dropped due to no bufs, etc.
0 output packets discarded due to no route
0 output datagrams fragmented
0 fragments created
0 datagrams that can't be fragmented
0 tunneling packets that can't find gif
0 datagrams with bad address in header
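The counter that stands out when comparing the patched and non-patched
dumps is the output drop line. A quick way to pull it out on each box
(keeping in mind the two runs are not directly comparable, since the
non-patched machine had served far more total requests by that point):

C> netstat -s | grep 'dropped due to no bufs'

Patched, that reads 706706 drops against 26082806 packets sent (about
2.7%); non-patched, 3372686 against 83968104 (about 4.0%).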
Gleb Smirnoff wrote:
> Michael,
>
> big thanks for a very detailed report!
>
> On your next test round, can you please also keep an eye on
> the CPU load. Is it increased measurably by the patch or not.
>
> Thanks again!