Packet loss every 30.999 seconds
Bruce Evans
brde at optusnet.com.au
Wed Dec 19 10:09:34 PST 2007
On Thu, 20 Dec 2007, Bruce Evans wrote:
> On Wed, 19 Dec 2007, David G Lawrence wrote:
>> Considering that the CPU clock cycle time is on the order of 300ps, I
>> would say 125ns to do a few checks is pathetic.
>
> As I said, 125 nsec is a short time in this context. It is approximately
> the time for a single L2 cache miss on a machine with slow memory like
> freefall (Xeon 2.8 GHz with an L2 cache miss latency of 155.5 ns).
Perfmon counts for the cache misses during sync(1) (a way to reproduce
them is sketched after the data):
==> /tmp/kg1/z0 <==
vfs.numvnodes: 630
# s/kx-dc-accesses
484516
# s/kx-dc-misses
20852
misses = 4%
==> /tmp/kg1/z1 <==
vfs.numvnodes: 9246
# s/kx-dc-accesses
884361
# s/kx-dc-misses
89833
misses = 10%
==> /tmp/kg1/z2 <==
vfs.numvnodes: 20312
# s/kx-dc-accesses
1389959
# s/kx-dc-misses
178207
misses = 13%
==> /tmp/kg1/z3 <==
vfs.numvnodes: 80802
# s/kx-dc-accesses
4122411
# s/kx-dc-misses
658740
misses = 16%
==> /tmp/kg1/z4 <==
vfs.numvnodes: 138557
# s/kx-dc-accesses
7150726
# s/kx-dc-misses
1129997
misses = 16%
===
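For reference, something like the following should give similar counts
using hwpmc(4)/pmcstat(8) instead of perfmon (untested sketch: it assumes
an AMD K8 CPU with hwpmc loaded, and uses the ",os" qualifier from
pmc.k8(3) so that only kernel mode, where the vnode scan runs, is
counted):

	# count kernel-mode data cache accesses/misses while sync(1) runs
	kldload hwpmc
	sysctl vfs.numvnodes
	pmcstat -p k8-dc-access,os -p k8-dc-miss,os sync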
I forgot to count only active vnodes in the above, but vfs.freevnodes
was small (< 5%), so the totals are close enough.
I set kern.maxvnodes to 200000, but vfs.numvnodes saturated at 138557
(probably all that fits in kvm or main memory on i386 with 1GB RAM).
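For reference, the knobs involved:

	# raise the vnode limit, then watch the cache fill up
	sysctl kern.maxvnodes=200000
	sysctl vfs.numvnodes vfs.freevnodes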
With 138557 vnodes, a null sync(2) takes 39673 us according to kdump -R.
Spread over the 1129997 misses counted in z4 above, that is 35.1 ns per
miss. This is consistent with lmbench2's estimate of 42.5 ns for main
memory latency.
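Spelling out the arithmetic behind the 35.1 figure (39673 us = 39673000
ns, divided over the 1129997 misses from z4):

	# echo "scale=1; 39673000 / 1129997" | bc
	35.1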
Watching vfs.*vnodes confirmed that vnode caching still works like you
said (one way to watch it is shown after this list):
o "find /home/ncvs/ports -type f" creates a vnode only for each directory
o a repeated "find /home/ncvs/ports -type f" is fast because everything
  remains cached by VMIO. FreeBSD performed very badly on this benchmark
  before VMIO existed and was applied to directories
o "tar cf /dev/zero /home/ncvs/ports" creates vnodes for the files too.
Bruce