svn commit: r304436 - in head: . sys/netinet
Slawa Olhovchenkov
slw at zxy.spb.ru
Sun Aug 28 17:46:50 UTC 2016
On Sun, Aug 28, 2016 at 10:20:08AM -0700, Adrian Chadd wrote:
Hi,

thanks for the answer!
> Hi,
>
> There are some no brainers here so far(tm):
>
> working from the bottom up:
>
> * yeah, the ixgbe locking is a bit silly. Kip's work with iflib and
> converting ixgbe to use that instead of its own locking for managing
> things should remove the bottom two locks
I think no MFC to stable/10 is planned?
> * the rtalloc1_fib thing - that's odd, because it shouldn't be
> contending there unless there's some temporary redirect that's been
> learnt. What's the routing table look like on your machine? I remember
# netstat -rn
Routing tables

Internet:
Destination        Gateway            Flags    Netif Expire
default            37.220.36.1        UGS      lagg0
37.220.36.0/24     link#6             U        lagg0
37.220.36.11       link#6             UHS        lo0
127.0.0.1          link#5             UH         lo0

Internet6:
Destination        Gateway            Flags    Netif Expire
::/96              ::1                UGRS       lo0
::1                link#5             UH         lo0
::ffff:0.0.0.0/96  ::1                UGRS       lo0
fe80::/10          ::1                UGRS       lo0
fe80::%lo0/64      link#5             U          lo0
fe80::1%lo0        link#5             UHS        lo0
ff01::%lo0/32      ::1                U          lo0
ff02::/16          ::1                UGRS       lo0
ff02::%lo0/32      ::1                U          lo0
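
No D (dynamic, i.e. redirect-learned) routes here, as far as I can see.
If useful, I can check where the rtalloc1_fib calls come from with
something like this (a sketch, assuming the fbt entry probe exists for
this function on my kernel):

# dtrace -n 'fbt::rtalloc1_fib:entry { @[stack()] = count(); } tick-10s { exit(0); }'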
> investigating the rtentry reference counting a while ago and concluded
> that .. it's terrible, and one specific corner case was checking for
> routes from redirects. I'll look at my notes again and see what I
> find.
>
> kernel`vm_object_madvise+0x39e
> kernel`vm_map_madvise+0x3bb
> kernel`sys_madvise+0x82
> kernel`amd64_syscall+0x40f
> kernel`0xffffffff806c8bbb
> 97389657
>
> .. something's doing frequent madvise calls, which may be causing some
> hilarity between threads. What's the server? nginx?
Yes. In any case, this creates load on different CPU cores.
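
I can check what is actually calling madvise with something like this
(a sketch; the syscall provider's arg2 here is the advice value passed
to madvise):

# dtrace -n 'syscall::madvise:entry { @[execname, arg2] = count(); } tick-10s { exit(0); }'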
> Then the rest of the big entries are just a combination of rtentry
> locking, tcp timer locking, zfs locking and madvise locking. There's
> some sowakeup locking there as well, from the socket producer/consumer
> locking.
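If it helps, I can also try to break the blocking time out per
acquisition path with the DTrace lockstat provider (a sketch;
adaptive-block fires when a thread sleeps on a mutex and arg1 is the
sleep time in nanoseconds):

# dtrace -n 'lockstat:::adaptive-block { @[stack()] = sum(arg1); } tick-10s { exit(0); }'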
>
> -adrian