How to get ixgbe working on a 1GbE network?
KIRIYAMA Kazuhiko
kiri at truefc.org
Thu Feb 20 01:00:42 UTC 2020
On Thu, 20 Feb 2020 07:17:35 +0900,
Alan Somers wrote:
>
> Make sure that dns resolution is working, forward and reverse.
That's it! My mistake was resetting /etc/resolv.conf, which had
contained the local DNS server's IP. Thanks for pointing out my
rudimentary mistake.
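
For the archives, a quick way to confirm resolution in both directions
is drill(1) from the base system (a minimal sketch, with placeholder
names/addresses):

  # drill src_host               # forward: name -> address
  # drill -x xxx.xxx.xxx.xxx     # reverse: address -> name

Both directions should resolve for src_host and dest_host; an NFS
mount can stall for many seconds while a broken reverse lookup times
out.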
>
> On Wed, Feb 19, 2020, 2:53 PM Eric Joyner <erj at freebsd.org> wrote:
>
> > Have you tried turning off jumbo frames?
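> >
> > (A minimal way to test that, assuming ix0 is the interface in
> > question: drop the link back to the standard 1500-byte MTU,
> >
> >   # ifconfig ix0 mtu 1500
> >
> > and retry the mount. Fast at 1500 but slow at 9000 would point at
> > the switch mishandling jumbo frames.)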
> >
> > - Eric
> >
> > On Tue, Feb 18, 2020 at 10:04 PM KIRIYAMA Kazuhiko <kiri at truefc.org>
> > wrote:
> >
> > > Hi, all
> > >
> > > I wonder how to get ixgbe working on a 1GbE network. I tried the
> > > following test setup:
> > >
> > >               internet
> > >                   |
> > >           +-------+--------+
> > >           | Netgear JGS516 |
> > >           +---+-----+------+   +----------------------+
> > >               |     +----------+ 13.0-CURRENT(r356739)| src_host
> > >               |                +----------------------+
> > >               |                +----------------------+
> > >               +----------------+ 13.0-CURRENT(r353025)| dest_host
> > >                                +----------------------+
> > >
> > > And when I try to NFS-mount dest_host from src_host, the mount does
> > > not work smoothly. It takes about 9 seconds!!!:
> > >
> > > # /usr/bin/time -h mount -t nfs dest_host:/.dake /.dake
> > > 9.15s real 0.04s user 0.02s sys
> > > # nfsstat -m
> > > dest_host:/.dake on /.dake
> > > nfsv3,tcp,resvport,hard,cto,lockd,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=65536,wsize=65536,readdirsize=65536,readahead=1,wcommitsize=16777216,timeout=120,retrans=2
> > > # /usr/bin/time -h umount /.dake
> > > 27.26s real 0.04s user 0.02s sys
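> > >
> > > (To see where those seconds go, one option is to watch the wire
> > > while the mount runs, e.g.:
> > >
> > > # tcpdump -n -i ix0 host dest_host
> > >
> > > A long silent gap before the first NFS traffic would point at name
> > > resolution rather than the network path.)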
> > >
> > > The route from src_host to dest_host is set to MTU 9000:
> > >
> > > # route get dest_host
> > >    route to: xxx.xxx.xxx.xxx.foo
> > > destination: xxx.xxx.xxx.xxx.foo
> > >        mask: xxx.xxx.xxx.xxx
> > >         fib: 0
> > >   interface: ix0
> > >       flags: <UP,DONE,PINNED>
> > >  recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
> > >        0         0         0         0      9000         1         0
> > > #
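> > >
> > > (To verify that 9000-byte frames actually pass end to end, a
> > > don't-fragment ping at the largest payload can help, where
> > > 8972 = 9000 - 20 (IP header) - 8 (ICMP header):
> > >
> > > # ping -D -s 8972 dest_host
> > >
> > > No replies here, while a plain ping works, would mean jumbo frames
> > > are being dropped somewhere on the path.)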
> > >
> > > What's wrong? The src_host environment is as follows:
> > >
> > > # uname -a
> > > FreeBSD src_host 13.0-CURRENT FreeBSD 13.0-CURRENT #0 r356739M: Tue Jan 28 21:49:59 JST 2020     root at msrvkx:/usr/obj/usr/src/amd64.amd64/sys/XIJ  amd64
> > > # ifconfig ix0
> > > ix0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
> > >         options=4e538bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,WOL_UCAST,WOL_MCAST,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6,NOMAP>
> > >         ether 3c:ec:ef:01:a4:e0
> > >         inet xxx.xxx.xxx.xxx netmask 0xfffffff8 broadcast xxx.xxx.xxx.xxx
> > >         media: Ethernet autoselect (1000baseT <full-duplex,rxpause,txpause>)
> > >         status: active
> > >         nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
> > > # sysctl -a|grep jumbo
> > > kern.ipc.nmbjumbo16: 680520
> > > kern.ipc.nmbjumbo9: 1209814
> > > kern.ipc.nmbjumbop: 4083125
> > > vm.uma.mbuf_jumbo_16k.stats.xdomain: 0
> > > vm.uma.mbuf_jumbo_16k.stats.fails: 0
> > > vm.uma.mbuf_jumbo_16k.stats.frees: 0
> > > vm.uma.mbuf_jumbo_16k.stats.allocs: 0
> > > vm.uma.mbuf_jumbo_16k.stats.current: 0
> > > vm.uma.mbuf_jumbo_16k.domain.0.wss: 0
> > > vm.uma.mbuf_jumbo_16k.domain.0.imin: 0
> > > vm.uma.mbuf_jumbo_16k.domain.0.imax: 0
> > > vm.uma.mbuf_jumbo_16k.domain.0.nitems: 0
> > > vm.uma.mbuf_jumbo_16k.limit.bucket_cnt: 0
> > > vm.uma.mbuf_jumbo_16k.limit.bucket_max: 18446744073709551615
> > > vm.uma.mbuf_jumbo_16k.limit.sleeps: 0
> > > vm.uma.mbuf_jumbo_16k.limit.sleepers: 0
> > > vm.uma.mbuf_jumbo_16k.limit.max_items: 680520
> > > vm.uma.mbuf_jumbo_16k.limit.items: 0
> > > vm.uma.mbuf_jumbo_16k.keg.domain.0.free: 0
> > > vm.uma.mbuf_jumbo_16k.keg.domain.0.pages: 0
> > > vm.uma.mbuf_jumbo_16k.keg.efficiency: 99
> > > vm.uma.mbuf_jumbo_16k.keg.align: 7
> > > vm.uma.mbuf_jumbo_16k.keg.ipers: 1
> > > vm.uma.mbuf_jumbo_16k.keg.ppera: 4
> > > vm.uma.mbuf_jumbo_16k.keg.rsize: 16384
> > > vm.uma.mbuf_jumbo_16k.keg.name: mbuf_jumbo_16k
> > > vm.uma.mbuf_jumbo_16k.bucket_size_max: 253
> > > vm.uma.mbuf_jumbo_16k.bucket_size: 253
> > > vm.uma.mbuf_jumbo_16k.flags: 0x43a10000<TRASH,LIMIT,CTORDTOR,VTOSLAB,OFFPAGE,FIRSTTOUCH>
> > > vm.uma.mbuf_jumbo_16k.size: 16384
> > > vm.uma.mbuf_jumbo_9k.stats.xdomain: 0
> > > vm.uma.mbuf_jumbo_9k.stats.fails: 0
> > > vm.uma.mbuf_jumbo_9k.stats.frees: 0
> > > vm.uma.mbuf_jumbo_9k.stats.allocs: 0
> > > vm.uma.mbuf_jumbo_9k.stats.current: 0
> > > vm.uma.mbuf_jumbo_9k.domain.0.wss: 0
> > > vm.uma.mbuf_jumbo_9k.domain.0.imin: 0
> > > vm.uma.mbuf_jumbo_9k.domain.0.imax: 0
> > > vm.uma.mbuf_jumbo_9k.domain.0.nitems: 0
> > > vm.uma.mbuf_jumbo_9k.limit.bucket_cnt: 0
> > > vm.uma.mbuf_jumbo_9k.limit.bucket_max: 18446744073709551615
> > > vm.uma.mbuf_jumbo_9k.limit.sleeps: 0
> > > vm.uma.mbuf_jumbo_9k.limit.sleepers: 0
> > > vm.uma.mbuf_jumbo_9k.limit.max_items: 1209814
> > > vm.uma.mbuf_jumbo_9k.limit.items: 0
> > > vm.uma.mbuf_jumbo_9k.keg.domain.0.free: 0
> > > vm.uma.mbuf_jumbo_9k.keg.domain.0.pages: 0
> > > vm.uma.mbuf_jumbo_9k.keg.efficiency: 75
> > > vm.uma.mbuf_jumbo_9k.keg.align: 7
> > > vm.uma.mbuf_jumbo_9k.keg.ipers: 1
> > > vm.uma.mbuf_jumbo_9k.keg.ppera: 3
> > > vm.uma.mbuf_jumbo_9k.keg.rsize: 9216
> > > vm.uma.mbuf_jumbo_9k.keg.name: mbuf_jumbo_9k
> > > vm.uma.mbuf_jumbo_9k.bucket_size_max: 253
> > > vm.uma.mbuf_jumbo_9k.bucket_size: 253
> > > vm.uma.mbuf_jumbo_9k.flags: 0x43010000<TRASH,LIMIT,CTORDTOR,FIRSTTOUCH>
> > > vm.uma.mbuf_jumbo_9k.size: 9216
> > > vm.uma.mbuf_jumbo_page.stats.xdomain: 0
> > > vm.uma.mbuf_jumbo_page.stats.fails: 0
> > > vm.uma.mbuf_jumbo_page.stats.frees: 2199
> > > vm.uma.mbuf_jumbo_page.stats.allocs: 67734
> > > vm.uma.mbuf_jumbo_page.stats.current: 65535
> > > vm.uma.mbuf_jumbo_page.domain.0.wss: 0
> > > vm.uma.mbuf_jumbo_page.domain.0.imin: 0
> > > vm.uma.mbuf_jumbo_page.domain.0.imax: 0
> > > vm.uma.mbuf_jumbo_page.domain.0.nitems: 0
> > > vm.uma.mbuf_jumbo_page.limit.bucket_cnt: 0
> > > vm.uma.mbuf_jumbo_page.limit.bucket_max: 18446744073709551615
> > > vm.uma.mbuf_jumbo_page.limit.sleeps: 0
> > > vm.uma.mbuf_jumbo_page.limit.sleepers: 0
> > > vm.uma.mbuf_jumbo_page.limit.max_items: 4083125
> > > vm.uma.mbuf_jumbo_page.limit.items: 67298
> > > vm.uma.mbuf_jumbo_page.keg.domain.0.free: 0
> > > vm.uma.mbuf_jumbo_page.keg.domain.0.pages: 67298
> > > vm.uma.mbuf_jumbo_page.keg.efficiency: 97
> > > vm.uma.mbuf_jumbo_page.keg.align: 7
> > > vm.uma.mbuf_jumbo_page.keg.ipers: 1
> > > vm.uma.mbuf_jumbo_page.keg.ppera: 1
> > > vm.uma.mbuf_jumbo_page.keg.rsize: 4096
> > > vm.uma.mbuf_jumbo_page.keg.name: mbuf_jumbo_page
> > > vm.uma.mbuf_jumbo_page.bucket_size_max: 253
> > > vm.uma.mbuf_jumbo_page.bucket_size: 253
> > > vm.uma.mbuf_jumbo_page.flags: 0x43a10000<TRASH,LIMIT,CTORDTOR,VTOSLAB,OFFPAGE,FIRSTTOUCH>
> > > vm.uma.mbuf_jumbo_page.size: 4096
> > > # sysctl -a | grep nmbclusters
> > > kern.ipc.nmbclusters: 8166250
> > > # sysctl -a | grep intr_storm_threshold
> > > hw.intr_storm_threshold: 0
> > > #
> > >
> > > and the dest_host environment is as follows:
> > >
> > > # uname -a
> > > FreeBSD dest_host 13.0-CURRENT FreeBSD 13.0-CURRENT #0 r353025: Thu Oct  3 19:38:47 JST 2019     admin at dest_host:/ds/obj/current/13.0/r353025/ds/src/current/13.0/r353025/amd64.amd64/sys/GENERIC  amd64
> > > # ifconfig igb0
> > > igb0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 9000
> > >         options=4a520b9<RXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,NOMAP>
> > >         ether 0c:c4:7a:b3:cf:d4
> > >         inet xxx.xxx.xxx.xxx netmask 0xfffffff8 broadcast xxx.xxx.xxx.xxx
> > >         media: Ethernet autoselect (1000baseT <full-duplex>)
> > >         status: active
> > >         nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
> > > # sysctl -a|grep jumbo
> > > kern.ipc.nmbjumbo16: 339123
> > > kern.ipc.nmbjumbo9: 602886
> > > kern.ipc.nmbjumbop: 2034741
> > > # sysctl -a | grep nmbclusters
> > > kern.ipc.nmbclusters: 4069482
> > > # sysctl -a | grep intr_storm_threshold
> > > hw.intr_storm_threshold: 0
> > > #
> > >
> > > Best regards
> > > ---
> > > Kazuhiko Kiriyama
---
Kazuhiko Kiriyama