Throughput rate testing configurations
George V. Neville-Neil
gnn at neville-neil.com
Thu Jun 12 15:03:42 UTC 2008
At Wed, 11 Jun 2008 09:51:27 -0700,
security wrote:
>
> Steve Bertrand wrote:
> > Hi everyone,
> >
> > I see what I believe to be less-than-adequate communication
> > performance between many devices in parts of our network.
> >
> > Can someone recommend software (and config recommendations if
> > possible) that I can implement to test both throughput and pps
> > reliably, initially/primarily in a simple host-sw-host configuration?
> >
> > Perhaps I'm asking too much, but I'd like to have something that can
> > push the link to its absolute maximum capacity (for now, up to 1Gbps)
> > for a long sustained time, that I can just walk away from and let it
> > do its work, and review the reports later where it had to scale down
> > due to errors.
> >
> > What I'm really trying to achieve is:
> >
> > - test the link between hosts alone
> > - throw in a switch
> > - test the link while r/w to disk
> > - test the link while r/w to GELI disk
> > - test the link with oddball MTU sizes
> >
> Iperf or netperf are probably what you're looking for. Both try real
> hard NOT to tweak other subsystems while they run, so if you want to
> throw disk activity in, you'll need to run another tool or roll your own
> to create disk activity. You probably don't want to run them for
> extended periods on a production network. Depending on the adapters at
> each end, you may or may not be able to drive the link to saturation or
> alter the frame size. The Intel adapters I've seen allow jumbo frames and
> generally give good performance (as opposed to, say, the Realtek ones).
> It's also useful to have a managed switch in between so you can look at
> the counters on it.
>
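For a basic host-to-host run with iperf, the usual pattern is a server on one end and a client on the other. A sketch (iperf 2.x-style flags; the address below is a placeholder, and your version's man page is authoritative):

```shell
# On the receiving host: start iperf in server mode
iperf -s

# On the sending host: 60-second TCP test, reporting every 10 seconds
# (192.0.2.10 is a placeholder for the receiver's address)
iperf -c 192.0.2.10 -t 60 -i 10

# UDP mode with a target rate, useful for pps-oriented testing;
# the server-side report shows loss and jitter
iperf -c 192.0.2.10 -u -b 1000M -t 60
```

Lengthen `-t` for the long unattended runs described above, and redirect the output to a file to review later.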
I personally prefer netpipe because it tries odd-sized (non-power-of-2)
messages and tends to bring edge cases to light.
/usr/ports/benchmarks/netpipe
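After installing the port, a netpipe TCP run is started with the NPtcp binary on both ends, receiver first. A sketch (the address is a placeholder; see the NetPIPE documentation for the full flag list):

```shell
# On the receiving host: start NPtcp with no arguments and wait
NPtcp

# On the transmitting host: point at the receiver
# (192.0.2.10 is a placeholder for the receiver's address)
NPtcp -h 192.0.2.10
```

It then sweeps message sizes, including the odd non-power-of-2 sizes mentioned above, and writes per-size throughput results you can plot or inspect afterward.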
Later,
George