FreeBSD 5.x performance tips (ISC)

Robert Watson rwatson at freebsd.org
Wed Jan 14 07:41:21 PST 2004


On Mon, 12 Jan 2004, Peter Losher wrote:

> So, as many of you know ISC hosts a quad-Xeon server running FreeBSD 5.1
> (-p10 to be precise) which hosts half of ftp.freebsd.org, etc.  Many of
> you helped out with some teething pains w/ virtual memory sizes, and
> kernel panics.  Thanks :) 
> 
> The issue with the system now is that while the kernel is SMP-aware, and
> as I watch 5.2-REL get downloaded today, this system is like the arm
> muscle that is developed to lift that barbell, but not enough blood is
> getting everywhere, so the barbell is slowly moving up while the muscle
> cramps like hell.  In this case the system is ~70% idle, and around 150
> processes are locked and the performance starts to seriously decrease at
> times. (Entropy stops getting collected, etc.)  Not a pretty sight.  The
> CPUs are all spinlocking on an I/O channel, so high I/O translates into
> artificially high CPU and load averages. 
> 
> So where can I look for pointers on how I can squeeze better performance
> out of this configuration? I already have the usual sysctl entries
> installed. Any chance moving to 5.2 will help the situation? 

Not sure how you feel about running more debugging stuff on this system,
but it might actually be quite interesting to see the results of running
mutex profiling on it.  The documentation on mutex profiling isn't great;
there's basically just a note in NOTES:

  # MUTEX_PROFILING - Profiling mutual exclusion locks (mutexes).  This
  # records four numbers for each acquisition point (identified by
  # source file name and line number): longest time held, total time held,
  # number of non-recursive acquisitions, and average time held.  Measurements
  # are made and stored in nanoseconds (using nanotime(9)), but are presented
  # in microseconds, which should be sufficient for the locks which actually
  # want this (those that are held long and / or often).  The MUTEX_PROFILING
  # option has the following sysctl namespace for controlling and viewing its
  # operation:
  #
  #  debug.mutex.prof.enable - enable / disable profiling
  #  debug.mutex.prof.acquisitions - number of mutex acquisitions held
  #  debug.mutex.prof.records - number of acquisition points recorded
  #  debug.mutex.prof.maxrecords - max number of acquisition points
  #  debug.mutex.prof.rejected - number of rejections (due to full table)
  #  debug.mutex.prof.hashsize - hash size
  #  debug.mutex.prof.collisions - number of hash collisions
  #  debug.mutex.prof.stats - profiling statistics
  #
  options         MUTEX_PROFILING
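
For the "compile it in" step, something along these lines should work on a
5.x source tree (the config name MUTEXPROF and the i386 conf path are just
examples; use whatever matches your setup):

```shell
# Build a kernel with MUTEX_PROFILING enabled (FreeBSD 5.x).
# MUTEXPROF is a hypothetical config name; adjust the arch dir as needed.
cd /usr/src/sys/i386/conf
cp GENERIC MUTEXPROF
echo 'options         MUTEX_PROFILING' >> MUTEXPROF

cd /usr/src
make buildkernel KERNCONF=MUTEXPROF
make installkernel KERNCONF=MUTEXPROF
# ...then reboot into the new kernel.
```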

What you want to do is compile it in, wait for load to reach about
"average" -- perhaps a few minutes after booting, then turn profiling on
using the sysctl.  Wait a couple of minutes to get a sample, turn it off,
and dump the records from the stats sysctl, which should generate text.
The mutex profiling code isn't highly exercised, but can provide some
interesting insight into where your system is bottlenecked with the
current locking scheme.  It sounds like you're smacked up against the
Giant lock, which is not surprising given your workload.  The socket
locking work Sam has in Perforce may help quite a bit, but it isn't yet
ready to merge.  Once it stabilizes a bit, your environment would
probably be an excellent one in which to measure its impact. 
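
The sampling procedure above might look roughly like this, using the sysctl
names from the NOTES excerpt (the two-minute sample window and the output
path are arbitrary):

```shell
# Sketch of the profiling run described above.  Run as root, after the
# system has had a few minutes to reach its typical load.
sysctl debug.mutex.prof.enable=1    # turn profiling on
sleep 120                           # sample for a couple of minutes
sysctl debug.mutex.prof.enable=0    # turn profiling off

# Dump the per-acquisition-point records as text for later analysis.
sysctl debug.mutex.prof.stats > /tmp/mutex-prof.txt

# Sanity check: how many acquisition points were recorded, and whether
# any were rejected because the table filled up.
sysctl debug.mutex.prof.records debug.mutex.prof.rejected
```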

Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert at fledge.watson.org      Senior Research Scientist, McAfee Research
