Two controllers or a dual...

Doug Ledford dledford at redhat.com
Tue Feb 16 21:51:59 PST 1999


"Robert G. Brown" wrote:
> 
> Dear List Humans:
> 
> We are trying to design systems intended to hold as much as 100-200 GB
> of disk (to hold events collected at Fermilab and simulations thereof,
> where the actual total dataset sizes can run into the terabyte range
> quite easily).
> 
> A lot of the processing is likely to be disk I/O bound, and the question
> has arisen -- is it better to get a single dual scsi controller (e.g.
> the 7895) or two separate scsi controllers (e.g. 2940's)?  Presumably
> the dual controller will share an interrupt for both channels, and two
> controllers would be on different interrupts. 

This is likely to be true, but it is also likely to be a red herring.  At
least under Linux, the current 2.2.x SMP locking is such that it makes no
difference whether the channels share an interrupt or sit on separate
ones: the two interrupt handlers are locked out from each other either
way.
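
(For what it's worth, you can see how the channels actually land by
looking at /proc/interrupts.  The numbers below are made up, but the
shape is roughly what a 2.2.x SMP box shows -- two aic7xxx lines means
the channels are on separate IRQs, one shared line means they aren't:

    $ cat /proc/interrupts
               CPU0       CPU1
     10:    1283742    1279981   IO-APIC-level  aic7xxx
     11:     974410     969852   IO-APIC-level  aic7xxx

Either way, per the above, the 2.2.x locking serializes the handlers.)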

> Does the answer depend on
> whether or not the controllers are U, UW, U2W?  I realize that 2 U2W
> controllers will saturate the PCI bus anyway, but they should still give
> some gain over a single U2W controller.

The U2W controllers and disks have made a *massive* difference in the
performance tests I've run.  Additionally, there is at least one
non-official patch I recommend for large-memory systems that makes a
major difference to disk read speeds.  The test machine was a dual
PII-400 with two U2W channels, two Seagate Cheetah 9GB disks per
channel, and all four disks striped into a single RAID0 array.  The
per-char portion of a bonnie test was in the range of 15 to 20MB/s both
in and out, block output was around 65MB/s, and block input was about
60MB/s before the patch and a consistent 73.8MB/s after it.  Note that
73.8MB/s was consistent because it is the maximum media speed of those
four Cheetahs in the area of the stripes.  So, since I went from being
CPU bound at 62MB/s to disk bound at 73.8MB/s, I don't know the real
speedup I got, but it was significant :)  However, having noted the
benefit, here's the catch: it only affects machines with 256MB of RAM or
more.  This machine had 512MB of RAM.  A larger machine would see a
greater impact.
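
For reference, here's roughly what the software RAID side of that setup
looks like.  This is a sketch rather than my exact config -- the device
names and chunk size are placeholders -- but it's the general shape of
an /etc/raidtab for a four-disk RAID0 set under the 0.90 raidtools, plus
the bonnie run (keep the file size well above RAM, or you're just
benchmarking the buffer cache):

    # /etc/raidtab -- four-disk stripe set (devices are examples)
    raiddev /dev/md0
        raid-level            0
        nr-raid-disks         4
        persistent-superblock 1
        chunk-size            32
        device                /dev/sda1
        raid-disk             0
        device                /dev/sdb1
        raid-disk             1
        device                /dev/sdc1
        raid-disk             2
        device                /dev/sdd1
        raid-disk             3

    # mkraid /dev/md0
    # mke2fs /dev/md0
    # mount /dev/md0 /mnt/scratch
    # bonnie -d /mnt/scratch -s 1024    (a 1GB file on a 512MB box)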

> The system(s) in question will probably run linux, but the question
> itself is open to anyone with either measurements (ideal) or theoretical
> statements to make for either operating system.
> 
>   Thank you,
> 
>           rgb
> 
> P.S. -- if anyone wishes to comment on building the required disk out of
> e.g. 3 or 4 Cheetahs per U2W controller vs buying a commercial disk
> array (speed, cost comparisons, support in linux) that would be welcome
> as well.

I've had good luck with setups such as a single controller with dual
Ultra2 channels (the 3950U2B or 3950U2D) loaded up with 3 or 4 Cheetah
drives per channel.  If nothing else, the LVD bus is worth the upgrade
to U2W regardless of the speed issues.  An SE bus goes flaky with very
little cable length, but on my personal LVD bus here at home I have a
10' external 68-pin cable running from my computer to my drive tower,
about 6' or 7' of additional cable inside the tower, and then an
external terminator.  I tested that exact bus as an SE bus and it blew
up instantly, but as LVD I've never had a problem with it.  That's in
the range of 5m of bus, and I've still got a good 7m of leeway to work
with :)  I should also note that I got all the cabling and the drive
tower from Stay OnLine in RTP, so at least for you, Robert, it's all
locally available :)
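
To put numbers on that, the tally is something like:

    10' external cable        ~ 3.0 m
    6'-7' inside the tower    ~ 2.0 m
    ----------------------------------
    total                     ~ 5.0 m

against a 12m limit for a multi-drop LVD bus, versus only 1.5m-3m
(depending on device count) for an SE bus at Ultra speeds.  Same
cabling: hopeless as SE, roughly 7m of headroom as LVD.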

-- 
  Doug Ledford   <dledford at redhat.com>
   Opinions expressed are my own, but
      they should be everybody's.

