Duration of Blocked Interrupts

Robert G. Brown rgb at phy.duke.edu
Wed Apr 29 10:10:45 PDT 1998


On Tue, 28 Apr 1998, A.L.Fransen wrote:

> Another sidenote: At my work, we use/sell SGI computers. The O2 (the
> 'toaster') has both the aic7xxx controller and a 100 Mbit card on a
> PCI bus.
> Guess what?
> Problems. if (harddrive-usage >= 90% AND 100Mbit) then coredump.
> Why? Because the aic7xxx has a sort of 'ISA' way of handling interrupts
> (and I quote an SGI support engineer!), and the 100 Mbit card still had
> 'old' 10 Mbit kernel tuning. We had to increase the PCI bus latency to
> about 128; it then functioned properly.

How very fascinating.  I wonder what Doug Ledford makes of this.  The
(nonlinear and occasionally destructive) scaling interaction between
functionally disparate drivers in high performance systems continues to
be of great interest and will only get more so as PIIs go to 450 MHz,
the Merced chip comes out, and 1 Gbps ethernet becomes commonplace.  I
expect to see a whole new round of problems emerge when my first 440BX
system(s) arrive shortly.

> Robert G. Brown wrote:
> > 
> > On Tue, 28 Apr 1998, Alan Cox wrote:
> > 
> > > 100 Mbit cards often have ring buffers of about 20 frames - just
> > > servicing a messier ISA interrupt will do as much delaying as the
> > > longer AIC handler paths.
> > 
> > Right, so a design spec is "no ISA cards in a high performance
> > system";-).  The PCI specs make that pretty clear.  I'm afraid I do have
> > a serial/modem card and a sound card on the ISA bus, but they are low
> > interrupt density and I rarely use them.  Ditto "No IDE devices", right?
> > 
> Interesting. As a sidenote, I might add that the later Intel chipsets
> (HX, TX, FX, BX, etc.) actually disable some microcode to speed up the
> PCI bus if no ISA devices are found. I tried this with a simple generic
> benchmark, and it does increase overall speed by up to 10%.
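
To put a number on Alan's ring buffer remark up above: how long can
interrupts stay blocked before a 20-frame receive ring overflows?  A
trivial C fragment (the 64 and 1500 byte frame sizes are my
assumptions; Alan didn't give any):

    /* ringtime.c -- time for a 20-frame ring to fill at 100 Mbit/s */
    #include <stdio.h>

    int main(void)
    {
        double line_bps = 100.0e6;       /* 100 Mbit/s line rate */
        int    ring     = 20;            /* frames in the receive ring */
        int    sizes[]  = { 64, 1500 };  /* assumed frame sizes, bytes */
        int    i;

        for (i = 0; i < 2; i++) {
            /* fill time = frames x bits per frame / line rate */
            double usec = ring * sizes[i] * 8 / line_bps * 1e6;
            printf("%4d byte frames: ring fills in %7.1f usec\n",
                   sizes[i], usec);
        }
        return 0;
    }

With full-size frames the ring buys you about 2.4 msec of blocked
interrupts, but with minimum-size 64 byte frames only about 100 usec --
which is exactly the regime where a long-running interrupt handler or a
greedy bus master starts costing you packets.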

As a sidenote to your chipset sidenote (and I promise, no more
inclusions:-)
there is a really super white paper on the web:

http://www.zeitnet.com/atm/pci1-2.html

This article (purportedly on the design specs for a high-performance ATM
adapter) is really "nearly everything you wanted to know about the PCI
bus, latency, IDE, and all that".  I enclose the following snippet from
this paper for your reading pleasure:

%< Snip Snip============================================================

4.2. Bus Latency

Predictable and low bus latency are very important for ATM adapters. If
bus latency were not low and predictable, one would need to add data
memory to the adapter or provide a large buffer. As mentioned in Section
2, applications running on ATM networks require guaranteed latency
parameters. Isochronous (time sensitive) applications depend on
predictable latency values.

In a system with PCI devices, the latency value is a configuration
parameter that the adapter can request. The boot software determines the
latency timer value based on the load in the system. For a given latency
timer value, the maximum latency is fixed and predictable.

The PCI bus latency specification guidelines for PCI devices state that
the typical latency is short (likely under 2 usec) and predictable. If,
for example, the LT timer is set to 40, which is a typical value for the
PCI bus, the maximum latency would be 1.6 usec and the peak bandwidth
would be 100 MB/s. This bandwidth is still above the ATM bandwidth
requirement.

The latency is more difficult to predict for existing PCI systems that
have ISA or other expansion bus devices. This is because devices on the
expansion bus do not comply with the latency requirements of PCI. The
PCI Specification suggests using 30 usec as a worst case latency in such
systems.

Using 30 microseconds as a worst-case estimate of how long the ATM
adapter might have to wait to receive control of the bus, the amount of
data that would need to be stored on the adapter can be calculated. As
discussed in Section 4.1, the bus bandwidth required for sustained line
rate data transfer is about 20 MB/s.  Therefore:

 FIFO Size
        = 30 usec x 20 MB/s
        = 600 Bytes


Therefore, even at line speed, only a 1K FIFO is needed for the
worst-case latency. For the typical case of 2 microsecond bus latency, a
FIFO of less than 100 Bytes is necessary.

%< Snip Snip============================================================
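
Me again.  For the curious, the arithmetic above is easy to play with.
Here is a trivial C fragment that redoes the paper's numbers; note that
the 33 MHz PCI clock is my assumption (the paper never states one), and
the other constants come straight from the snippet:

    /* fifocalc.c -- redo the paper's latency/FIFO arithmetic */
    #include <stdio.h>

    int main(void)
    {
        double clock_mhz  = 33.0;  /* assumed PCI bus clock */
        double rate_mb_s  = 20.0;  /* sustained ATM line rate (Sec. 4.1) */
        double worst_usec = 30.0;  /* worst case with ISA devices present */
        double typ_usec   = 2.0;   /* "typical" latency on a clean PCI bus */
        int    lt_clocks  = 40;    /* example latency timer setting */

        /* an LT of N clocks lets a master hold the bus for N/f_clock */
        printf("LT=%d -> %.2f usec of bus tenure\n",
               lt_clocks, lt_clocks / clock_mhz);

        /* the FIFO must absorb latency x line rate; usec x MB/s = bytes */
        printf("worst case FIFO: %.0f bytes\n", worst_usec * rate_mb_s);
        printf("typical FIFO:    %.0f bytes\n", typ_usec * rate_mb_s);
        return 0;
    }

The worst case comes out to the paper's 600 bytes, and the typical
2 usec case to a mere 40 bytes, comfortably under their 100 byte figure.
(Amusingly, at 33 MHz an LT of 40 is only about 1.2 usec of bus tenure,
not the paper's 1.6 -- at 25 MHz it would be exactly 1.6, so presumably
they assume a slower clock or are folding in arbitration overhead.)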

It is amusing to note that BIOS settings basically NEVER permit one to
achieve 2 usec PCI timings even on a PCI-only system; presumably the
PCI-compliant devices could handle it, but the chipset just doesn't.  As
a practical note, the latency timer is almost always at least 64 by
default, and some things have to be set up at the absurd level of 128
(as you note) to get things to work correctly.  At that point a bus
master can squat on the bus for several microseconds at a stretch, so
there isn't much advantage to having a superfast system, at least on the
I/O side, and a lot of things might well break (for example, the ATM
device they're engineering with a 1K FIFO cache:-).
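
If you are curious what your BIOS actually programmed, the latency
timer is just a byte at offset 0x0d in each device's PCI configuration
header.  Here is a quick and dirty sketch that walks bus 0 via
configuration mechanism #1 -- Linux/x86 only, needs root for iopl(),
and the 33 MHz clock behind the usec conversion is again my assumption:

    /* pcilat.c -- dump the latency timer of every device on PCI bus 0 */
    #include <stdio.h>
    #include <sys/io.h>                 /* iopl(), inl(), outl() */

    static unsigned int pci_read(int bus, int dev, int fn, int reg)
    {
        /* type-1 config cycle: enable bit + bus/dev/fn + aligned reg */
        outl(0x80000000u | (bus << 16) | (dev << 11) | (fn << 8)
             | (reg & 0xfc), 0xCF8);
        return inl(0xCFC);
    }

    int main(void)
    {
        int dev;

        if (iopl(3) < 0) {              /* need I/O privilege, i.e. root */
            perror("iopl");
            return 1;
        }
        for (dev = 0; dev < 32; dev++) {
            unsigned int id = pci_read(0, dev, 0, 0x00);
            unsigned int lt;

            if ((id & 0xffff) == 0xffff)
                continue;               /* empty slot reads back all ones */
            lt = (pci_read(0, dev, 0, 0x0c) >> 8) & 0xff;
            printf("00:%02x vendor %04x device %04x LT %3u clocks"
                   " (~%.1f usec)\n",
                   dev, id & 0xffff, id >> 16, lt, lt * 0.03);
        }
        return 0;
    }

The setpci utility from the pciutils package can poke the same register
without a trip through the BIOS setup screens -- something like
"setpci -s 00:0c.0 latency_timer=40" (the value is hex, so that is 64
clocks, and the device address is of course made up).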

I love this stuff....

   rgb

Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu






