performance with LSI SAS 1064
Eric Anderson
anderson at freebsd.org
Thu Aug 30 18:48:17 PDT 2007
Scott Long wrote:
> 54MB/s is reasonable for 10k 2.5" disks. You might be able to squeeze
> some more performance out by upgrading to FreeBSD 7.0. I _do_not_
> recommend playing with the queue depth controls unless your console
> logs are quickly filling with messages about it.
Yea, 55-65MB/s is about right for that drive. Also, when I played with
the tagged queue depth previously, I never had any issues, and it solved
several SCSI (fabric/Fibre Channel, though) issues I was having. The
performance didn't change measurably when lowering it to 64, but below
that I did see a performance hit.
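For example, something like this (a minimal sketch, assuming the drive
shows up as da0) will show the current tag settings and then cap the
depth at 64:

    # show the current tagged openings for da0
    camcontrol tags da0 -v
    # cap the tagged queue depth at 64
    camcontrol tags da0 -N 64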
Eric
> Lutieri G. wrote:
>> These are my disks:
>>
>> Seagate Savvio (ST913401ss) 10K.1 SAS 3Gb/s 73-GB hard drive. In the
>> manual I found this information:
>>
>> Queue tagging (up to 64 queue tags supported)
>>
>> Is this the maximum number to set with camcontrol, using syntax like
>> this: camcontrol tags da0 -N 64 ?
>>
>> 2007/8/30, Eric Anderson <anderson at freebsd.org>:
>>> Scott Long wrote:
>>>> Lutieri G. wrote:
>>>>> 2007/8/30, Eric Anderson <anderson at freebsd.org>:
>>>>>> I'm confused - you said in your first post you were getting 3MB/s,
>>>>>> whereas above you show something like 55MB/s.
>>>>> Sorry! Using blogbench I got 3MB/s and 100% busy. Since it was 100%
>>>>> busy, I thought that 3MB/s was the maximum speed. But I was wrong...
>>>> %busy is a completely useless number for anything but untagged,
>>>> uncached disk subsystems. It's only an indirect measure of latency,
>>>> and there are better tools for measuring latency (gstat).
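>>>> For example (a minimal sketch, assuming the disk is da0):
>>>>
>>>>     gstat -f da0
>>>>
>>>> and watch the ms/r and ms/w columns for per-request latency.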
>>>>
>>>>>> You didn't say what kind of disks, how many, the configuration,
>>>>>> etc., so it's hard to answer much. The 55MB/s seems pretty decent
>>>>>> for many hard drives in a sequential use state (which is what dd
>>>>>> really tests).
>>>>>>
>>>>> SAS disks. Seagate; I don't know the exact model of the disks.
>>>>>
>>>>> OK. If 55MB/s is a decent speed, I'm happy. I'm having problems
>>>>> with the Squid cache, and maybe it's a problem related to the
>>>>> disks. But... I'm investigating and ruling out possibilities.
>>>>>
>>>>>
>>>>>> Your earlier errors probably happened because your queue depth is
>>>>>> set to 255 (or 256?) and the adapter can't handle that many. You
>>>>>> should use camcontrol to reduce it, to maybe 32. See the camcontrol
>>>>>> man page for the right usage. It's something that needs setting on
>>>>>> every boot, so a startup file is probably a good place for it.
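>>>>>> For example, a minimal sketch, assuming the disk is da0 and using
>>>>>> /etc/rc.local as the startup file:
>>>>>>
>>>>>>     # /etc/rc.local -- runs at the end of multi-user boot
>>>>>>     /sbin/camcontrol tags da0 -N 32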
>>>>>>
>>>>> Is there any way to figure out the right number to reduce it to?
>>>>>
>>>> If you're seeing erratic performance in production _AND_ you're
>>>> seeing lots of accompanying messages on the console about the tag
>>>> depth jumping around, you can use camcontrol to force the depth to a
>>>> lower number of your choosing. This kind of problem is pretty rare,
>>>> though.
>>> Scott, you are far more of a SCSI guru than I, so please correct me
>>> if this is incorrect. Can't you get a good estimate by knowing the
>>> queue depth of the target(s) and dividing it by the number of
>>> initiators? So in his case, he has one initiator and (let's say) one
>>> target. If the queue depth of the target (being the Seagate SAS
>>> drive) is 128 (see Seagate's paper here:
>>> http://www.seagate.com/staticfiles/support/disc/manuals/enterprise/savvio/Savvio%2015K.1/SAS/100407739b.pdf
>>> ), then he should have to reduce it down from 25[56] to 128, correct?
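>>> In other words, as a rough rule of thumb: per-initiator depth =
>>> target queue depth / number of initiators. Here that would be
>>> 128 tags / 1 initiator = 128.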
>>>
>>> With QLogic cards connected to a fabric, I saw queue depth issues under
>>> heavy load.
>>>
>>> Eric