practical maximum number of drives
aurfalien
aurfalien at gmail.com
Wed Feb 5 18:42:44 UTC 2014
Cool.
But I was more curious about what led you to using one HBA over a few more.
You mentioned something about interrupts; what problems manifested as a result of multiple HBAs?
- aurf
On Feb 5, 2014, at 12:52 AM, Daniel Kalchev <daniel at digsys.bg> wrote:
> Ok, two things.
>
> First, it was a typo -- the number is 122 devices, and I actually got it from this FAQ entry: http://www.supermicro.com/support/faqs/faq.cfm?faq=10004
> I never use these cards for anything other than as an HBA.
>
> It is interesting to see that LSI claims 3000 devices. Perhaps the firmware has changed? Or are there different variations of the chip/implementation?
>
> Daniel
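
For what it's worth, one quick way to see which firmware a given mps(4) controller is actually running (this assumes the controller in question is unit 0; sas2flash is LSI's optional utility and may not be installed):

    # Dump the mps(4) driver's sysctl tree for controller 0; on many
    # versions this includes the running firmware version.
    sysctl dev.mps.0

    # LSI's flash utility, if installed, also lists firmware and NVDATA
    # versions for every attached controller.
    sas2flash -listall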
>
> On 05.02.14 10:08, Rich wrote:
>> The SAS2008 has a limit of 112 drives?
>>
>> http://www.lsi.com/downloads/Public/SAS%20ICs/LSISAS2008/SCG_LSISAS2008_PB_043009.pdf
>> claims "up to 3000 devices."
>>
>> SAS2008 is a PCIe gen 2 x8 chip.
>>
>> I suspect the bottleneck order would go SAS expander then SAS2008 then PCIe.
>>
>> - Rich
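
For rough context, some back-of-the-envelope numbers for those three links, assuming 6Gb/s SAS and a single x4 wide port up to each expander (actual topology and protocol overhead will vary):

    # PCIe 2.0 is roughly 500 MB/s per lane after 8b/10b encoding
    echo $((8 * 500))   # x8 slot                       -> ~4000 MB/s
    # SAS 2.0 is roughly 600 MB/s per 6Gb/s lane after 8b/10b encoding
    echo $((8 * 600))   # all 8 phys on the SAS2008     -> ~4800 MB/s
    echo $((4 * 600))   # one x4 wide port to expander  -> ~2400 MB/s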
>>
>> On Wed, Feb 5, 2014 at 1:48 AM, Daniel Kalchev <daniel at digsys.bg> wrote:
>>> I also wonder how you managed to go over the LSI2008's limit of 112
>>> drives...
>>>
>>>
>>> On 05.02.14 07:36, aurfalien wrote:
>>>> Hi Graham,
>>>>
>>>> When you say it behaved better with one HBA, what were the issues that made
>>>> you go that route?
>>>>
>>>> Also, I'm curious that you have that many drives on one PCIe card. Is it
>>>> PCIe 3.0 or similar, and is saturation an issue?
>>>>
>>>> - aurf
>>>>
>>>> On Feb 4, 2014, at 8:27 PM, Graham Allan <allan at physics.umn.edu> wrote:
>>>>
>>>>> This may well be a question with no real answer, but since we're speccing
>>>>> out a new ZFS-based storage system, I've been asked what the maximum number
>>>>> of drives it can support would be (for a hypothetical expansion option).
>>>>> While there are some obvious limits such as SAS addressing, I assume there
>>>>> must be more fundamental ones in the kernel or drivers, and the practical
>>>>> limits will be very different from the hypothetical ones.
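
As a sanity check against whatever ceiling you do hit, it is easy to count what CAM has actually attached (the grep patterns below just assume the disks show up as da devices):

    # Count the disks CAM has attached as da(4) peripherals
    camcontrol devlist | grep -c 'da[0-9]'

    # Or count the disk names the kernel reports
    sysctl -n kern.disks | tr ' ' '\n' | grep -c '^da'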
>>>>>
>>>>> So far the largest system we've built is using three 45-drive chassis on
>>>>> one SAS2008 (mps) controller, so 135 drives total. Over many months of
>>>>> running we had several drives fail and be replaced, and eventually the OS
>>>>> (9.1) failed to assign new da devices. It was time to patch the system and
>>>>> reboot anyway, which solved it, but we did wonder if we were running into
>>>>> some kind of limit around 150 drives - though I don't see why.
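
In case it helps next time, it is sometimes possible to get CAM to pick up replacement disks without a reboot; whether it would have helped in this particular situation is of course another question:

    # Ask CAM to rescan every bus for newly arrived devices
    camcontrol rescan all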
>>>>>
>>>>> Interestingly, we initially built this system with each drive chassis on its
>>>>> own SAS2008 HBA, but it ultimately behaved better daisy-chained with only one.
>>>>> I think I saw a hint somewhere that this could have something to do with
>>>>> interrupt sharing...
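
On the interrupt-sharing guess, one quick way to see how interrupts are spread across the HBAs is to check the per-device counters (the 'mps' filter just assumes the controllers attach via that driver):

    # Per-device interrupt totals and rates; one line per mps instance
    vmstat -i | grep mps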
>>>>>
>>>>> Thanks for any insights,
>>>>>
>>>>> Graham