vinum in 4.x poor performer?
Marc G. Fournier
scrappy at hub.org
Wed Feb 9 12:20:19 PST 2005
On Wed, 9 Feb 2005, Mark A. Garcia wrote:
> Marc G. Fournier wrote:
>
>>
>> Self-followup .. the server config is as follows ... did I do maybe
>> mis-configure the array?
>>
>> # Vinum configuration of neptune.hub.org, saved at Wed Feb 9 00:13:52 2005
>> drive d0 device /dev/da1s1a
>> drive d1 device /dev/da2s1a
>> drive d2 device /dev/da3s1a
>> drive d3 device /dev/da4s1a
>> volume vm
>> plex name vm.p0 org raid5 1024s vol vm
>> sd name vm.p0.s0 drive d0 plex vm.p0 len 142314496s driveoffset 265s plexoffset 0s
>> sd name vm.p0.s1 drive d1 plex vm.p0 len 142314496s driveoffset 265s plexoffset 1024s
>> sd name vm.p0.s2 drive d2 plex vm.p0 len 142314496s driveoffset 265s plexoffset 2048s
>> sd name vm.p0.s3 drive d3 plex vm.p0 len 142314496s driveoffset 265s plexoffset 3072s
>>
>> based on an initial config file that looks like:
>>
>> neptune# cat /root/raid5
>> drive d0 device /dev/da1s1a
>> drive d1 device /dev/da2s1a
>> drive d2 device /dev/da3s1a
>> drive d3 device /dev/da4s1a
>> volume vm
>> plex org raid5 512k
>> sd length 0 drive d0
>> sd length 0 drive d1
>> sd length 0 drive d2
>> sd length 0 drive d3
>>
> It's worth pointing out that your performance on the raid-5 can change for
> the better if you avoid having the stripe size be a power of 2. This is
> especially true if the number of disks is also a power of 2.
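[The effect above is easy to see with a toy model. The numbers here are assumptions, not from the thread: a UFS-like metadata block every 32768 sectors, 512-byte sectors, and the simple striped-layout mapping of offset to disk.]

```python
# Sketch: why a power-of-2 stripe on a power-of-2 disk count can
# hot-spot one drive. All figures are hypothetical illustrations.

def disk_for_offset(offset, stripe, ndisks):
    """Map a volume offset (in sectors) to the disk holding it
    in a simple striped layout."""
    return (offset // stripe) % ndisks

CG_SPACING = 32768  # assumed UFS-like cylinder-group spacing, sectors
offsets = [k * CG_SPACING for k in range(16)]

# 1024-sector (512 KB) stripe on 4 disks: every metadata block
# lands on the same disk, since 32768 is a multiple of 1024 * 4.
hot = {disk_for_offset(o, 1024, 4) for o in offsets}

# 958-sector (479 KB) stripe: the mapping drifts, so the same
# metadata blocks spread across all 4 disks.
spread = {disk_for_offset(o, 958, 4) for o in offsets}

print(hot, spread)
```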
I read that somewhere, but then every example shows 256k as the stripe
size :( Now, with a 5-drive RAID5 array (which I'll be moving that
server to over the next couple of weeks), is 256k still an issue, or is
there something better I should set it to?
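[With five drives the disk count itself is no longer a power of two, so a 256k stripe is less likely to hot-spot a single drive; even so, the usual vinum advice is to pick an odd stripe size such as 479k. A sketch of the 5-drive version of the quoted config file, with the fifth drive name assumed:]

# hypothetical 5-drive raid5 config, non-power-of-2 stripe size
drive d0 device /dev/da1s1a
drive d1 device /dev/da2s1a
drive d2 device /dev/da3s1a
drive d3 device /dev/da4s1a
drive d4 device /dev/da5s1a
volume vm
 plex org raid5 479k
 sd length 0 drive d0
 sd length 0 drive d1
 sd length 0 drive d2
 sd length 0 drive d3
 sd length 0 drive d4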
----
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: scrappy at hub.org Yahoo!: yscrappy ICQ: 7615664