ZFS confusion

krad kraduk at gmail.com
Mon Jan 27 13:13:32 UTC 2014


Neither of these setups is ideal. The best practice for a vdev is 2^n
data drives plus your parity drives. In your case with raidz3 that means
one of:

2 + 3
4 + 3
8 + 3

The first two are far from ideal as the data-to-parity ratios are low, so
8 + 3, i.e. 11 drives per raidz3 vdev, would be optimal. This would fit
nicely with your 26-drive enclosure: you would use two 11-drive raidz3
vdevs, 2 hot spares, and two devices left over for L2ARC/ZIL. It is
probably best to chop up the SSDs, mirror the ZIL and stripe the L2ARC,
assuming you don't want to go down the route of using generic SSDs rather
than write/read-optimized ones.
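The bay arithmetic can be sanity-checked with a trivial shell sketch (the
counts below are just the suggestion above, nothing more):

```shell
# Sanity-check the suggested 26-bay layout: two 11-drive raidz3 vdevs,
# two hot spares, and two SSD slots for ZIL/L2ARC.
vdevs=2; drives_per_vdev=11; spares=2; ssds=2
echo $(( vdevs * drives_per_vdev + spares + ssds ))   # prints 26
```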

The reason you want 2^n data drives is that each block/record then stands
a chance of being broken up into equal chunks and striped neatly across
the drives.

9 drives would give bad numbers with raidz3, as any record would be split
across 6 data drives, and 128 / 6 = 21.333... KiB is not ideal. For plain
raidz, however, 9 drives is fine, as it gives you 8 data drives.
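A quick way to see the difference, assuming the default 128 KiB recordsize:

```shell
# A 128 KiB record striped over the data drives (parity excluded):
# 8 data drives divide evenly, 6 do not.
awk 'BEGIN {
    printf "8 data drives: %.2f KiB per drive\n", 128 / 8
    printf "6 data drives: %.2f KiB per drive\n", 128 / 6
}'
```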

On 27 January 2014 12:39, Kaya Saman <kayasaman at gmail.com> wrote:

> On 01/27/2014 12:12 PM, Trond Endrestøl wrote:
>
>> On Mon, 27 Jan 2014 11:42-0000, Kaya Saman wrote:
>>
>>  Many thanks everyone (Trond, Dennis, Steve)!!
>>>
>>> So RAIDz2 or 3 is going to be preferred per the advice given.
>>>
>>> Now I just need to figure out how to make that work best with my
>>> current block of 5 disks... perhaps wait for a while, then add some
>>> more disks into the mix, then create the raidz(x) platform?
>>>
>>> It would be really good if raidz were expandable, i.e. by adding
>>> extra 'new' disks into the same vdev.
>>>
>> It's there!
>>
>> Try: zpool attach <pool_name> <existing_member> <new_member1>
>> [new_member2 ...]
>>
>>
>>
> Yep, though of course as I'm currently just testing with temp files, I
> was unable to make it work:
>
> zpool attach test_pool /tmp/disk1 /tmp/disk6
> cannot attach /tmp/disk6 to /tmp/disk1: can only attach to mirrors and
> top-level disks
>
> This is ok though as my block of 5 disks arrive tomorrow with the chassis
> arriving either tomorrow or day after that.
>
>
> Does 'attaching' create a hybrid raidz2/3 + 1 array, or am I confusing
> things again?
>
> Being familiar with raid 1 and 0, I know that the attach command will
> simply mirror a disk:
>
> zpool attach <pool> disk1 disk2
>
> would create a raid1 mirror between the disks... is this the same
> principle or does the command function differently in raidz?
>
>
> Just checking out the man page gives this:
>
>
>      zpool attach [-f] pool device new_device
>
>          Attaches new_device to an existing zpool device. The existing
>          device cannot be part of a raidz configuration. If device is not
>          currently part of a mirrored configuration, device automatically
>          transforms into a two-way mirror of device and new_device. If
>          device is part of a two-way mirror, attaching new_device creates
>          a three-way mirror, and so on. In either case, new_device begins
>          to resilver immediately.
>
> -- unless things have been updated since my FBSD version?
>
>
> Regards,
>
>
> Kaya
>
> _______________________________________________
> freebsd-questions at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to "freebsd-questions-
> unsubscribe at freebsd.org"
>
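As the quoted man page says, attach only works on mirrors and single
top-level disks, so a raidz vdev cannot be widened after creation. The
supported way to grow a raidz pool is to add a whole new top-level raidz
vdev with zpool add. A rough sketch, reusing the file-backed test-disk
paths from the thread above (illustrative only — it needs a real ZFS
system and enough devices):

```shell
# Grow a raidz pool by adding a second top-level raidz vdev (the
# /tmp/diskN paths are the file-backed test devices used earlier).
zpool add test_pool raidz2 /tmp/disk6 /tmp/disk7 /tmp/disk8 /tmp/disk9
# The pool now stripes writes across both raidz2 top-level vdevs:
zpool status test_pool
```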

