ZFS confusion
Kaya Saman
kayasaman at gmail.com
Mon Jan 27 18:15:32 UTC 2014
Many thanks I really appreciate the advice :-)
Best Regards,
Kaya
On 01/27/2014 04:52 PM, krad wrote:
> Look into under-provisioning the SSD drives as well; this can preserve
> write performance in the long term and reduce write wear. Looking at
> the number of drives and the general spec of what you are putting
> together, I would try to stretch to 256 GB SSDs but only provision
> them to use, say, 128-160 GB of the capacity.
>
> I'm not 100% sure this is still necessary now that TRIM support is
> much better under ZFS, but here is how I did my SSD drives under
> Linux. You may well be able to do it under FreeBSD, but I haven't
> figured out how.
>
> root@ubuntu-10-10:~# hdparm -N /dev/sdb
>
> /dev/sdb:
> max sectors = 312581808/312581808, HPA is disabled
>
> root@ubuntu-10-10:~# hdparm -Np281323627 /dev/sdb
>
> /dev/sdb:
> setting max visible sectors to 281323627 (permanent)
> Use of -Nnnnnn is VERY DANGEROUS.
> You have requested reducing the apparent size of the drive.
> This is a BAD idea, and can easily destroy all of the drive's contents.
> Please supply the --yes-i-know-what-i-am-doing flag if you really want this.
> Program aborted.
>
> root@ubuntu-10-10:~# hdparm -Np281323627 --yes-i-know-what-i-am-doing /dev/sdb
>
> /dev/sdb:
> setting max visible sectors to 281323627 (permanent)
> max sectors = 281323627/312581808, HPA is enabled
>
> root@ubuntu-10-10:~#
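For reference, the reduced sector count in the session above is simply 90% of the drive's native capacity (281323627 / 312581808 ≈ 0.90). A minimal sketch of that arithmetic, assuming you also want to leave roughly 10% unprovisioned:

```python
# Sketch: compute a reduced "max visible sectors" value for hdparm -Np.
# The 0.90 fraction is an assumption matching the numbers shown above;
# pick whatever fraction of the drive you want to leave unprovisioned.
def underprovision_sectors(native_sectors: int, fraction: float = 0.90) -> int:
    """Return the sector count to pass to hdparm -Np for a given fraction."""
    return int(native_sectors * fraction)

native = 312581808                    # from 'hdparm -N /dev/sdb' above
print(underprovision_sectors(native)) # -> 281323627
```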
>
>
> On 27 January 2014 13:56, Kaya Saman <kayasaman at gmail.com
> <mailto:kayasaman at gmail.com>> wrote:
>
> Many thanks for the explanation :-)
>
>
> On 01/27/2014 01:13 PM, krad wrote:
>
> Neither of these setups is ideal. The best practice for your
> vdev is to use 2^n data drives plus your parity drives.
> This means in your case with raidz3 you would do something like:
>
> 2 + 3
> 4 + 3
> 8 + 3
>
> The first two are far from ideal as the data-to-parity ratios
> are low, so 8 + 3, i.e. 11 drives per raidz3 vdev, would be
> optimal. This would fit nicely with your 26-drive enclosure, as
> you would use 2x 11-drive raidz3 vdevs, 2 hot spares, and two
> devices left for L2ARC/ZIL. It's probably best to chop up the
> SSDs, mirror the ZIL and stripe the L2ARC, assuming you don't
> want to go down the route of using generic SSDs rather than
> write/read-optimised ones.
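That "2^n data drives + parity" rule of thumb is easy to enumerate. A small sketch (parity = 3 for raidz3; the 16-drive cap is just an assumption to bound the list):

```python
# Enumerate candidate raidz vdev widths of the form 2**n + parity.
# For raidz3 (parity=3) this reproduces the 2+3, 4+3, 8+3 options above.
def raidz_widths(parity: int, max_width: int = 16) -> list[int]:
    """Return total vdev widths (data + parity drives) up to max_width."""
    widths = []
    n = 1
    while 2 ** n + parity <= max_width:
        widths.append(2 ** n + parity)
        n += 1
    return widths

print(raidz_widths(3))  # -> [5, 7, 11]
```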
>
>
> Yep, I was going to use your suggestion for L2ARC/ZIL on 2x 128 GB
> Corsair Force Series GS 2.5" drives, which have quite good
> read/write speeds. I also use these on other servers and they
> tend to be quite good and reliable.
>
> I think the way to create a mirrored ZIL and striped L2ARC would
> be to use GPT to partition the drives, then use the ZFS features
> across the partitions.
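A rough sketch of what that could look like on FreeBSD. The device names (ada1, ada2), partition sizes, labels, and the pool name "tank" are all assumptions here; adjust for your hardware:

```shell
# Assumption: ada1 and ada2 are the two SSDs, "tank" is the existing pool.
# Partition each SSD: a small slice for the ZIL, the remainder for L2ARC.
gpart create -s gpt ada1
gpart create -s gpt ada2
gpart add -t freebsd-zfs -s 8G -l zil0 ada1
gpart add -t freebsd-zfs -s 8G -l zil1 ada2
gpart add -t freebsd-zfs -l l2arc0 ada1
gpart add -t freebsd-zfs -l l2arc1 ada2

# Mirror the ZIL across both SSDs; the two cache devices stripe by default.
zpool add tank log mirror gpt/zil0 gpt/zil1
zpool add tank cache gpt/l2arc0 gpt/l2arc1
```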
>
>
> Hmm... so it also looks like I'm going to have to wait a while
> for some more drives in order to create an 11-disk raidz3 pool.
>
>
> But at least things will be done properly and in a good manner,
> rather than going down a path of no return.
>
>
>
>
> Regards,
>
>
> Kaya
>
>
More information about the freebsd-questions mailing list