Adding to a zpool -- different redundancies and risks

Norman Gray Norman.Gray at glasgow.ac.uk
Fri Dec 13 14:49:53 UTC 2019


David, hello.

On 13 Dec 2019, at 4:49, David Christensen wrote:

> On 2019-12-12 04:42, Norman Gray wrote:

>
> So, two raidz2 vdev's of nine 5.5 TB drives each, striped into one 
> pool.  Each vdev can store 7 * 5.5 = 38.5 TB and the pool can store 
> 38.5 + 38.5 = 77 TB.

>>> 3.  The output of 'zpool list' for the existing pool.
>>
>> # zpool list pool
>> NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
>> pool    98T  75.2T  22.8T        -         -    29%    76%  1.00x  ONLINE  -
>
> So, your pool is 75.2 TB / 77 TB = 97.7% full.

Well, I have compression turned on, so I take it that the 98TB quoted 
here is an estimate of the capacity once compression is taken into 
account, and that the 76% quoted in this output is the effective 
capacity used -- i.e. alloc/size.

The zpool(8) manpage documents these two properties as

      alloc       Amount of storage space within the pool that has been
                  physically allocated.

      capacity    Percentage of pool space used. This property can also be
                  referred to by its shortened column name, "cap".

      size        Total size of the storage pool.

The term 'physically allocated' is a bit confusing.  I'm guessing that 
it takes compression into account, rather than being a raw count of 
bytes-in-sectors.

I could be misinterpreting this output, though.
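If I wanted to check how much compression is actually contributing, I 
suppose I could ask ZFS directly with something like the following 
(the properties are documented in zfs(8); I haven't pasted output 
here):

    # zpool list -v pool
    # zfs get compressratio,logicalused,used pool

Comparing 'logicalused' with 'used' (or just looking at 
'compressratio') should show how the data as written compares with the 
space it occupies on disk.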

>>> 4.  The 'zpool add ...' command you are contemplating.
>>
>> # zpool add -n pool raidz2 label/zd05{0,1,2,3,4,5}
>> invalid vdev specification
>> use '-f' to override the following errors:
>> mismatched replication level: pool uses 9-way raidz and new vdev uses
>> 6-way raidz
>
> I believe your understanding of the warning is correct -- ZFS is 
> saying that the added raidz2 vdev does not have the same number of 
> drives (six) as the two existing raidz2 vdevs (nine drives each).


> I believe that if you gave the -f option to 'zpool add', the six 12 TB 
> drives would be formed into a raidz2 vdev and this new vdev would be 
> striped onto your existing pool.

Yes, that's what I'd expect.   My concern is about the extent to which I 
should be comfortable overriding the warning this would give me.

My feeling is that I should be comfortable, but as the manpage stresses, 
there isn't an 'undo' here....
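(For concreteness, the override would presumably just be the same 
command with the flag the error message suggests:

    # zpool add -f pool raidz2 label/zd05{0,1,2,3,4,5}

and my understanding is that, once added, a raidz vdev can't later be 
removed from the pool.)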

> The pool would then have a total capacity of 38.5 + 38.5 + 48 = 125 TB

Yes, 125TB raw capacity which, with compression, would translate to some 
larger amount of effective capacity.

> That said, read this article:
>
> https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/

Thanks for the reminder of this.  I'm familiar with that article, and 
it's an interesting point of view.  I don't find it completely 
convincing, though, since I'm not convinced that the speed of 
resilvering fully compensates for the less than 100% probability of 
surviving two disk failures.  In the last couple of years I've had 
problems with water ingress over a rack, and with a failed AC which 
baked a room, so that failure modes which affect multiple disks 
simultaneously are fairly prominent in my thinking about this sort of 
issue.  Poisson failures are not the only mode to worry about!

> AIUI this architecture has another benefit -- incremental pool growth. 
> You replace one 5.5 TB drive in a mirror with a 12 TB drive, resilver, 
> replace the other 5.5 TB drive in the same mirror with another 12 TB 
> drive, resilver, and now the pool is 6.5 TB larger.  In the long run, 
> you end up with twenty-four 12 TB drives (144 TB pool).  The process 
> could then be repeated (or preempted) using even bigger drives.

What you say is true, and attractive in principle, but I think I'm 
unlikely to (want to) grow storage in that way in practice.
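(For the archives: as I understand it, the drive-at-a-time growth you 
describe would look roughly like the following, where label/old0, 
label/new0 and so on are purely placeholder device names:

    # zpool set autoexpand=on pool
    # zpool replace pool label/old0 label/new0
    (wait for the resilver to complete)
    # zpool replace pool label/old1 label/new1

and once both halves of a mirror are on the larger drives, the extra 
space becomes available to the pool.)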

Best wishes,

Norman


-- 
Norman Gray  :  https://nxg.me.uk
SUPA School of Physics and Astronomy, University of Glasgow, UK

