Adding to a zpool -- different redundancies and risks
Norman Gray
Norman.Gray at glasgow.ac.uk
Thu Dec 12 12:42:51 UTC 2019
David, hello.
On 12 Dec 2019, at 5:11, David Christensen wrote:
> Please post:
>
> 1. The 'zpool create ...' command you used to create the existing pool.
I don't have a note of the exact command, but it would have been
something like

zpool create pool raidz2 da{0,1,2,3,4,5,6,7,8} raidz2 da9 da1{0,1,2,3,4,5,6,7}
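For reference, after the shell's brace expansion, that comes to roughly
(the exact device numbers are from memory):

zpool create pool \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 \
    raidz2 da9 da10 da11 da12 da13 da14 da15 da16 da17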
> 2. The output of 'zpool status' for the existing pool.
# zpool status pool
  pool: pool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not
        support the features. See zpool-features(7) for details.
  scan: none requested
config:
        NAME              STATE     READ WRITE CKSUM
        pool              ONLINE       0     0     0
          raidz2-0        ONLINE       0     0     0
            label/zd032   ONLINE       0     0     0
            label/zd033   ONLINE       0     0     0
            label/zd034   ONLINE       0     0     0
            label/zd035   ONLINE       0     0     0
            label/zd036   ONLINE       0     0     0
            label/zd037   ONLINE       0     0     0
            label/zd038   ONLINE       0     0     0
            label/zd039   ONLINE       0     0     0
            label/zd040   ONLINE       0     0     0
          raidz2-1        ONLINE       0     0     0
            label/zd041   ONLINE       0     0     0
            label/zd042   ONLINE       0     0     0
            label/zd043   ONLINE       0     0     0
            label/zd044   ONLINE       0     0     0
            label/zd045   ONLINE       0     0     0
            label/zd046   ONLINE       0     0     0
            label/zd047   ONLINE       0     0     0
            label/zd048   ONLINE       0     0     0
            label/zd049   ONLINE       0     0     0
errors: No known data errors
#
(Note: since creating the pool, I realised that gpart labels were a Good
Thing, so I exported the pool, labelled the disks, and re-imported it;
hence the difference from the da* names used at pool creation.)
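For completeness, the relabelling went something along these lines (from
memory, so treat it as a sketch: the label/ prefix in the status output
is what glabel(8) produces, and the device-to-label mapping shown here is
illustrative):

zpool export pool
glabel label zd032 da0           # and similarly zd033..zd049 on da1..da17
zpool import -d /dev/label pool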
> 3. The output of 'zpool list' for the existing pool.
# zpool list pool
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pool    98T  75.2T  22.8T        -         -    29%    76%  1.00x  ONLINE  -
> 4. The 'zpool add ...' command you are contemplating.
# zpool add -n pool raidz2 label/zd05{0,1,2,3,4,5}
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses 9-way raidz and new vdev uses 6-way raidz
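If I do go ahead despite that warning, the message suggests it's just a
matter of adding -f (and dropping the -n dry-run flag), i.e. something
like:

zpool add -f pool raidz2 label/zd05{0,1,2,3,4,5}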
The six new disks are 12TB; the 18 original ones 5.5TB.
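As a rough capacity check (ignoring TB/TiB rounding and metadata
overhead): each existing 9-disk raidz2 vdev gives about 7 x 5.5TB =
38.5TB usable, so roughly 77TB across the two (the 98T reported by
'zpool list' is the raw size including parity, 18 x 5.5TB = 99TB), and
the proposed 6-disk raidz2 vdev of 12TB disks would add about 4 x 12TB =
48TB usable.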
> So, you have 24 drives in a 24 drive cage?
That's correct -- the maximum the chassis will take.
> What are your space and performance goals?
Not very explicit: TB/currency-unit as high as possible. Performance: the
bottlenecks are likely to be elsewhere (network, processing power), so
there are no stringent requirements. Though this is a fairly
general-purpose data store, a large fraction of the datasets on the
machine consist of single files of around 10GB each, served via NFS.
> What are your sustainability goals as drives and/or VDEV's fail?
It doesn't have to be high availability, so if I have a drive failure, I
can consider shutting the machine down until a replacement disk arrives
and can be resilvered. This is a mirror of data where the masters are
elsewhere on the planet, so this machine is 'reliable storage but not
backed up' (and the users know this). Thus if I do decide to keep
running with one failed disk in one VDEV, and the worst comes to the
worst and the whole thing explodes... the world won't end. I will be
cross, and users will moan, in either case, but they know this is a
problem that can fundamentally be solved with more money.
I'm sure I could be more sophisticated about this (and any suggestions
are welcome), but unfortunately I don't have as much time to spend on
storage problems as I'd like, so I'd like to avoid creating a setup
which is smarter than I'm able to fix!
Best wishes,
Norman
--
Norman Gray : https://nxg.me.uk
SUPA School of Physics and Astronomy, University of Glasgow, UK