ZFS With Gpart partitions
Dan Carroll
fbsd at dannysplace.net
Sun Jan 1 11:27:30 UTC 2012
Hello all,
I'm currently trying to fix a suspect drive and I've run into a small
problem.
I was wondering if someone could shed some light on how gpart works
(when using labels for partitions).
My drives are 2TB WD RE4s. Originally the array used 1TB Seagate
drives, and I was replacing about three of those a year, but since I
migrated to the RE4s this is my first problem.
Here is my setup.
NAME              STATE     READ WRITE CKSUM
areca             ONLINE       0     0     0
  raidz1          ONLINE       0     0     0
    gpt/data0     ONLINE       0     0     0
    gpt/data1     ONLINE       0     0     0
    gpt/data2     ONLINE       0     0     0
    gpt/data3     ONLINE     103     0     0
    gpt/data4     ONLINE       0     0     0
    gpt/data5     ONLINE       0     0     0
  raidz1          ONLINE       0     0     0
    gpt/data6     ONLINE       0     0     0
    gpt/data7     ONLINE       0     0     0
    gpt/data8     ONLINE       0     0     0
    gpt/data9     ONLINE       0     0     0
    gpt/data10    ONLINE       0     0     0
    gpt/data11    ONLINE       0     0     0
errors: No known data errors
The drives are connected via an Areca controller; each drive is configured
as a pass-through device (just like JBOD, but also using the controller's
cache and BBU).
So, my problem began when I tried to replace gpt/data3.
Here is what I did.
# zpool offline areca gpt/data3
# shutdown -p now
(I could not remember the camcontrol commands to detach a device, and
shutting down was not an issue, so that's the way I did it.)
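(For what it's worth, I think the camcontrol route would have been roughly
the following, substituting the right daN for the failing disk; I have not
actually tried this against the Areca pass-through devices.)
# camcontrol devlist
# camcontrol stop daN
(swap the drive)
# camcontrol rescan all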
I replaced the failing drive, re-created the pass-through device in the
Areca console, and powered back on.
All good so far, except that the drive I used as a replacement came from a
decommissioned server and already had a gpart label on it.
As it happens, it was labelled data2.
I quickly shut down the system, took the new drive out, put it into
another machine and wiped the first few megabytes of the disk with dd.
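The wipe was just something along these lines (ada1 stands in for whatever
the disk showed up as on the other box):
# dd if=/dev/zero of=/dev/ada1 bs=1m count=16
In hindsight, since GPT keeps a backup copy of the table at the end of the
disk, "gpart destroy -F ada1" would probably have been the cleaner way to
clear it.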
I re-inserted the drive, re-created the pass-through, powered up, and
replaced the offlined drive.
Now it's resilvering.
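For completeness, the replacement itself was along these lines (da9 just
stands in for whatever device name the new disk came up as, and the exact
syntax may not be letter-perfect):
# gpart create -s gpt da9
# gpart add -t freebsd-zfs -l data3 da9
# zpool replace areca gpt/data3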
Currently, my system looks like this:
NAME                 STATE     READ WRITE CKSUM
areca                DEGRADED     0     0     0
  raidz1             DEGRADED     0     0     0
    gpt/data0        ONLINE       0     0     0
    gpt/data1        ONLINE       0     0     0
    da8p1            ONLINE       0     0     0
    replacing        DEGRADED     0     0     0
      gpt/data3/old  OFFLINE      0     0     0
      gpt/data3      ONLINE       0     0     0  931G resilvered
    gpt/data4        ONLINE       0     0     0
    gpt/data5        ONLINE       0     0     0
  raidz1             ONLINE       0     0     0
    gpt/data6        ONLINE       0     0     0
    gpt/data7        ONLINE       0     0     0
    gpt/data8        ONLINE       0     0     0
    gpt/data9        ONLINE       0     0     0
    gpt/data10       ONLINE       0     0     0
    gpt/data11       ONLINE       0     0     0
The resilvering looks like it's working fine, but I am curious about the
gpart label. When I query da8p1, I cannot find it:
# gpart show da8
=>        34  3907029101  da8  GPT  (1.8T)
          34  3907029101    1  freebsd-zfs  (1.8T)
# glabel list da8p1
glabel: No such geom: da8p1.
It should look like this:
# gpart show da0
=>        34  3907029101  da0  GPT  (1.8T)
          34  3907029101    1  freebsd-zfs  (1.8T)
# glabel list da0p1
Geom name: da0p1
Providers:
1. Name: gpt/data0
   Mediasize: 2000398899712 (1.8T)
   Sectorsize: 512
   Mode: r1w1e1
   secoffset: 0
   offset: 0
   seclength: 3907029101
   length: 2000398899712
   index: 0
Consumers:
1. Name: da0p1
   Mediasize: 2000398899712 (1.8T)
   Sectorsize: 512
   Mode: r1w1e2
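One thing I realise I can check is whether the label is actually gone from
the GPT itself, rather than glabel just not creating the gpt/ provider; if
I have the flag right, this should print the partition labels in place of
the types:
# gpart show -l da8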
So it seems to me that when I inserted the second drive with a label
called data2, it wiped the label from the *original* drive.
ZFS does not seem to care about this. If the label is simply a label
and losing it does not alter the user data on the drive, then this makes
sense.
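My understanding (and I'm happy to be corrected) is that ZFS identifies
each vdev by the GUID stored in its own on-disk labels rather than by the
GEOM name, which would explain why it simply carried on using da8p1.
Something like this should confirm those ZFS labels are still intact:
# zdb -l /dev/da8p1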
I am wondering if I can simply re-label the partition without fear of
breaking something. Reading the glabel man page, I suspect it may be OK.
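Assuming it is, I take it the re-label would just be (index 1 on da8,
putting back the data2 name it had before):
# gpart modify -i 1 -l data2 da8
Since the GPT label lives in the partition table entry rather than in the
partition's data area, I would not expect this to touch anything ZFS cares
about, but I'd appreciate confirmation.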
-D