Problems replacing failing drive in ZFS pool

Charles Sprickman spork at bway.net
Wed Jul 21 06:43:13 UTC 2010


On Tue, 20 Jul 2010, alan bryan wrote:

>
>
> --- On Mon, 7/19/10, Dan Langille <dan at langille.org> wrote:
>
>> From: Dan Langille <dan at langille.org>
>> Subject: Re: Problems replacing failing drive in ZFS pool
>> To: "Freddie Cash" <fjwcash at gmail.com>
>> Cc: "freebsd-stable" <freebsd-stable at freebsd.org>
>> Date: Monday, July 19, 2010, 7:07 PM
>> On 7/19/2010 12:15 PM, Freddie Cash wrote:
>> > On Mon, Jul 19, 2010 at 8:56 AM, Garrett Moore <garrettmoore at gmail.com> wrote:
>> >> So you think it's because when I switch from the old disk to the
>> >> new disk, ZFS doesn't realize the disk has changed, and thinks the
>> >> data is just corrupt now?  Even if that happens, shouldn't the pool
>> >> still be available, since it's RAIDZ1 and only one disk has gone
>> >> away?
>> > 
>> > I think it's because you pull the old drive, boot with the new
>> > drive, the controller re-numbers all the devices (ie da3 is now da2,
>> > da2 is now da1, da1 is now da0, da0 is now da6, etc), and ZFS thinks
>> > that all the drives have changed, thus corrupting the pool.  I've
>> > had this happen on our storage servers a couple of times before I
>> > started using glabel(8) on all our drives (dead drive on RAID
>> > controller, remove drive, reboot for whatever reason, all device
>> > nodes are renumbered, everything goes kablooey).
>> 
>> Can you explain a bit about how you use glabel(8) in conjunction with
>> ZFS?  If I can retrofit this into an existing ZFS array, it would make
>> things easier in the future...
>> 
>> 8.0-STABLE #0: Fri Mar  5 00:46:11 EST 2010
>> 
>> ]# zpool status
>>   pool: storage
>>  state: ONLINE
>>  scrub: none requested
>> config:
>>
>>         NAME        STATE     READ WRITE CKSUM
>>         storage     ONLINE       0     0     0
>>           raidz1    ONLINE       0     0     0
>>             ad8     ONLINE       0     0     0
>>             ad10    ONLINE       0     0     0
>>             ad12    ONLINE       0     0     0
>>             ad14    ONLINE       0     0     0
>>             ad16    ONLINE       0     0     0
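
For a pool built from scratch, the glabel route looks roughly like this
against the layout above (a sketch; the "diskN" label names are made
up, and glabel(8) keeps its metadata in each disk's last sector, so
label disks *before* handing them to ZFS, not while they're live pool
members):

   # glabel label disk0 ad8
   # glabel label disk1 ad10
   # glabel label disk2 ad12
   # glabel label disk3 ad14
   # glabel label disk4 ad16
   # zpool create storage raidz1 label/disk0 label/disk1 label/disk2 \
       label/disk3 label/disk4

From then on the pool members are /dev/label/disk*, and it no longer
matters how the controller renumbers the adX devices across reboots.
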
>> 
>> > Of course, always have good backups.  ;)
>> 
>> In my case, this ZFS array is the backup.  ;)
>> 
>> But I'm setting up a tape library, real soon now....
>> 
>> -- Dan Langille - http://langille.org/
>> 
>
> Dan,
>
> Here's how to do it after the fact:
>
> http://unix.derkeiler.com/Mailing-Lists/FreeBSD/current/2009-07/msg00623.html
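
(If that's the post I'm remembering, the gist is a rolling replace, one
disk at a time, waiting for each resilver to finish.  Something like:

   # zpool offline storage ad8
   # glabel label disk0 ad8
   # zpool replace storage ad8 label/disk0

Treat that as a sketch rather than gospel: glabel steals the provider's
last sector, so the relabeled disk comes back one sector smaller, and
it's worth confirming ZFS will accept that before trying it on a live
pool.)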

Two things:

-What's the preferred labelling method for disks that will be used with
ZFS these days?  geom_label or gpt labels?  I've been using the latter
and find them a little simpler (the two are compared in the sketch
below).

-I think that if you're already using GPT partitioning, you can add a
GPT label after the fact (ie: gpart modify -i index# -l your_label
adaX; see the sketch below).  "gpart list" will show you the index
numbers.

Charles

> --Alan Bryan
>

