ZFS pool not working on boot
Adam Jacob Muller
freebsd-fs at adam.gs
Wed Sep 19 15:13:18 PDT 2007
On Sep 19, 2007, at 4:25 AM, Wilkinson, Alex wrote:
> On Wed, Sep 19, 2007 at 03:24:25AM -0400, Adam Jacob Muller wrote:
>
>> I have a server with two ZFS pools, one is an internal raid0 using
>> 2 drives
>> connected via ahc. The other is an external storage array with 11
>> drives
>> also using ahc, using raidz. (This is a dell 1650 and pv220s).
>> On reboot, the pools do not come online on their own. Both pools
>> consistently show as failed.
>
> Make sure your hostid doesn't change. If it does, then ZFS will
> fail upon bootstrap.
>
> -aW
>
No, the hostid is not changing; I just rebooted and reproduced the
problem. Also, from reading the ZFS docs, it seems the symptom of a
changed hostid would be that the pool simply needs to be imported
again?

After another reboot, I see this:
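One quick way to rule the hostid out is to record it before a reboot and compare it afterwards (a sketch using the hostid(1) utility; on FreeBSD the same value is also visible via `sysctl kern.hostid`):

```shell
# Record the hostid before rebooting
hostid > /var/tmp/hostid.before

# ... reboot the machine here ...

# After boot, compare. A changed hostid would make ZFS treat the pool
# as foreign (needing a re-import), which is a different failure mode
# from the UNAVAIL / "cannot open" state seen here.
if [ "$(hostid)" = "$(cat /var/tmp/hostid.before)" ]; then
    echo "hostid unchanged"
else
    echo "hostid changed"
fi
```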
# zpool status
  pool: tank
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

        NAME        STATE      READ WRITE CKSUM
        tank        UNAVAIL       0     0     0  insufficient replicas
          da1       ONLINE        0     0     0
          da2       UNAVAIL       0     0     0  cannot open
... more output showing the other array with 11 drives is fine
# zpool export tank
# zpool import tank
# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          da1       ONLINE       0     0     0
          da2       ONLINE       0     0     0

errors: No known data errors
(11-drive raidz is fine still of course)
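Since the export/import cycle above reliably brings the pool back, one possible stopgap (untested, and assuming the pool name "tank" from above) would be to run the same cycle from /etc/rc.local once the da devices have attached, until the real cause of the boot-time failure is found:

```shell
#!/bin/sh
# /etc/rc.local -- stopgap only, not a fix: re-import the pool that
# consistently comes up UNAVAIL at boot ("tank" is the pool from above).
zpool export tank 2>/dev/null
zpool import tank && zpool status -x tank
```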