svn commit: r192194 - in head/sys: boot/i386/zfsboot boot/zfs cddl/boot/zfs
Pegasus Mc Cleaft
ken at mthelicon.com
Sun May 17 11:43:26 UTC 2009
On Saturday 16 May 2009 18:47:16 Doug Rabson wrote:
> On 16 May 2009, at 19:35, Pegasus Mc Cleaft wrote:
> > On Saturday 16 May 2009 10:48:20 Doug Rabson wrote:
> >> Author: dfr
> >> Date: Sat May 16 10:48:20 2009
> >> New Revision: 192194
> >> URL: http://svn.freebsd.org/changeset/base/192194
> >>
> >> Log:
> >> Add support for booting from raidz1 and raidz2 pools.
> >>
> >> Modified:
> >> head/sys/boot/i386/zfsboot/zfsboot.c
> >> head/sys/boot/zfs/zfsimpl.c
> >> head/sys/cddl/boot/zfs/README
> >> head/sys/cddl/boot/zfs/zfsimpl.h
> >> head/sys/cddl/boot/zfs/zfssubr.c
> >
> > I think there may be a bug when you boot the machine from a drive
> > that is a member of a zfs-mirror and you have raidz pools elsewhere.
> >
> > On reboot, I would get a message saying there was no bootable kernel,
> > and it dropped me down to the "OK" prompt. At that point, lsdev would
> > show all the pools (both the zfs-mirror and the raidz pools) and "ls"
> > would return an error saying there were too many open files.
> >
> > I was able to work around the problem by pulling all the drives in
> > the raidz pool, booting into single user mode, reinserting the drives
> > and using atacontrol attach to bring them online before going to
> > multi-user and running /etc/rc.d/zfs start.
> >
> > The only thing I haven't tried, and which may be the key to the
> > problem, is reloading the bootstrap on the bootable drives. Would
> > that make any difference?
>
> I'm not sure but it can't hurt. The part of the bootstrap that runs
> before /boot/loader (e.g. gptzfsboot) also has access to all the pools
> in the system (at least the ones where the drives are visible to the
> BIOS). It should figure out which pool contains the drive that was
> actually booted and load /boot/loader from that. It should also pass
> the identity of that pool down to /boot/loader so that the process
> continues with the correct pool.
>
Naww.. still no joy with that. I updated the boot drives with the latest
gptzfsloader this morning and got the same results when I rebooted. The
system still thinks there are no loadable kernels until I remove all the
zpool drives from the machine and reboot. Once I get the "BSD Daemon"
screen and the kernel starts to load, I can quickly slap the caddies back
into the machine; they are detected when polled and everything is OK.
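For reference, the bootstrap update was along these lines (a sketch only; the disk names ad10/ad14/ad16 match my boot mirror below, and the freebsd-boot partition index is assumed to be 1 on each disk, so adjust both for your layout):

```shell
# Rewrite the protective MBR and the GPT ZFS boot code on each
# drive that carries the boot mirror. Index 1 is assumed to be
# the freebsd-boot partition on these disks.
for disk in ad10 ad14 ad16; do
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${disk}
done
```
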
I currently have the pools set up as follows:
feathers$ zpool status
  pool: PegaBackup
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        PegaBackup    ONLINE       0     0     0
          ad10p4      ONLINE       0     0     0

errors: No known data errors

  pool: PegaBase
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        PegaBase      ONLINE       0     0     0
          raidz1      ONLINE       0     0     0
            ad26      ONLINE       0     0     0
            ad30      ONLINE       0     0     0
            ad28      ONLINE       0     0     0
            ad24      ONLINE       0     0     0
            ad22      ONLINE       0     0     0
            ad20      ONLINE       0     0     0
        cache
          ad16p4      ONLINE       0     0     0
          ad14p4      ONLINE       0     0     0

errors: No known data errors

  pool: PegaBoot2
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        PegaBoot2     ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            ad10p3    ONLINE       0     0     0
            ad14p3    ONLINE       0     0     0
            ad16p3    ONLINE       0     0     0

errors: No known data errors
More information about the freebsd-current mailing list