Vinum configuration lost at vinum stop / start
Kim Helenius
tristan at cc.jyu.fi
Thu Nov 11 06:53:45 PST 2004
Stijn Hoop wrote:
>>>>Greetings. I posted earlier about problems with vinum raid5 but it
>>>>appears it's not restricted to that.
>>>
>>>Are you running regular vinum on 5.x? It is known broken. Please use
>>>'gvinum' instead.
>>>
>>>There is one caveat: the gvinum that shipped with 5.3-RELEASE contains an
>>>error in RAID-5 initialization. If you really need RAID-5 you either need
>>>to wait for the first patch level release of 5.3, or you can build
>>>RELENG_5 from source yourself. The fix went in on 2004-11-07.
>>
>>Thank you for your answer. I tested a normal concat with both 5.2.1-RELEASE and
>>5.3-RELEASE, with similar results. Plenty of people (at least that's the
>>impression I get after browsing several mailing lists and websites) have
>>working vinum setups on 5.2.1 (where gvinum doesn't exist), so there's
>>definitely something I'm doing wrong here. So my problem is not limited to raid5.
>
>
> I don't know the state of affairs for 5.2.1-RELEASE, but in 5.3-RELEASE gvinum
> is the way forward.
Thanks again for answering. Agreed, but there still seems to be a long
way to go. A lot of 'classic' vinum functionality is still missing, and
at least for me it still doesn't behave in a way I would consider
trustworthy. See below.
>>I'm aware of gvinum and the bug and actually tried to cvsup & make world
>>last night but it didn't succeed due to some missing files in netgraph
>>dirs. I will try again tonight.
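For reference, the rebuild attempt used the stock RELENG_5 supfile, roughly
like this (paths are the standard ones; the exact supfile I edited may have
differed slightly):

```
# based on /usr/share/examples/cvsup/stable-supfile
*default host=cvsup.FreeBSD.org
*default base=/var/db
*default prefix=/usr
*default release=cvs tag=RELENG_5
*default delete use-rel-suffix
src-all
```

followed by the usual 'make buildworld && make buildkernel' in /usr/src.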
I tested gvinum, with some interesting results. First the whole system
froze after creating a concatenated drive and trying to remove the
objects recursively (gvinum has no resetconfig command). Next, I created
the volume again, ran newfs, and copied some data onto it. Then I
rebooted and issued 'gvinum start'. This is what follows:
2 drives:
D d1            State: up     /dev/ad4s1d  A: 285894/286181 MB (99%)
D d2            State: up     /dev/ad5s1d  A: 285894/286181 MB (99%)

1 volume:
V vinum0        State: down   Plexes: 1    Size: 572 MB

1 plex:
P vinum0.p0   C State: down   Subdisks: 2  Size: 572 MB

2 subdisks:
S vinum0.p0.s0  State: stale  D: d1        Size: 286 MB
S vinum0.p0.s1  State: stale  D: d2        Size: 286 MB
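For reference, the concat volume was created from a config file along these
lines (a sketch from memory; the exact file I used may have differed):

```
drive d1 device /dev/ad4s1d
drive d2 device /dev/ad5s1d
volume vinum0
  plex org concat
    sd drive d1
    sd drive d2
```

fed to 'gvinum create <file>'.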
I'm getting a bit confused. Issuing separately 'gvinum start vinum0'
does seem to fix it (all states go 'up') but surely it should come up
fine with just 'gvinum start'? This is how I would start it in loader.conf.
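In case it helps, the boot-time pieces here look like this (putting the
workaround in rc.local is my own assumption, not a recommendation):

```
# /boot/loader.conf -- load the geom_vinum module at boot
geom_vinum_load="YES"

# /etc/rc.local -- workaround: bring the volume up explicitly
gvinum start vinum0
```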
> OK, I think that will help you out. But the strange thing is, RELENG_5 should
> be buildable. Are you sure you are getting that?
>
> Have you read
>
> http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/current-stable.html
>
> Particularly the 19.2.2 section, 'Staying stable with FreeBSD'?
>
I have read it, and I tracked -STABLE on 4.x. Reading it carefully, the
point is that -STABLE does not equal "stable", which is why I stopped
tracking -STABLE in the first place. And knowing I would only need it to
fix the RAID-5 initialization, I'm reluctant to track it again when I
found out I can't even create a concat volume correctly.
--
Kim Helenius
tristan at cc.jyu.fi