docs/50664: Handbook 12.4.1 (CCD) needs work

Jim Brown jpb at sixshooter.v6.thrupoint.net
Fri Apr 18 03:10:17 UTC 2003


The following reply was made to PR docs/50664; it has been noted by GNATS.

From: Jim Brown <jpb at sixshooter.v6.thrupoint.net>
To: freebsd-gnats-submit at FreeBSD.org, murray at freebsd.org
Cc:  
Subject: Re: docs/50664: Handbook 12.4.1 (CCD) needs work
Date: Thu, 17 Apr 2003 23:06:11 -0400

 Hi,
 
 Here is an update of Handbook 12.4.1 (CCD) per docs/50664.
 
 Hope this helps,
 jpb
 ===
 
 
 Index: chapter.sgml
 ===================================================================
 RCS file: /home/ncvs/doc/en_US.ISO8859-1/books/handbook/disks/chapter.sgml,v
 retrieving revision 1.145
 diff -u -r1.145 chapter.sgml
 --- chapter.sgml	2003/03/26 02:11:55	1.145
 +++ chapter.sgml	2003/04/17 21:50:46
 @@ -342,57 +342,57 @@
  	    <author>
  	      <firstname>Christopher</firstname>
  	      <surname>Shumway</surname>
 -	      <contrib>Written by </contrib>
 +	      <contrib>Original work by </contrib>
  	    </author>
  	  </authorgroup>
  	  <authorgroup>
  	    <author>
  	      <firstname>Valentino</firstname>
  	      <surname>Vaschetto</surname>
 -	      <contrib>Marked up by </contrib>
 +	      <contrib>Original markup by </contrib>
  	    </author>
  	  </authorgroup>
 +	  <authorgroup>
 +	    <author>
 +	      <firstname>Jim</firstname>
 +	      <surname>Brown</surname>
 +	      <contrib>Revised by </contrib>
 +	    </author>
 +	  </authorgroup>
  	</sect3info>
  
 -	<title>ccd (Concatenated Disk Configuration)</title>
 +	<title>Concatenated Disk Driver (CCD) Configuration</title>
  	<para>When choosing a mass storage solution the most important
 -	  factors to consider are speed, reliability, and cost.  It is very
 -	  rare to have all three in favor; normally a fast, reliable mass
 +	  factors to consider are speed, reliability, and cost.  It is 
 +	  rare to have all three in balance; normally a fast, reliable mass
  	  storage device is expensive, and to cut back on cost either speed
 -	  or reliability must be sacrificed.  In designing my system, I
 -	  ranked the requirements by most favorable to least favorable.  In
 -	  this situation, cost was the biggest factor.  I needed a lot of
 -	  storage for a reasonable price.  The next factor, speed, is not
 -	  quite as important, since most of the usage would be over a one
 -	  hundred megabit switched Ethernet, and that would most likely be
 -	  the bottleneck.  The ability to spread the file input/output
 -	  operations out over several disks would be more than enough speed
 -	  for this network.  Finally, the consideration of reliability was
 -	  an easy one to answer.  All of the data being put on this mass
 -	  storage device was already backed up on CD-R's.  This drive was
 -	  primarily here for online live storage for easy access, so if a
 -	  drive went bad, I could just replace it, rebuild the file system,
 -	  and copy back the data from CD-R's.</para>
 -
 -	<para>To sum it up, I need something that will give me the most
 -	  amount of storage space for my money.  The cost of large IDE disks
 -	  are cheap these days.  I found a place that was selling Western
 -	  Digital 30.7GB 5400 RPM IDE disks for about one-hundred and thirty
 -	  US dollars.  I bought three of them, giving me approximately
 -	  ninety gigabytes of online storage.</para>
 +	  or reliability must be sacrificed.</para>
 +
 +          <para>In designing the system described below, cost was chosen
 +          as the most important factor, followed by speed, then reliability.
 +          Data transfer speed for this system is ultimately
 +          constrained by the network.  And while reliability is very important,
 +          the CCD drive described below serves online data that is already
 +          fully backed up on CD-R's and can easily be replaced.</para>  
 +
 +          <para>Defining your own requirements is the first step
 +          in choosing a mass storage solution.  If your requirements favor
 +          speed or reliability over cost, your solution will differ from
 +          the system described in this section.</para>
 +
  
  	<sect4 id="ccd-installhw">
  	  <title>Installing the Hardware</title>
  
 -	  <para>I installed the hard drives in a system that already
 -	    had one IDE disk in as the system disk.  The ideal solution
 -	    would be for each IDE disk to have its own IDE controller
 -	    and cable, but without fronting more costs to acquire a dual
 -	    IDE controller this would not be a possibility.  So, I
 -	    jumpered two disks as slaves, and one as master.  One went
 -	    on the first IDE controller as a slave to the system disk,
 -	    and the other two where slave/master on the secondary IDE
 -	    controller.</para>
 +	  <para>In addition to the IDE system disk, three Western 
 +            Digital 30GB, 5400 RPM IDE disks form the core
 +            of the CCD disk described below, providing approximately
 +	    90GB of online storage.  Ideally,
 +	    each IDE disk would have its own IDE controller
 +	    and cable, but to minimize cost, additional 
 +	    IDE controllers were not used.  Instead, the disks were
 +	    configured with jumpers so that each IDE controller has
 +            one master and one slave.</para>
  
  	  <para>Upon reboot, the system BIOS was configured to
  	    automatically detect the disks attached.  More importantly,
 @@ -403,74 +403,74 @@
  ad2: 29333MB <WDC WD307AA> [59598/16/63] at ata1-master UDMA33
 ad3: 29333MB <WDC WD307AA> [59598/16/63] at ata1-slave UDMA33</programlisting>
  
 -	  <para>At this point, if FreeBSD does not detect the disks, be
 -	    sure that you have jumpered them correctly.  I have heard
 -	    numerous reports with problems using cable select instead of
 -	    true slave/master configuration.</para>
 -
 -	  <para>The next consideration was how to attach them as part of
 -	    the file system.  I did a little research on &man.vinum.8;
 -	      (<xref linkend="vinum-vinum">) and
 -	      &man.ccd.4;.  In this particular configuration, &man.ccd.4;
 -	    appeared to be a better choice mainly because it has fewer
 -	    parts.  Less parts tends to indicate less chance of breakage.
 -	    Vinum appears to be a bit of an overkill for my needs.</para>
 +	  <para>Note that if FreeBSD does not detect all the disks, ensure that
 +	    you have jumpered them correctly.  Most IDE drives also have a
 +            <quote>Cable Select</quote> jumper.  This is <emphasis>not</emphasis>
 +            the jumper for the master/slave relationship.  Consult the drive
 +            documentation for help in identifying the correct jumper.</para>
 +
 +	  <para>Next, consider how to attach them as part of
 +	    the file system.  You should research both &man.vinum.8;
 +	    (<xref linkend="vinum-vinum">) and &man.ccd.4;.  In this 
 +            particular configuration, &man.ccd.4; was chosen.</para>
  	</sect4>
  
  	<sect4 id="ccd-setup">
  	  <title>Setting up the CCD</title>
  
 -	  <para><application>CCD</application> allows me to take
 +	  <para><application>CCD</application> allows you to take
  	    several identical disks and concatenate them into one
  	    logical file system.  In order to use
 -	    <application>ccd</application>, I need a kernel with
 -	    <application>ccd</application> support built into it.  I
 -	    added this line to my kernel configuration file and rebuilt
 -	    the kernel:</para>
 +	    <application>ccd</application>, you need a kernel with
 +	    <application>ccd</application> support built in.
 +	    Add this line to your kernel configuration file, rebuild, and 
 +	    reinstall the kernel:</para>
  
  	  <programlisting>pseudo-device   ccd     4</programlisting>
  
  	  <note><para>In FreeBSD 5.0, it is not necessary to specify
  	    a number of ccd devices, as the ccd device driver is now
 -	    cloning -- new device instances will automatically be
 +	    self-cloning -- new device instances will automatically be
  	    created on demand.</para></note>
  
  	  <para><application>ccd</application> support can also be
 -	    loaded as a kernel loadable module in FreeBSD 4.0 or
 -	    later.</para>
 +	    loaded as a kernel loadable module in FreeBSD 3.0 or
 +	    later. See &man.ccd.4; for information on loading 
 +            <application>ccd</application> as a kernel loadable module.</para>
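 +
 +	  <para>For example, on a system built with module support, the
 +	    <application>ccd</application> driver can typically be loaded at
 +	    runtime with the following command:</para>
 +
 +	  <programlisting>kldload ccd</programlisting>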
  
 -	  <para>To set up <application>ccd</application>, first I need
 -	    to disklabel the disks.  Here is how I disklabeled
 -	    them:</para>
 +	  <para>To set up <application>ccd</application>, you must first use
 +	    &man.disklabel.8; to label the disks:</para>
  
  	  <programlisting>disklabel -r -w ad1 auto
  disklabel -r -w ad2 auto
  disklabel -r -w ad3 auto</programlisting>
  
 -	  <para>This created a disklabel ad1c, ad2c and ad3c that
 +	  <para>This creates a disklabel for ad1c, ad2c and ad3c that
  	    spans the entire disk.</para>
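 +
 +	  <para>If you want to confirm that the labels were written, each
 +	    label can typically be displayed with, for example:</para>
 +
 +	  <programlisting>disklabel -r ad1</programlisting>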
  
 -	  <para>The next step is to change the disklabel type.  To do
 -	    that I had to edit the disklabel:</para>
 +	  <para>The next step is to change the disklabel type.
 +	    You can use <application>disklabel</application> to
 +            edit the disks:</para>
  
  	  <programlisting>disklabel -e ad1
  disklabel -e ad2
  disklabel -e ad3</programlisting>
  
 -	  <para>This opened up the current disklabel on each disk
 -	    respectively in whatever editor the <envar>EDITOR</envar>
 -	    environment variable was set to, in my case, &man.vi.1;.
 -	    Inside the editor I had a section like this:</para>
 +	  <para>This opens up the current disklabel on each disk
 +	    with the editor specified by the <envar>EDITOR</envar>
 +	    environment variable, typically &man.vi.1;.</para>
  
 +	    <para>An unmodified disklabel will look something like this:</para>
 +
  	  <programlisting>8 partitions:
  #        size   offset    fstype   [fsize bsize bps/cpg]
   c: 60074784        0    unused        0     0     0   # (Cyl.    0 - 59597)</programlisting>
  
 -	  <para>I needed to add a new "e" partition for &man.ccd.4; to
 -	    use. This usually can be copied of the "c" partition, but
 -	    the <option>fstype</option> must be <userinput>4.2BSD</userinput>.
 -	    Once I was done,
 -	    my disklabel should look like this:</para>
 +	  <para>Add a new <quote>e</quote> partition for &man.ccd.4; to
 +	    use. This can usually be copied from the <quote>c</quote> partition, but
 +	    the <option>fstype</option> <emphasis>must</emphasis> 
 +            be <userinput>4.2BSD</userinput>.  The disklabel should now look
 +            something like this:</para>
  
  	  <programlisting>8 partitions:
  #        size   offset    fstype   [fsize bsize bps/cpg]
 @@ -482,12 +482,7 @@
  	<sect4 id="ccd-buildingfs">
  	  <title>Building the File System</title>
  
 -	  <para>Now that I have all of the disks labeled, I needed to
 -	    build the <application>ccd</application>.  To do that, I
 -	    used a utility called &man.ccdconfig.8;.
 -	    <command>ccdconfig</command> takes several arguments, the
 -	    first argument being the device to configure, in this case,
 -	    <devicename>/dev/ccd0c</devicename>.  The device node for
 +            <para>The device node for
  	    <devicename>ccd0c</devicename> may not exist yet, so to
  	    create it, perform the following commands:</para>
  
 @@ -496,60 +491,89 @@
  
  	  <note><para>In FreeBSD 5.0, &man.devfs.5; will automatically
  	    manage device nodes in <filename>/dev</filename>, so use of
 -	    <command>MAKEDEV</command> is not necessary.</para></note>
 -
 -	  <para>The next argument <command>ccdconfig</command> expects
 -	    is the interleave for the file system.  The interleave
 -	    defines the size of a stripe in disk blocks, normally five
 -	    hundred and twelve bytes.  So, an interleave of thirty-two
 -	    would be sixteen thousand three hundred and eighty-four
 -	    bytes.</para>
 -
 -	  <para>After the interleave comes the flags for
 -	    <command>ccdconfig</command>.  If you want to enable drive
 -	    mirroring, you can specify a flag here.  In this
 -	    configuration, I am not mirroring the
 -	    <application>ccd</application>, so I left it as zero.</para>
 -
 -	  <para>The final arguments to <command>ccdconfig</command>
 -	    are the devices to place into the array.  Putting it all
 -	    together I get this command:</para>
 -
 -	  <programlisting>ccdconfig ccd0 32 0 /dev/ad1e /dev/ad2e /dev/ad3e</programlisting>
 +	    <command>MAKEDEV</command> may not be necessary.</para></note>
  
 -	  <para>This configures the <application>ccd</application>.
 -	    I can now &man.newfs.8; the file system.</para>
 +	  <para>Now that you have all of the disks labeled, you must
 +	    build the <application>ccd</application>.  To do that,
 +	    use &man.ccdconfig.8; with options similar to the following:
 +
 +	    <programlisting>ccdconfig ccd0 32 0 /dev/ad1e /dev/ad2e /dev/ad3e</programlisting>
 +
 +	    The use and meaning of each option are shown below:</para>
 +
 +	    <programlisting>ccd0 <co id="co-ccd-dev">
 +32  <co id="co-ccd-interleave">
 +0 <co id="co-ccd-flags">
 +/dev/ad1e <co id="co-ccd-devs">
 +/dev/ad2e
 +/dev/ad3e</programlisting>
 +
 +          <calloutlist>
 +
 +            <callout arearefs="co-ccd-dev">
 +	    <para>The first argument is the device to configure, in this case,
 +	    <devicename>/dev/ccd0c</devicename>. The <filename>/dev/</filename>
 +            portion is optional.</para>
 +            </callout>
 +
 +            <callout arearefs="co-ccd-interleave">
 +
 +	    <para>The interleave for the file system.  The interleave
 +	    defines the size of a stripe in disk blocks, each normally 512 bytes.
 +	    So, an interleave of 32 would be 16,384 bytes.</para>
 +            </callout>
 +
 +            <callout arearefs="co-ccd-flags">
 +	    <para>Flags for <command>ccdconfig</command>.  If you want to enable drive
 +	    mirroring, you can specify a flag here. This 
 +	    configuration does not provide mirroring for 
 +	    <application>ccd</application>, so it is set to 0 (zero).</para>
 +            </callout>
 +
 +            <callout arearefs="co-ccd-devs">
 +	    <para>The final arguments to <command>ccdconfig</command>
 +	    are the devices to place into the array.  Use the complete pathname 
 +	    for each device.</para>
 +            </callout>
 +          </calloutlist>
 +
 +
 +	  <para>After running <command>ccdconfig</command> the <application>ccd</application>
 +          is configured and a file system can be created. Refer to &man.newfs.8;
 +          for options, or simply run:</para>
  
  	  <programlisting>newfs /dev/ccd0c</programlisting>
  
 +
  	</sect4>
  
  	<sect4 id="ccd-auto">
  	  <title>Making it all Automatic</title>
  
 -	  <para>Finally, if I want to be able to mount the
 -	    <application>ccd</application>, I need to
 -	    configure it first.  I write out my current configuration to
 +	  <para>Generally, you will want to mount the
 +	    <application>ccd</application> upon each reboot. To do this, you must
 +	    configure it first.  Write out your current configuration to
  	    <filename>/etc/ccd.conf</filename> using the following command:</para>
  
  	  <programlisting>ccdconfig -g > /etc/ccd.conf</programlisting>
  
 -	  <para>When I reboot, the script <command>/etc/rc</command>
 -	    runs <command>ccdconfig -C</command> if /etc/ccd.conf
 +	  <para>During reboot, the script <command>/etc/rc</command>
 +	    runs <command>ccdconfig -C</command> if <filename>/etc/ccd.conf</filename>
  	    exists. This automatically configures the
  	    <application>ccd</application> so it can be mounted.</para>
  
 -	  <para>If you are booting into single user mode, before you can
 +	  <note><para>If you are booting into single user mode, before you can
  	    <command>mount</command> the <application>ccd</application>, you
  	    need to issue the following command to configure the
  	    array:</para>
  
  	  <programlisting>ccdconfig -C</programlisting>
 +          </note>
  
 -	  <para>Then, we need an entry for the
 -	    <application>ccd</application> in
 +	  <para>To automatically mount the <application>ccd</application>,
 +            place an entry for the <application>ccd</application> in
  	    <filename>/etc/fstab</filename> so it will be mounted at
 -	    boot time.</para>
 +	    boot time:</para>
  
 	  <programlisting>/dev/ccd0c              /media       ufs     rw      2       2</programlisting>
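 +
 +	  <para>Assuming the mount point (<filename>/media</filename> in this
 +	    example) already exists, the new file system can also be mounted
 +	    immediately with:</para>
 +
 +	  <programlisting>mount /media</programlisting>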
  	</sect4>
 @@ -566,7 +590,7 @@
  	  storage.  &man.vinum.8; implements the RAID-0, RAID-1 and
  	  RAID-5 models, both individually and in combination.</para>
  
 -	<para>See the <xref linkend="vinum-vinum"> for more
 +	<para>See <xref linkend="vinum-vinum"> for more
  	  information about &man.vinum.8;.</para>
        </sect3>
      </sect2>
 @@ -578,16 +602,19 @@
  	<primary>RAID</primary>
  	<secondary>Hardware</secondary>
        </indexterm>
 +
       <para>FreeBSD also supports a variety of hardware <acronym>RAID</acronym>
 -        controllers.  In which case the actual <acronym>RAID</acronym> system
 -	is built and controlled by the card itself.  Using an on-card
 -	<acronym>BIOS</acronym>, the card will control most of the disk operations
 -	itself.  The following is a brief setup using a Promise <acronym>IDE RAID</acronym>
 -	controller.  When this card is installed and the system started up, it will
 -	display a prompt requesting information.  Follow the on screen instructions
 -	to enter the cards setup screen.  From here a user should have the ability to
 -	combine all the attached drives.  When doing this, the disk(s) will look like
 -	a single drive to FreeBSD.  Other <acronym>RAID</acronym> levels can be setup
 +        controllers.  These devices control a <acronym>RAID</acronym> subsystem
 +        without the need for FreeBSD-specific <acronym>RAID</acronym> software
 +        that manages the <acronym>RAID</acronym> array.</para>
 +
 +      <para>Using an on-card <acronym>BIOS</acronym>, the card controls most of the disk operations
 +	itself.  The following is a brief setup description using a Promise <acronym>IDE RAID</acronym>
 +	controller.  When this card is installed and the system is started up, it 
 +	displays a prompt requesting information.  Follow the instructions
 +	to enter the card's setup screen.  From here, you have the ability to
 +	combine all the attached drives.  After doing so, the disk(s) will look like
 +	a single drive to FreeBSD.  Other <acronym>RAID</acronym> levels can be set up
  	accordingly.
        </para>
      </sect2>
 @@ -608,7 +635,7 @@
 ad6: hard error reading fsbn 1116119 of 0-7 (ad6 bn 1116119; cn 1107 tn 4 sn 11) status=59 error=40
  ar0: WARNING - mirror lost</programlisting>
  
 -      <para>Using &man.atacontrol.8;, check to see how things look:</para>
 +      <para>Using &man.atacontrol.8;, check for further information:</para>
  
        <screen>&prompt.root; <userinput>atacontrol list</userinput>
  ATA channel 0:
 @@ -656,8 +683,9 @@
  	</step>
  
  	<step>
 -	  <para>The rebuild command hangs until complete, its possible to open another
 -	    terminal and check on the progress by issuing the following command:</para>
 +	  <para>The rebuild command hangs until complete.  However, it is possible to open another
 +	  terminal (using <keycombo action="simul"><keycap>Alt</keycap> <keycap>F<replaceable>n</replaceable></keycap></keycombo>)
 +          and check on the progress by issuing the following command:</para>
  
  	  <screen>&prompt.root; <userinput>dmesg | tail -10</userinput>
  [output removed]
 


