svn commit: r44664 - head/en_US.ISO8859-1/books/handbook/geom
Dru Lavigne
dru at FreeBSD.org
Fri Apr 25 16:10:59 UTC 2014
Author: dru
Date: Fri Apr 25 16:10:58 2014
New Revision: 44664
URL: http://svnweb.freebsd.org/changeset/doc/44664
Log:
Editorial review of RAID3 chapter.
Sponsored by: iXsystems
Modified:
head/en_US.ISO8859-1/books/handbook/geom/chapter.xml
Modified: head/en_US.ISO8859-1/books/handbook/geom/chapter.xml
==============================================================================
--- head/en_US.ISO8859-1/books/handbook/geom/chapter.xml Fri Apr 25 15:26:06 2014 (r44663)
+++ head/en_US.ISO8859-1/books/handbook/geom/chapter.xml Fri Apr 25 16:10:58 2014 (r44664)
@@ -895,8 +895,8 @@ mountroot></screen>
In a <acronym>RAID</acronym>3 system, data is split up into a
number of bytes that are written across all the drives in the
array except for one disk which acts as a dedicated parity disk.
- This means that reading 1024KB from a
- <acronym>RAID</acronym>3 implementation will access all disks in
+ This means that disk reads from a
+ <acronym>RAID</acronym>3 implementation access all disks in
the array. Performance can be enhanced by using multiple disk
controllers. The <acronym>RAID</acronym>3 array provides a
fault tolerance of 1 drive, while providing a capacity of 1 -
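The byte-level striping described above can be sketched in a few lines of Python. This is purely illustrative (it is not how the `graid3` kernel module is implemented): each "column" of bytes is spread across the data disks, the dedicated parity disk stores their XOR, and any single lost disk can be rebuilt by XOR-ing the survivors. It also shows why every read touches all data disks at once.

```python
# Illustrative sketch only -- NOT the FreeBSD graid3 implementation.
# RAID3 splits each stripe byte-by-byte across the data disks and keeps
# the XOR parity of every byte column on one dedicated parity disk.
from functools import reduce

def raid3_write(data: bytes, n_disks: int) -> list[bytearray]:
    """Distribute data across n_disks - 1 data disks; the last disk
    holds the XOR parity of each byte column."""
    assert n_disks >= 3
    n_data = n_disks - 1
    disks = [bytearray() for _ in range(n_disks)]
    # pad so the data splits evenly into byte columns
    if len(data) % n_data:
        data = data + b"\x00" * (n_data - len(data) % n_data)
    for i in range(0, len(data), n_data):
        column = data[i:i + n_data]
        for d, b in enumerate(column):
            disks[d].append(b)          # one byte per data disk
        disks[-1].append(reduce(lambda a, b: a ^ b, column))  # parity byte
    return disks

def rebuild_disk(disks: list[bytearray], failed: int) -> bytearray:
    """Reconstruct one lost disk by XOR-ing all surviving disks."""
    survivors = [d for i, d in enumerate(disks) if i != failed]
    return bytearray(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
```

Because one of the n disks is pure parity, the usable capacity is 1 - 1/n of the raw total, matching the figure given above.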
@@ -907,10 +907,19 @@ mountroot></screen>
<para>At least 3 physical hard drives are required to build a
<acronym>RAID</acronym>3 array. Each disk must be of the same
- size, since I/O requests are interleaved to read or write to
+ size, since <acronym>I/O</acronym> requests are interleaved to read or write to
multiple disks in parallel. Also, due to the nature of
<acronym>RAID</acronym>3, the number of drives must be
equal to 3, 5, 9, 17, and so on, or 2^n + 1.</para>
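The 2^n + 1 drive-count rule above reduces to a simple check: one disk is dedicated parity, and the remaining data disks must number a power of two. A small sketch (illustrative only, not part of any FreeBSD tool):

```python
# Illustrative check of the RAID3 drive-count rule: arrays need
# 2**n + 1 disks (3, 5, 9, 17, ...) -- a power-of-two number of
# data disks plus one dedicated parity disk.
def valid_raid3_drive_count(count: int) -> bool:
    data_disks = count - 1  # one disk is dedicated parity
    # data disk count must be a power of two, and at least 2
    return data_disks >= 2 and (data_disks & (data_disks - 1)) == 0
```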
+
+ <para>This section demonstrates how to create a software
+ <acronym>RAID</acronym>3 on a &os; system.</para>
+
+ <note>
+ <para>While it is theoretically possible to boot from a
+ <acronym>RAID</acronym>3 array on &os;, that configuration
+ is uncommon and is not advised.</para>
+ </note>
<sect2>
<title>Creating a Dedicated <acronym>RAID</acronym>3
@@ -922,30 +931,24 @@ mountroot></screen>
<acronym>RAID</acronym>3 array on &os; requires the following
steps.</para>
- <note>
- <para>While it is theoretically possible to boot from a
- <acronym>RAID</acronym>3 array on &os;, that configuration
- is uncommon and is not advised.</para>
- </note>
-
<procedure>
<step>
<para>First, load the <filename>geom_raid3.ko</filename>
- kernel module by issuing the following command:</para>
+ kernel module by issuing one of the following commands:</para>
<screen>&prompt.root; <userinput>graid3 load</userinput></screen>
- <para>Alternatively, it is possible to manually load the
- <filename>geom_raid3.ko</filename> module:</para>
+ <para>or:</para>
- <screen>&prompt.root; <userinput>kldload geom_raid3.ko</userinput></screen>
+ <screen>&prompt.root; <userinput>kldload geom_raid3</userinput></screen>
</step>
<step>
- <para>Create or ensure that a suitable mount point
- exists:</para>
+ <para>Ensure that a suitable mount point
+ exists. This command creates a new directory to use as
+ the mount point:</para>
- <screen>&prompt.root; <userinput>mkdir <replaceable>/multimedia/</replaceable></userinput></screen>
+ <screen>&prompt.root; <userinput>mkdir <replaceable>/multimedia</replaceable></userinput></screen>
</step>
<step>
@@ -971,7 +974,7 @@ Done.</screen>
<step>
<para>Partition the newly created
- <filename>gr0</filename> device and put a UFS file
+ <filename>gr0</filename> device and put a <acronym>UFS</acronym> file
system on it:</para>
<screen>&prompt.root; <userinput>gpart create -s GPT /dev/raid3/gr0</userinput>
@@ -989,7 +992,7 @@ Done.</screen>
</step>
</procedure>
- <para>Additional configuration is needed to retain the above
+ <para>Additional configuration is needed to retain this
setup across system reboots.</para>
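A hedged sketch of what that persistent configuration typically looks like, assuming the stock loader.conf(5) and fstab(5) mechanisms and the device and mount point used earlier in this section (verify the exact entries against the procedure that follows):

```
# /boot/loader.conf -- load the RAID3 module at boot
geom_raid3_load="YES"

# /etc/fstab -- mount the array automatically at boot
# Device              Mountpoint    FStype  Options  Dump  Pass#
/dev/raid3/gr0p1      /multimedia   ufs     rw       2     2
```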
<procedure>