svn commit: r42547 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs

Warren Block wblock at FreeBSD.org
Thu Aug 15 02:01:37 UTC 2013


Author: wblock
Date: Thu Aug 15 02:01:36 2013
New Revision: 42547
URL: http://svnweb.freebsd.org/changeset/doc/42547

Log:
  Fix numerous punctuation, spelling, and phrasing problems, stuff the
  chapter full of acronym tags.

Modified:
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Thu Aug 15 01:21:23 2013	(r42546)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Thu Aug 15 02:01:36 2013	(r42547)
@@ -15,7 +15,7 @@
     </authorgroup>
   </chapterinfo>
 
-  <title>The Z File System (ZFS)</title>
+  <title>The Z File System (<acronym>ZFS</acronym>)</title>
 
   <para>The Z file system, originally developed by &sun;,
    is designed to future-proof the file system by removing many of
@@ -34,10 +34,10 @@
     of the limitations of hardware <acronym>RAID</acronym>.</para>
 
   <sect1 id="zfs-differences">
-    <title>What Makes ZFS Different</title>
+    <title>What Makes <acronym>ZFS</acronym> Different</title>
 
-    <para>ZFS is significantly different from any previous file system
-      owing to the fact that it is more than just a file system.  ZFS
+    <para><acronym>ZFS</acronym> is significantly different from any previous file system
+      owing to the fact that it is more than just a file system.  <acronym>ZFS</acronym>
       combines the traditionally separate roles of volume manager and
       file system, which provides unique advantages because the file
       system is now aware of the underlying structure of the disks.
@@ -48,17 +48,17 @@
       around by presenting the operating system with a single logical
       disk made up of the space provided by a number of disks, on top
       of which the operating system placed its file system.  Even in
-      the case of software RAID solutions like
-      <acronym>GEOM</acronym>, the UFS file system living on top of
+      the case of software <acronym>RAID</acronym> solutions like
+      <acronym>GEOM</acronym>, the <acronym>UFS</acronym> file system living on top of
       the <acronym>RAID</acronym> transform believed that it was
-      dealing with a single device.  ZFS's combination of the volume
+      dealing with a single device.  <acronym>ZFS</acronym>'s combination of the volume
       manager and the file system solves this and allows the creation
       of many file systems all sharing a pool of available storage.
-      One of the biggest advantages to ZFS's awareness of the physical
-      layout of the disks is that ZFS can grow the existing file
+      One of the biggest advantages to <acronym>ZFS</acronym>'s awareness of the physical
+      layout of the disks is that <acronym>ZFS</acronym> can grow the existing file
       systems automatically when additional disks are added to the
       pool.  This new space is then made available to all of the file
-      systems.  ZFS also has a number of different properties that can
+      systems.  <acronym>ZFS</acronym> also has a number of different properties that can
 	be applied to each file system, giving many advantages to
 	creating a number of different file systems and datasets rather
 	than a single monolithic file system.</para>
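
A rough sketch of the pooled model described above, using hypothetical disk, pool, and dataset names (da0 through da3, tank); all commands are standard zpool(8)/zfs(8) usage:

    # zpool create tank mirror da0 da1
    # zfs create tank/home
    # zfs create tank/var
    # zfs set compression=on tank/home
    # zpool add tank mirror da2 da3

After the zpool add, the new space is immediately available to both datasets.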
@@ -69,10 +69,13 @@
 
     <para>There is a start up mechanism that allows &os; to mount
       <acronym>ZFS</acronym> pools during system initialization.  To
-      set it, issue the following commands:</para>
+      enable it, add this line to <filename>/etc/rc.conf</filename>:</para>
 
-    <screen>&prompt.root; <userinput>echo 'zfs_enable="YES"' >> /etc/rc.conf</userinput>
-&prompt.root; <userinput>service zfs start</userinput></screen>
+    <programlisting>zfs_enable="YES"</programlisting>
+
+    <para>Then start the service:</para>
+
+    <screen>&prompt.root; <userinput>service zfs start</userinput></screen>
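
One way to confirm afterwards that the kernel module is loaded and any pools are visible, assuming the standard zfs.ko module name:

    # kldstat -m zfs
    # zpool list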
 
     <para>The examples in this section assume three
       <acronym>SCSI</acronym> disks with the device names
@@ -132,7 +135,7 @@ drwxr-xr-x  21 root  wheel  512 Aug 29 2
 
       <screen>&prompt.root; <userinput>zfs set compression=off example/compressed</userinput></screen>
 
-      <para>To unmount a file system, issue the following command and
+      <para>To unmount a file system, use <command>zfs umount</command> and
 	then verify by using <command>df</command>:</para>
 
       <screen>&prompt.root; <userinput>zfs umount example/compressed</userinput>
@@ -143,7 +146,7 @@ devfs               1       1        0  
 /dev/ad0s1d  54098308 1032864 48737580     2%    /usr
 example      17547008       0 17547008     0%    /example</screen>
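
The state of the dataset can also be inspected directly with zfs get; the property names used here (compression, mounted) are standard zfs(8) properties, and running zfs mount with no arguments lists the currently mounted ZFS file systems:

    # zfs get compression,mounted example/compressed
    # zfs mount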
 
-      <para>To re-mount the file system to make it accessible again,
+      <para>To re-mount the file system to make it accessible again, use <command>zfs mount</command>
 	and verify with <command>df</command>:</para>
 
       <screen>&prompt.root; <userinput>zfs mount example/compressed</userinput>
@@ -211,11 +214,11 @@ example/data        17547008       0 175
       <para>There is no way to prevent a disk from failing.  One
 	method of avoiding data loss due to a failed hard disk is to
 	implement <acronym>RAID</acronym>.  <acronym>ZFS</acronym>
-	supports this feature in its pool design.  RAID-Z pools
+	supports this feature in its pool design.  <acronym>RAID-Z</acronym> pools
 	require 3 or more disks but yield more usable space than
 	mirrored pools.</para>
 
-      <para>To create a <acronym>RAID</acronym>-Z pool, issue the
+      <para>To create a <acronym>RAID-Z</acronym> pool, issue the
 	following command and specify the disks to add to the
 	pool:</para>
 
@@ -226,7 +229,7 @@ example/data        17547008       0 175
 	  <acronym>RAID-Z</acronym> configuration is between three and
 	  nine.  For environments requiring a single pool consisting
 	  of 10 disks or more, consider breaking it up into smaller
-	  <acronym>RAID</acronym>-Z groups.  If only two disks are
+	  <acronym>RAID-Z</acronym> groups.  If only two disks are
 	  available and redundancy is a requirement, consider using a
 	  <acronym>ZFS</acronym> mirror.  Refer to &man.zpool.8; for
 	  more details.</para>
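
As a sketch of that recommendation, a ten-disk pool could be built from two five-disk RAID-Z vdevs in a single command (disk names are placeholders); ZFS then stripes data across the two groups:

    # zpool create storage raidz da0 da1 da2 da3 da4 raidz da5 da6 da7 da8 da9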
@@ -312,7 +315,7 @@ devfs                1       1        0 
 storage       26320512       0 26320512     0%    /storage
 storage/home  26320512       0 26320512     0%    /home</screen>
 
-      <para>This completes the <acronym>RAID</acronym>-Z
+      <para>This completes the <acronym>RAID-Z</acronym>
 	configuration.  To get status updates about the file systems
 	created during the nightly &man.periodic.8; runs, issue the
 	following command:</para>
@@ -325,8 +328,8 @@ storage/home  26320512       0 26320512 
 
       <para>Every software <acronym>RAID</acronym> has a method of
 	monitoring its <literal>state</literal>.  The status of
-	<acronym>RAID</acronym>-Z devices may be viewed with the
-	following command:</para>
+	<acronym>RAID-Z</acronym> devices may be viewed with this
+	command:</para>
 
       <screen>&prompt.root; <userinput>zpool status -x</userinput></screen>
 
@@ -724,19 +727,19 @@ errors: No known data errors</screen>
 
       <para>Some of the features provided by <acronym>ZFS</acronym>
 	are RAM-intensive, so some tuning may be required to provide
-	maximum efficiency on systems with limited RAM.</para>
+	maximum efficiency on systems with limited <acronym>RAM</acronym>.</para>
 
       <sect3>
 	<title>Memory</title>
 
 	<para>At a bare minimum, the total system memory should be at
-	  least one gigabyte.  The amount of recommended RAM depends
-	  upon the size of the pool and the ZFS features which are
-	  used.  A general rule of thumb is 1GB of RAM for every 1TB
+	  least one gigabyte.  The amount of recommended <acronym>RAM</acronym> depends
+	  upon the size of the pool and the <acronym>ZFS</acronym> features which are
+	  used.  A general rule of thumb is 1 GB of RAM for every 1 TB
 	  of storage.  If the deduplication feature is used, a general
-	  rule of thumb is 5GB of RAM per TB of storage to be
-	  deduplicated.  While some users successfully use ZFS with
-	  less RAM, it is possible that when the system is under heavy
+	  rule of thumb is 5 GB of RAM per TB of storage to be
+	  deduplicated.  While some users successfully use <acronym>ZFS</acronym> with
+	  less <acronym>RAM</acronym>, it is possible that when the system is under heavy
 	  load, it may panic due to memory exhaustion.  Further tuning
 	  may be required for systems with less than the recommended
 	  amount of <acronym>RAM</acronym>.</para>
@@ -745,8 +748,8 @@ errors: No known data errors</screen>
       <sect3>
 	<title>Kernel Configuration</title>
 
-	<para>Due to the RAM limitations of the &i386; platform, users
-	  using ZFS on the &i386; architecture should add the
+	<para>Due to the <acronym>RAM</acronym> limitations of the &i386; platform, users
+	  using <acronym>ZFS</acronym> on the &i386; architecture should add the
 	  following option to a custom kernel configuration file,
 	  rebuild the kernel, and reboot:</para>
 
@@ -777,7 +780,7 @@ vfs.zfs.arc_max="40M"
 vfs.zfs.vdev.cache.size="5M"</programlisting>
 
 	<para>For a more detailed list of recommendations for
-	  ZFS-related tuning, see <ulink
+	  <acronym>ZFS</acronym>-related tuning, see <ulink
 	    url="http://wiki.freebsd.org/ZFSTuningGuide"></ulink>.</para>
       </sect3>
     </sect2>
@@ -826,22 +829,22 @@ vfs.zfs.vdev.cache.size="5M"</programlis
   </sect1>
 
   <sect1 id="zfs-term">
-    <title>ZFS Features and Terminology</title>
+    <title><acronym>ZFS</acronym> Features and Terminology</title>
 
-    <para>ZFS is a fundamentally different file system because it
-      is more than just a file system.  ZFS combines the roles of
+    <para><acronym>ZFS</acronym> is a fundamentally different file system because it
+      is more than just a file system.  <acronym>ZFS</acronym> combines the roles of
       file system and volume manager, enabling additional storage
       devices to be added to a live system and having the new space
       available on all of the existing file systems in that pool
       immediately.  By combining the traditionally separate roles,
-      ZFS is able to overcome previous limitations that prevented
-      RAID groups being able to grow.  Each top level device in a
-      zpool is called a vdev, which can be a simple disk or a RAID
-      transformation such as a mirror or RAID-Z array.  ZFS file
+      <acronym>ZFS</acronym> is able to overcome previous limitations that prevented
+      <acronym>RAID</acronym> groups being able to grow.  Each top level device in a
+      zpool is called a vdev, which can be a simple disk or a <acronym>RAID</acronym>
+      transformation such as a mirror or <acronym>RAID-Z</acronym> array.  <acronym>ZFS</acronym> file
       systems (called datasets) each have access to the combined
-      free space of the entire pool.  As blocks are allocated the
-      free space in the pool available to of each file system is
-      decreased.  This approach avoids the common pitfall with
+      free space of the entire pool.  As blocks are allocated from
+      the pool, the space available to each file system
+      decreases.  This approach avoids the common pitfall with
       extensive partitioning where free space becomes fragmented
       across the partitions.</para>
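
The shared free space is easy to observe with zfs list: every dataset in a pool reports the same AVAIL figure, which shrinks for all of them as any one dataset allocates blocks (the pool name here is illustrative):

    # zfs list -r -o name,used,avail tank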
 
@@ -852,7 +855,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry id="zfs-term-zpool">zpool</entry>
 
 	    <entry>A storage pool is the most basic building block of
-	      ZFS.  A pool is made up of one or more vdevs, the
+	      <acronym>ZFS</acronym>.  A pool is made up of one or more vdevs, the
 	      underlying devices that store the data.  A pool is then
 	      used to create one or more file systems (datasets) or
 	      block devices (volumes).  These datasets and volumes
@@ -860,14 +863,14 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      uniquely identified by a name and a
 	      <acronym>GUID</acronym>.  The zpool also controls the
 	      version number and therefore the features available for
-	      use with ZFS.
+	      use with <acronym>ZFS</acronym>.
 
 	      <note>
-		<para>&os; 9.0 and 9.1 include support for ZFS version
-		  28.  Future versions use ZFS version 5000 with
+		<para>&os; 9.0 and 9.1 include support for <acronym>ZFS</acronym> version
+		  28.  Future versions use <acronym>ZFS</acronym> version 5000 with
 		  feature flags.  This allows greater
 		  cross-compatibility with other implementations of
-		  ZFS.</para>
+		  <acronym>ZFS</acronym>.</para>
 	      </note></entry>
 	  </row>
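
The pool version and, on newer pools, the supported feature flags can be inspected with standard zpool(8) commands (the pool name is illustrative):

    # zpool get version tank
    # zpool upgrade -v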
 
@@ -876,8 +879,8 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 
 	    <entry>A zpool is made up of one or more vdevs, which
 	      themselves can be a single disk or a group of disks, in
-	      the case of a RAID transform.  When multiple vdevs are
-	      used, ZFS spreads data across the vdevs to increase
+	      the case of a <acronym>RAID</acronym> transform.  When multiple vdevs are
+	      used, <acronym>ZFS</acronym> spreads data across the vdevs to increase
 	      performance and maximize usable space.
 
 	      <itemizedlist>
@@ -899,7 +902,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 		<listitem>
 		  <para id="zfs-term-vdev-file">
 		    <emphasis>File</emphasis> - In addition to
-		    disks, ZFS pools can be backed by regular files,
+		    disks, <acronym>ZFS</acronym> pools can be backed by regular files.
 		    This is especially useful for testing and
 		    experimentation.  Use the full path to the file
 		    as the device path in the zpool create command.
@@ -930,21 +933,21 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 
 		<listitem>
 		  <para id="zfs-term-vdev-raidz">
-		    <emphasis><acronym>RAID</acronym>-Z</emphasis> -
-		    ZFS implements RAID-Z, a variation on standard
-		    RAID-5 that offers better distribution of parity
-		    and eliminates the "RAID-5 write hole" in which
+		    <emphasis><acronym>RAID-Z</acronym></emphasis> -
+		    <acronym>ZFS</acronym> implements <acronym>RAID-Z</acronym>, a variation on standard
+		    <acronym>RAID-5</acronym> that offers better distribution of parity
+		    and eliminates the "<acronym>RAID-5</acronym> write hole" in which
 		    the data and parity information become
-		    inconsistent after an unexpected restart.  ZFS
-		    supports 3 levels of RAID-Z which provide
+		    inconsistent after an unexpected restart.  <acronym>ZFS</acronym>
+		    supports 3 levels of <acronym>RAID-Z</acronym> which provide
 		    varying levels of redundancy in exchange for
 		    decreasing levels of usable storage.  The types
-		    are named RAID-Z1 through Z3 based on the number
+		    are named <acronym>RAID-Z1</acronym> through <acronym>RAID-Z3</acronym> based on the number
 		    of parity devices in the array and the number
 		    of disks that the pool can operate
 		    without.</para>
 
-		  <para>In a RAID-Z1 configuration with 4 disks,
+		  <para>In a <acronym>RAID-Z1</acronym> configuration with 4 disks,
 		    each 1 TB, usable storage will be 3 TB
 		    and the pool will still be able to operate in
 		    degraded mode with one faulted disk.  If an
@@ -952,8 +955,8 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 		    disk is replaced and resilvered, all data in the
 		    pool can be lost.</para>
 
-		  <para>In a RAID-Z3 configuration with 8 disks of
-		    1 TB, the volume would provide 5TB of
+		  <para>In a <acronym>RAID-Z3</acronym> configuration with 8 disks of
+		    1 TB, the volume would provide 5 TB of
 		    usable space and still be able to operate with
 		    three faulted disks.  Sun recommends no more
 		    than 9 disks in a single vdev.  If the
@@ -961,53 +964,53 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 		    to divide them into separate vdevs and the pool
 		    data will be striped across them.</para>
 
-		  <para>A configuration of 2 RAID-Z2 vdevs
+		  <para>A configuration of 2 <acronym>RAID-Z2</acronym> vdevs
 		    consisting of 8 disks each would create
-		    something similar to a RAID 60 array.  A RAID-Z
+		    something similar to a <acronym>RAID-60</acronym> array.  A <acronym>RAID-Z</acronym>
 		    group's storage capacity is approximately the
 		    size of the smallest disk, multiplied by the
-		    number of non-parity disks.  4x 1 TB disks
-		    in Z1 has an effective size of approximately
-		    3 TB, and a 8x 1 TB array in Z3 will
-		    yeild 5 TB of usable space.</para>
+		    number of non-parity disks.  Four 1 TB disks
+		    in <acronym>RAID-Z1</acronym> have an effective size of approximately
+		    3 TB, and an array of eight 1 TB disks in <acronym>RAID-Z3</acronym> will
+		    yield 5 TB of usable space.</para>
 		</listitem>
 
 		<listitem>
 		  <para id="zfs-term-vdev-spare">
-		    <emphasis>Spare</emphasis> - ZFS has a special
+		    <emphasis>Spare</emphasis> - <acronym>ZFS</acronym> has a special
 		    pseudo-vdev type for keeping track of available
 		    hot spares.  Note that installed hot spares are
 		    not deployed automatically; they must manually
 		    be configured to replace the failed device using
-		    the zfs replace command.</para>
+		    <command>zpool replace</command>.</para>
 		</listitem>
 
 		<listitem>
 		  <para id="zfs-term-vdev-log">
-		    <emphasis>Log</emphasis> - ZFS Log Devices, also
+		    <emphasis>Log</emphasis> - <acronym>ZFS</acronym> Log Devices, also
 		    known as the ZFS Intent Log (<acronym>ZIL</acronym>),
 		    move the intent log from the regular pool
-		    devices to a dedicated device.  The ZIL
+		    devices to a dedicated device.  The <acronym>ZIL</acronym>
 		    accelerates synchronous transactions by using
 		    storage devices (such as
 		    <acronym>SSD</acronym>s) that are faster
-		    compared to those used for the main pool.  When
+		    than those used for the main pool.  When
 		    data is being written and the application
 		    requests a guarantee that the data has been
 		    safely stored, the data is written to the faster
-		    ZIL storage, then later flushed out to the
+		    <acronym>ZIL</acronym> storage, then later flushed out to the
 		    regular disks, greatly reducing the latency of
 		    synchronous writes.  Log devices can be
-		    mirrored, but RAID-Z is not supported.  When
-		    specifying multiple log devices writes will be
-		    load balanced across all devices.</para>
+		    mirrored, but <acronym>RAID-Z</acronym> is not supported.  If
+		    multiple log devices are used, writes will be
+		    load balanced across them.</para>
 		</listitem>
 
 		<listitem>
 		  <para id="zfs-term-vdev-cache">
 		    <emphasis>Cache</emphasis> - Adding a cache vdev
 		    to a zpool will add the storage of the cache to
-		    the L2ARC.  Cache devices cannot be mirrored.
+		    the <acronym>L2ARC</acronym>.  Cache devices cannot be mirrored.
 		    Since a cache device only stores additional
 		    copies of existing data, there is no risk of
 		    data loss.</para>
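
A sketch of how these auxiliary vdev types are attached to an existing pool; the device names are placeholders (an SSD each for log and cache, plus a spare disk), and the final command shows how a spare is brought in by hand when a disk fails:

    # zpool add tank log ada1
    # zpool add tank cache ada2
    # zpool add tank spare da4
    # zpool replace tank da2 da4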
@@ -1019,7 +1022,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry id="zfs-term-arc">Adaptive Replacement
 	      Cache (<acronym>ARC</acronym>)</entry>
 
-	    <entry>ZFS uses an Adaptive Replacement Cache
+	    <entry><acronym>ZFS</acronym> uses an Adaptive Replacement Cache
 	      (<acronym>ARC</acronym>), rather than a more
 	      traditional Least Recently Used
 	      (<acronym>LRU</acronym>) cache.  An
@@ -1032,8 +1035,8 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      lists; the Most Recently Used (<acronym>MRU</acronym>)
 	      and Most Frequently Used (<acronym>MFU</acronym>)
 	      objects, plus a ghost list for each.  These ghost
-	      lists tracks recently evicted objects to provent them
-	      being added back to the cache.  This increases the
+	      lists track recently evicted objects to prevent them
+	      from being added back to the cache.  This increases the
 	      cache hit ratio by avoiding objects that have a
 	      history of only being used occasionally.  Another
 	      advantage of using both an <acronym>MRU</acronym> and
@@ -1041,14 +1044,14 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      filesystem would normally evict all data from an
 	      <acronym>MRU</acronym> or <acronym>LRU</acronym> cache
 	      in favor of this freshly accessed content.  In the
-	      case of <acronym>ZFS</acronym> since there is also an
+	      case of <acronym>ZFS</acronym>, since there is also an
 	      <acronym>MFU</acronym> that only tracks the most
 	      frequently used objects, the cache of the most
 	      commonly accessed blocks remains.</entry>
 	  </row>
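
On &os;, the ARC can be observed through sysctl: the kstat.zfs.misc.arcstats tree exposes its current size and hit/miss counters, and vfs.zfs.arc_max shows the configured ceiling:

    # sysctl kstat.zfs.misc.arcstats.size
    # sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
    # sysctl vfs.zfs.arc_max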
 
 	  <row>
-	    <entry id="zfs-term-l2arc">L2ARC</entry>
+	    <entry id="zfs-term-l2arc"><acronym>L2ARC</acronym></entry>
 
 	    <entry>The <acronym>L2ARC</acronym> is the second level
 	      of the <acronym>ZFS</acronym> caching system.  The
@@ -1060,11 +1063,11 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      vdevs.  Solid State Disks (<acronym>SSD</acronym>s) are
 	      often used as these cache devices due to their higher
 	      speed and lower latency compared to traditional spinning
-	      disks.  An L2ARC is entirely optional, but having one
+	      disks.  An <acronym>L2ARC</acronym> is entirely optional, but having one
 	      will significantly increase read speeds for files that
 	      are cached on the <acronym>SSD</acronym> instead of
 	      having to be read from the regular spinning disks.  The
-	      L2ARC can also speed up <link
+	      <acronym>L2ARC</acronym> can also speed up <link
 		linkend="zfs-term-deduplication">deduplication</link>
 	      since a <acronym>DDT</acronym> that does not fit in
 	      <acronym>RAM</acronym> but does fit in the
@@ -1089,35 +1092,35 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry id="zfs-term-cow">Copy-On-Write</entry>
 
 	    <entry>Unlike a traditional file system, when data is
-	      overwritten on ZFS the new data is written to a
+	      overwritten on <acronym>ZFS</acronym>, the new data is written to a
 	      different block rather than overwriting the old data in
 	      place.  Only once this write is complete is the metadata
 	      then updated to point to the new location of the data.
 	      This means that in the event of a shorn write (a system
-	      crash or power loss in the middle of writing a file) the
+	      crash or power loss in the middle of writing a file), the
 	      entire original contents of the file are still available
 	      and the incomplete write is discarded.  This also means
-	      that ZFS does not require a fsck after an unexpected
+	      that <acronym>ZFS</acronym> does not require a &man.fsck.8; after an unexpected
 	      shutdown.</entry>
 	  </row>
 
 	  <row>
 	    <entry id="zfs-term-dataset">Dataset</entry>
 
-	    <entry>Dataset is the generic term for a ZFS file system,
+	    <entry>Dataset is the generic term for a <acronym>ZFS</acronym> file system,
 	      volume, snapshot or clone.  Each dataset will have a
 	      unique name in the format:
 	      <literal>poolname/path@snapshot</literal>.  The root of
 	      the pool is technically a dataset as well.  Child
 	      datasets are named hierarchically like directories; for
-	      example <literal>mypool/home</literal>, the home dataset
-	      is a child of mypool and inherits properties from it.
-	      This can be expended further by creating
+	      example, <literal>mypool/home</literal>, the home dataset,
+	      is a child of <literal>mypool</literal> and inherits properties from it.
+	      This can be expanded further by creating
 	      <literal>mypool/home/user</literal>.  This grandchild
 	      dataset will inherit properties from the parent and
 	      grandparent.  It is also possible to set properties
 	      on a child to override the defaults inherited from the
-	      parents and grandparents.  ZFS also allows
+	      parents and grandparents.  <acronym>ZFS</acronym> also allows
 	      administration of datasets and their children to be
 	      delegated.</entry>
 	  </row>
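
A short sketch of property inheritance using the names from this entry; the SOURCE column of the final command shows that mypool/home/user inherits the setting from mypool/home:

    # zfs create mypool/home
    # zfs create mypool/home/user
    # zfs set compression=on mypool/home
    # zfs get -r compression mypool/home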
@@ -1125,12 +1128,12 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	  <row>
 	    <entry id="zfs-term-volum">Volume</entry>
 
-	    <entry>In additional to regular file system datasets, ZFS
+	    <entry>In addition to regular file system datasets, <acronym>ZFS</acronym>
 	      can also create volumes, which are block devices.
 	      Volumes have many of the same features, including
 	      copy-on-write, snapshots, clones and checksumming.
 	      Volumes can be useful for running other file system
-	      formats on top of ZFS, such as UFS or in the case of
+	      formats on top of <acronym>ZFS</acronym>, such as <acronym>UFS</acronym>, or for
 	      virtualization or exporting <acronym>iSCSI</acronym>
 	      extents.</entry>
 	  </row>
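
A minimal sketch of a volume used as a UFS-formatted block device; the volume name and size are arbitrary, and on &os; the device node appears under /dev/zvol:

    # zfs create -V 4G mypool/vol0
    # newfs /dev/zvol/mypool/vol0
    # mount /dev/zvol/mypool/vol0 /mnt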
@@ -1141,7 +1144,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry>The <link
 		linkend="zfs-term-cow">copy-on-write</link>
 
-	      design of ZFS allows for nearly instantaneous consistent
+	      design of <acronym>ZFS</acronym> allows for nearly instantaneous consistent
 	      snapshots with arbitrary names.  After taking a snapshot
 	      of a dataset (or a recursive snapshot of a parent
 	      dataset that will include all child datasets), new data
@@ -1202,15 +1205,15 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry id="zfs-term-checksum">Checksum</entry>
 
 	    <entry>Every block that is allocated is also checksummed
-	      (which algorithm is used is a per dataset property, see:
-	      zfs set).  ZFS transparently validates the checksum of
-	      each block as it is read, allowing ZFS to detect silent
+	      (the algorithm used is a per-dataset property; see
+	      <command>zfs set</command>).  <acronym>ZFS</acronym> transparently validates the checksum of
+	      each block as it is read, allowing <acronym>ZFS</acronym> to detect silent
 	      corruption.  If the data that is read does not match the
-	      expected checksum, ZFS will attempt to recover the data
-	      from any available redundancy (mirrors, RAID-Z).  You
-	      can trigger the validation of all checksums using the
-	      <link linkend="zfs-term-scrub">scrub</link>
-	      command.  The available checksum algorithms include:
+	      expected checksum, <acronym>ZFS</acronym> will attempt to recover the data
+	      from any available redundancy, like mirrors or
+	      <acronym>RAID-Z</acronym>.  Validation of all checksums can be
+	      triggered with the
+	      <link linkend="zfs-term-scrub"><command>scrub</command></link>
+	      command.  Available checksum algorithms include:
 
 	      <itemizedlist>
 		<listitem>
@@ -1235,7 +1238,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	  <row>
 	    <entry id="zfs-term-compression">Compression</entry>
 
-	    <entry>Each dataset in ZFS has a compression property,
+	    <entry>Each dataset in <acronym>ZFS</acronym> has a compression property,
 	      which defaults to off.  This property can be set to one
 	      of a number of compression algorithms, which will cause
 	      all new data that is written to this dataset to be
@@ -1245,7 +1248,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      of the file needs to be read or written.
 
 	      <note>
-		<para>LZ4 compression is only available after &os;
+		<para><acronym>LZ4</acronym> compression is only available after &os;
 		  9.2.</para>
 	      </note></entry>
 	  </row>
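
Compression is a per-dataset property, so it can be enabled selectively and its effect checked afterwards; the dataset name and algorithm here are only examples:

    # zfs set compression=gzip mypool/logs
    # zfs get compression,compressratio mypool/logs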
@@ -1253,12 +1256,12 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	  <row>
 	    <entry id="zfs-term-deduplication">Deduplication</entry>
 
-	    <entry>ZFS has the ability to detect duplicate blocks of
+	    <entry><acronym>ZFS</acronym> has the ability to detect duplicate blocks of
 	      data as they are written (thanks to the checksumming
 	      feature).  If deduplication is enabled, instead of
 	      writing the block a second time, the reference count of
 	      the existing block will be increased, saving storage
-	      space.  In order to do this, ZFS keeps a deduplication
+	      space.  To do this, <acronym>ZFS</acronym> keeps a deduplication
 	      table (<acronym>DDT</acronym>) in memory, containing the
 	      list of unique checksums, the location of that block and
 	      a reference count.  When new data is written, the
@@ -1266,25 +1269,25 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      match is found, the data is considered to be a
 	      duplicate.  When deduplication is enabled, the checksum
 	      algorithm is changed to <acronym>SHA256</acronym> to
-	      provide a secure cryptographic hash.  ZFS deduplication
+	      provide a secure cryptographic hash.  <acronym>ZFS</acronym> deduplication
 	      is tunable; if dedup is on, then a matching checksum is
 	      assumed to mean that the data is identical.  If dedup is
 	      set to verify, then the data in the two blocks will be
 	      checked byte-for-byte to ensure it is actually identical
 	      and if it is not, the hash collision will be noted by
-	      ZFS and the two blocks will be stored separately.  Due
+	      <acronym>ZFS</acronym> and the two blocks will be stored separately.  Due
 	      to the nature of the <acronym>DDT</acronym>, having to
 	      store the hash of each unique block, it consumes a very
 	      large amount of memory (a general rule of thumb is
 	      5-6 GB of <acronym>RAM</acronym> per 1 TB of deduplicated data).
 	      In situations where it is not practical to have enough
-	      <acronym>RAM</acronym> to keep the entire DDT in memory,
-	      performance will suffer greatly as the DDT will need to
+	      <acronym>RAM</acronym> to keep the entire <acronym>DDT</acronym> in memory,
+	      performance will suffer greatly as the <acronym>DDT</acronym> will need to
 	      be read from disk before each new block is written.
-	      Deduplication can make use of the L2ARC to store the
-	      DDT, providing a middle ground between fast system
-	      memory and slower disks.  It is advisable to consider
-	      using ZFS compression instead, which often provides
+	      Deduplication can make use of the <acronym>L2ARC</acronym> to store the
+	      <acronym>DDT</acronym>, providing a middle ground between fast system
+	      memory and slower disks.  Consider
+	      using <acronym>ZFS</acronym> compression instead, which often provides
 	      nearly as much space savings without the additional
 	      memory requirement.</entry>
 	  </row>
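
Deduplication is likewise enabled per dataset; the verify value requests the byte-for-byte comparison described above, and zpool list reports the achieved ratio in its DEDUP column (names are illustrative):

    # zfs set dedup=on mypool/data
    # zfs set dedup=verify mypool/data
    # zpool list mypool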
@@ -1292,7 +1295,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	  <row>
 	    <entry id="zfs-term-scrub">Scrub</entry>
 
-	    <entry>In place of a consistency check like fsck, ZFS has
+	    <entry>In place of a consistency check like &man.fsck.8;, <acronym>ZFS</acronym> has
 	      the <literal>scrub</literal> command, which reads all
 	      data blocks stored on the pool and verifies their
 	      checksums against the known good checksums stored
@@ -1300,7 +1303,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      stored on the pool ensures the recovery of any corrupted
 	      blocks before they are needed.  A scrub is not required
 	      after an unclean shutdown, but it is recommended that
-	      you run a scrub at least once each quarter.  ZFS
+	      you run a scrub at least once each quarter.  <acronym>ZFS</acronym>
 	      compares the checksum for each block as it is read in
 	      the normal course of use, but a scrub operation makes
 	      sure even infrequently used blocks are checked for
@@ -1310,14 +1313,14 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	  <row>
 	    <entry id="zfs-term-quota">Dataset Quota</entry>
 
-	    <entry>ZFS provides very fast and accurate dataset, user
-	      and group space accounting in addition to quotes and
+	    <entry><acronym>ZFS</acronym> provides very fast and accurate dataset, user
+	      and group space accounting in addition to quotas and
 	      space reservations.  This gives the administrator
 	      fine-grained control over how space is allocated and allows
 	      critical file systems to reserve space to ensure other
 	      file systems do not take all of the free space.
 
-	      <para>ZFS supports different types of quotas: the
+	      <para><acronym>ZFS</acronym> supports different types of quotas: the
 		dataset quota, the <link
 		  linkend="zfs-term-refquota">reference
 		  quota (<acronym>refquota</acronym>)</link>, the
@@ -1378,7 +1381,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      dataset tries to use all of the free space, at least
 	      10 GB of space is reserved for this dataset.  If a
 	      snapshot is taken of
-	      <filename>storage/home/bob</filename>, the space used by
+	      <filename class="directory">storage/home/bob</filename>, the space used by
 	      that snapshot is counted against the reservation.  The
 	      <link
 		linkend="zfs-term-refreservation">refreservation</link>
@@ -1428,7 +1431,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      process of calculating and writing the missing data
 	      (using the parity information distributed across the
 	      remaining drives) to the new drive is called
-	      Resilvering.</entry>
+	      <emphasis>resilvering</emphasis>.</entry>
 	  </row>
 	</tbody>
       </tgroup>

