svn commit: r43238 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs

Warren Block wblock at FreeBSD.org
Sun Nov 24 23:53:51 UTC 2013


Author: wblock
Date: Sun Nov 24 23:53:50 2013
New Revision: 43238
URL: http://svnweb.freebsd.org/changeset/doc/43238

Log:
  Edit for clarity, spelling, and redundancy.

Modified:
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Sun Nov 24 23:25:14 2013	(r43237)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Sun Nov 24 23:53:50 2013	(r43238)
@@ -689,7 +689,7 @@ errors: No known data errors</screen>
     </sect2>
 
     <sect2 xml:id="zfs-zpool-history">
-      <title>Displaying recorded Pool history</title>
+      <title>Displaying Recorded Pool History</title>
 
       <para>ZFS records all the commands that were issued to
 	administer the pool.  These include the creation of datasets,
@@ -709,13 +709,13 @@ History for 'tank':
 2013-02-27.18:51:09 zfs set checksum=fletcher4 tank
 2013-02-27.18:51:18 zfs create tank/backup</screen>
 
-      <para>The command output shows in it's basic form a timestamp
-	followed by each <command>zpool</command> or
-	<command>zfs</command> command that was executed on the pool.
+      <para>The output shows each
+	<command>zpool</command> and
+	<command>zfs</command> command that was executed on the pool,
+	along with a timestamp.
 	Note that only commands that altered the pool in some way are
 	being recorded.  Commands like <command>zfs list</command> are
 	not part of the history.  When there is no pool name provided
-	for <command>zpool history</command> then the history of all
+	for <command>zpool history</command>, the history of all
 	pools will be displayed.</para>
 
       <para>The <command>zpool history</command> can show even more
@@ -758,12 +758,12 @@ History for 'tank':
 	on the other system can clearly be distinguished by the
 	hostname that is recorded for each command.</para>
 
-      <para>Both options to the <command>zpool history</command>
-	command can be combined together to give the most detailed
+      <para>Both options to <command>zpool history</command>
+	can be combined to give the most detailed
 	information possible for any given pool.  The pool history can
-	become a valuable information source when tracking down what
-	actions were performed or when it is needed to provide more
-	detailed output for debugging a ZFS pool.</para>
+	be a valuable information source when tracking down what
+	actions were performed or when more
+	detailed output is needed for debugging a ZFS pool.</para>
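+
+      <para>For example, to get the most detailed history of
+	<replaceable>tank</replaceable>, both options described above
+	can be given at once (a minimal sketch, assuming the
+	long-format <option>-l</option> and internal-event
+	<option>-i</option> switches):</para>
+
+      <screen>&prompt.root; <userinput>zpool history -il <replaceable>tank</replaceable></userinput></screen>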
     </sect2>
 
     <sect2 xml:id="zfs-zpool-iostat">
@@ -974,9 +974,9 @@ Filesystem           Size Used Avail Cap
 NAME PROPERTY           VALUE SOURCE
 tank custom:costcenter  1234  local</screen>
 
-      <para>To remove such a custom property again, use the
-	<command>zfs inherit</command> command with the
-	<option>-r</option> option.  If the custom property is not
+      <para>To remove such a custom property again, use
+	<command>zfs inherit</command> with
+	<option>-r</option>.  If the custom property is not
 	defined in any of the parent datasets, it will be removed
 	completely (although the changes are still recorded in the
 	pool's history).</para>
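+
+      <para>For example, the <literal>custom:costcenter</literal>
+	property set above can be removed from
+	<replaceable>tank</replaceable> and all of its children
+	with:</para>
+
+      <screen>&prompt.root; <userinput>zfs inherit -r custom:costcenter <replaceable>tank</replaceable></userinput></screen>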
@@ -1057,7 +1057,7 @@ tank    custom:costcenter  -            
 	that can be consumed by a particular dataset.  <link
 	linkend="zfs-term-refquota">Reference Quotas</link> work in
 	very much the same way, except they only count the space used
-	by the dataset it self, excluding snapshots and child
+	by the dataset itself, excluding snapshots and child
 	datasets.  Similarly <link
 	linkend="zfs-term-userquota">user</link> and <link
 	linkend="zfs-term-groupquota">group</link> quotas can be used
@@ -1258,7 +1258,7 @@ for> done</screen>
 NAME SIZE  ALLOC FREE CAP DEDUP HEALTH ALTROOT
 pool 2.84G 20.9M 2.82G 0% 3.00x ONLINE -</screen>
 
-      <para>The <literal>DEDUP</literal> column does now contain the
+      <para>The <literal>DEDUP</literal> column now contains the
 	value <literal>3.00x</literal>. This indicates that ZFS
 	detected the copies of the ports tree data and was able to
 	deduplicate it at a ratio of 1/3.  The space savings that this
@@ -1269,8 +1269,7 @@ pool 2.84G 20.9M 2.82G 0% 3.00x ONLINE -
 	there is not much redundant data on a ZFS pool.  To see how
 	much space could be saved by deduplication for a given set of
 	data that is already stored in a pool, ZFS can simulate the
-	effects that deduplication would have.  To do that, the
-	following command can be invoked on the pool.</para>
+	effects that deduplication would have:</para>
 
       <screen>&prompt.root; <userinput>zdb -S <replaceable>pool</replaceable></userinput>
 Simulated DDT histogram:
@@ -1293,9 +1292,9 @@ refcnt   blocks   LSIZE   PSIZE   DSIZE 
 
 dedup = 1.05, compress = 1.11, copies = 1.00, dedup * compress / copies = 1.16</screen>
 
-      <para>After <command>zdb -S</command> finished analyzing the
-	pool, it outputs a summary that shows the ratio that would
-	result in activating deduplication.  In this case,
+      <para>After <command>zdb -S</command> finishes analyzing the
+	pool, it shows the space reduction ratio that would be achieved by
+	activating deduplication.  In this case,
 	<literal>1.16</literal> is a very poor rate that is mostly
 	influenced by compression.  Activating deduplication on this
 	pool would not save any significant amount of space.  Keeping
@@ -1316,8 +1315,8 @@ dedup = 1.05, compress = 1.11, copies = 
 	<acronym>ZFS</acronym> dataset to a <link
 	linkend="jails">Jail</link>.  <command>zfs jail
 	<replaceable>jailid</replaceable></command> attaches a dataset
-	to the specified jail, and the <command>zfs unjail</command>
-	detaches it.  In order for the dataset to be administered from
+	to the specified jail, and <command>zfs unjail</command>
+	detaches it.  For the dataset to be administered from
 	within a jail, the <literal>jailed</literal> property must be
 	set.  Once a dataset is jailed it can no longer be mounted on
 	the host, because the jail administrator may have set
@@ -1328,46 +1327,45 @@ dedup = 1.05, compress = 1.11, copies = 
   <sect1 xml:id="zfs-zfs-allow">
     <title>Delegated Administration</title>
 
-    <para>ZFS features a comprehensive delegation system to assign
-      permissions to perform the various ZFS administration functions
-      to a regular (non-root) user.  For example, if each users' home
-      directory is a dataset, then each user could be delegated
+    <para>A comprehensive permission delegation system allows unprivileged
+      users to perform ZFS administration functions.
+      For example, if each user's home
+      directory is a dataset, users can be given
       permission to create and destroy snapshots of their home
-      directory.  A backup user could be assigned the permissions
-      required to make use of the ZFS replication features without
-      requiring root access, or isolate a usage collection script to
-      run as an unprivileged user with access to only the space
-      utilization data of all users.  It is even possible to delegate
-      the ability to delegate permissions.  ZFS allows to delegate
-      permissions over each subcommand and most ZFS properties.</para>
+      directories.  A backup user can be given permission
+      to use ZFS replication features.
+      A usage statistics script can be allowed to
+      run with access only to the space
+      utilization data for all users.  It is even possible to delegate
+      the ability to delegate permissions.  Permission delegation is
+      possible for each subcommand and most ZFS properties.</para>
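+
+    <para>For example, to let a user create snapshots of their own
+      home directory dataset (a sketch using the placeholder names
+      <replaceable>someuser</replaceable> and
+      <replaceable>mypool/home/someuser</replaceable>):</para>
+
+    <screen>&prompt.root; <userinput>zfs allow <replaceable>someuser</replaceable> snapshot <replaceable>mypool/home/someuser</replaceable></userinput></screen>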
 
     <sect2 xml:id="zfs-zfs-allow-create">
       <title>Delegating Dataset Creation</title>
 
-      <para>Using the <userinput>zfs allow
+      <para><userinput>zfs allow
 	<replaceable>someuser</replaceable> create
-	<replaceable>mydataset</replaceable></userinput> command will
-	give the indicated user the required permissions to create
+	<replaceable>mydataset</replaceable></userinput>
+	gives the specified user permission to create
 	child datasets under the selected parent dataset.  There is a
-	caveat: creating a new dataset involves mounting it, which
-	requires the <literal>vfs.usermount</literal> sysctl to be
-	enabled in order to allow non-root users to mount a
-	filesystem.  There is another restriction that non-root users
-	must own the directory they are mounting the filesystem to, in
-	order to prevent abuse.</para>
+	caveat: creating a new dataset involves mounting it.
+	That requires setting the <literal>vfs.usermount</literal> &man.sysctl.8; to <literal>1</literal>
+	to allow non-root users to mount a
+	file system.  There is another restriction aimed at preventing abuse: non-root users
+	must own the mountpoint where the file system is being mounted.</para>
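+
+      <para>A minimal example, enabling the &man.sysctl.8; setting
+	described above and then delegating the
+	<literal>create</literal> permission to
+	<replaceable>someuser</replaceable>:</para>
+
+      <screen>&prompt.root; <userinput>sysctl vfs.usermount=1</userinput>
+vfs.usermount: 0 -> 1
+&prompt.root; <userinput>zfs allow <replaceable>someuser</replaceable> create <replaceable>mydataset</replaceable></userinput></screen>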
     </sect2>
 
     <sect2 xml:id="zfs-zfs-allow-allow">
       <title>Delegating Permission Delegation</title>
 
-      <para>Using the <userinput>zfs allow
+      <para><userinput>zfs allow
 	<replaceable>someuser</replaceable> allow
-	<replaceable>mydataset</replaceable></userinput> command will
-	give the indicated user the ability to assign any permission
+	<replaceable>mydataset</replaceable></userinput>
+	gives the specified user the ability to assign any permission
 	they have on the target dataset (or its children) to other
 	users.  If a user has the <literal>snapshot</literal>
-	permission and the <literal>allow</literal> permission that
-	user can then grant the snapshot permission to some other
+	permission and the <literal>allow</literal> permission, that
+	user can then grant the <literal>snapshot</literal> permission to some other
 	users.</para>
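+
+      <para>For example, a user holding both the
+	<literal>snapshot</literal> and <literal>allow</literal>
+	permissions on <replaceable>mydataset</replaceable> could pass
+	the <literal>snapshot</literal> permission on to another user
+	(<replaceable>otheruser</replaceable> is a placeholder
+	name):</para>
+
+      <screen>&prompt.user; <userinput>zfs allow <replaceable>otheruser</replaceable> snapshot <replaceable>mydataset</replaceable></userinput></screen>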
     </sect2>
   </sect1>
@@ -1403,23 +1401,23 @@ dedup = 1.05, compress = 1.11, copies = 
       <title>ZFS on i386</title>
 
       <para>Some of the features provided by <acronym>ZFS</acronym>
-	are RAM-intensive, so some tuning may be required to provide
+	are RAM-intensive, and may require tuning for
 	maximum efficiency on systems with limited
 	<acronym>RAM</acronym>.</para>
 
       <sect3>
 	<title>Memory</title>
 
-	<para>At a bare minimum, the total system memory should be at
+	<para>As a bare minimum, the total system memory should be at
 	  least one gigabyte.  The amount of recommended
 	  <acronym>RAM</acronym> depends upon the size of the pool and
-	  the <acronym>ZFS</acronym> features which are used.  A
+	  which <acronym>ZFS</acronym> features are used.  A
 	  general rule of thumb is 1 GB of RAM for every
 	  1 TB of storage.  If the deduplication feature is used,
 	  a general rule of thumb is 5 GB of RAM per TB of
 	  storage to be deduplicated.  While some users successfully
 	  use <acronym>ZFS</acronym> with less <acronym>RAM</acronym>,
-	  it is possible that when the system is under heavy load, it
+	  systems under heavy load
 	  may panic due to memory exhaustion.  Further tuning may be
 	  required for systems with less than the recommended RAM
 	  requirements.</para>
@@ -1429,19 +1427,19 @@ dedup = 1.05, compress = 1.11, copies = 
 	<title>Kernel Configuration</title>
 
 	<para>Due to the <acronym>RAM</acronym> limitations of the
-	  &i386; platform, users using <acronym>ZFS</acronym> on the
-	  &i386; architecture should add the following option to a
+	  &i386; platform, <acronym>ZFS</acronym> users on the
+	  &i386; architecture should add this option to a
 	  custom kernel configuration file, rebuild the kernel, and
 	  reboot:</para>
 
 	<programlisting>options        KVA_PAGES=512</programlisting>
 
-	<para>This option expands the kernel address space, allowing
+	<para>This expands the kernel address space, allowing
 	  the <varname>vm.kvm_size</varname> tunable to be pushed
 	  beyond the currently imposed limit of 1 GB, or the
 	  limit of 2 GB for <acronym>PAE</acronym>.  To find the
 	  most suitable value for this option, divide the desired
-	  address space in megabytes by four (4).  In this example, it
+	  address space in megabytes by four.  In this example, it
 	  is <literal>512</literal> for 2 GB.</para>
       </sect3>
 
@@ -1450,8 +1448,8 @@ dedup = 1.05, compress = 1.11, copies = 
 
 	<para>The <filename>kmem</filename> address space can be
 	  increased on all &os; architectures.  On a test system with
-	  one gigabyte of physical memory, success was achieved with
-	  the following options added to
+	  1 GB of physical memory, success was achieved with
+	  these options added to
 	  <filename>/boot/loader.conf</filename>, and the system
 	  restarted:</para>
 
@@ -1638,12 +1636,12 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 		    levels of usable storage.  The types are named
 		    <acronym>RAID-Z1</acronym> through
 		    <acronym>RAID-Z3</acronym> based on the number of
-		    parity devinces in the array and the number of
-		    disks that the pool can operate without.</para>
+		    parity devices in the array and the number of
+		    disks that can fail while the pool remains operational.</para>
 
 		  <para>In a <acronym>RAID-Z1</acronym> configuration
-		    with 4 disks, each 1 TB, usable storage will
-		    be 3 TB and the pool will still be able to
+		    with 4 disks, each 1 TB, usable storage is
+		    3 TB and the pool will still be able to
 		    operate in degraded mode with one faulted disk.
 		    If an additional disk goes offline before the
 		    faulted disk is replaced and resilvered, all data
@@ -1663,7 +1661,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 		    disks each would create something similar to a
 		    <acronym>RAID-60</acronym> array.  A
 		    <acronym>RAID-Z</acronym> group's storage capacity
-		    is approximately the size of the smallest disk,
+		    is approximately the size of the smallest disk
 		    multiplied by the number of non-parity disks.
 		    Four 1 TB disks in <acronym>RAID-Z1</acronym>
 		    has an effective size of approximately 3 TB,
@@ -1749,17 +1747,17 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry
 	      xml:id="zfs-term-l2arc"><acronym>L2ARC</acronym></entry>
 
-	    <entry>The <acronym>L2ARC</acronym> is the second level
+	    <entry><acronym>L2ARC</acronym> is the second level
 	      of the <acronym>ZFS</acronym> caching system.  The
 	      primary <acronym>ARC</acronym> is stored in
-	      <acronym>RAM</acronym>, however since the amount of
+	      <acronym>RAM</acronym>.  Since the amount of
 	      available <acronym>RAM</acronym> is often limited,
-	      <acronym>ZFS</acronym> can also make use of
+	      <acronym>ZFS</acronym> can also use
 	      <link linkend="zfs-term-vdev-cache">cache</link>
 	      vdevs.  Solid State Disks (<acronym>SSD</acronym>s) are
 	      often used as these cache devices due to their higher
 	      speed and lower latency compared to traditional spinning
-	      disks.  An <acronym>L2ARC</acronym> is entirely
+	      disks.  <acronym>L2ARC</acronym> is entirely
 	      optional, but having one will significantly increase
 	      read speeds for files that are cached on the
 	      <acronym>SSD</acronym> instead of having to be read from
@@ -1789,7 +1787,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry
 	      xml:id="zfs-term-zil"><acronym>ZIL</acronym></entry>
 
-	    <entry>The <acronym>ZIL</acronym> accelerates synchronous
+	    <entry><acronym>ZIL</acronym> accelerates synchronous
 	      transactions by using storage devices (such as
 	      <acronym>SSD</acronym>s) that are faster than those used
 	      for the main storage pool.  When data is being written
@@ -1809,11 +1807,11 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry xml:id="zfs-term-cow">Copy-On-Write</entry>
 
 	    <entry>Unlike a traditional file system, when data is
-	      overwritten on <acronym>ZFS</acronym> the new data is
+	      overwritten on <acronym>ZFS</acronym>, the new data is
 	      written to a different block rather than overwriting the
-	      old data in place.  Only once this write is complete is
-	      the metadata then updated to point to the new location
-	      of the data.  This means that in the event of a shorn
+	      old data in place.  Only when this write is complete is
+	      the metadata updated to point to the new location.
+	      In the event of a shorn
 	      write (a system crash or power loss in the middle of
 	      writing a file), the entire original contents of the
 	      file are still available and the incomplete write is
@@ -1825,23 +1823,23 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	  <row>
 	    <entry xml:id="zfs-term-dataset">Dataset</entry>
 
-	    <entry>Dataset is the generic term for a
+	    <entry><emphasis>Dataset</emphasis> is the generic term for a
 	      <acronym>ZFS</acronym> file system, volume, snapshot or
-	      clone.  Each dataset will have a unique name in the
+	      clone.  Each dataset has a unique name in the
 	      format: <literal>poolname/path@snapshot</literal>.  The
 	      root of the pool is technically a dataset as well.
 	      Child datasets are named hierarchically like
-	      directories; for example,
+	      directories.  For example,
 	      <literal>mypool/home</literal>, the home dataset, is a
 	      child of <literal>mypool</literal> and inherits
 	      properties from it.  This can be expanded further by
 	      creating <literal>mypool/home/user</literal>.  This
 	      grandchild dataset will inherit properties from the
-	      parent and grandparent.  It is also possible to set
-	      properties on a child to override the defaults inherited
+	      parent and grandparent.
+	      Properties on a child can be set to override the defaults inherited
 	      from the parents and grandparents.
-	      <acronym>ZFS</acronym> also allows administration of
-	      datasets and their children to be <link
+	      Administration of
+	      datasets and their children can be <link
 	        linkend="zfs-zfs-allow">delegated</link>.</entry>
 	  </row>
 
@@ -1852,7 +1850,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      as a file system.  Like most other file systems, a
 	      <acronym>ZFS</acronym> file system is mounted somewhere
 	      in the system's directory hierarchy and contains files
-	      and directories of its own with permissions, flags and
+	      and directories of its own with permissions, flags, and
 	      other metadata.</entry>
 	  </row>
 
@@ -1903,7 +1901,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      space.  Snapshots can also be marked with a
 	      <link linkend="zfs-zfs-snapshot">hold</link>.  Once a
 	      snapshot is held, any attempt to destroy it will return
-	      an EBUY error.  Each snapshot can have multiple holds,
+	      an <literal>EBUSY</literal> error.  Each snapshot can have multiple holds,
 	      each with a unique name.  The
 	      <link linkend="zfs-zfs-snapshot">release</link> command
 	      removes the hold so the snapshot can then be deleted.
@@ -1924,12 +1922,12 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      overwritten in the cloned file system or volume, the
 	      reference count on the previous block is decremented.
 	      The snapshot upon which a clone is based cannot be
-	      deleted because the clone is dependeant upon it (the
+	      deleted because the clone depends on it (the
 	      snapshot is the parent, and the clone is the child).
 	      Clones can be <literal>promoted</literal>, reversing
-	      this dependeancy, making the clone the parent and the
+	      this dependency, making the clone the parent and the
 	      previous parent the child.  This operation requires no
-	      additional space, however it will change the way the
+	      additional space, but it will change the way the
 	      used space is accounted.</entry>
 	  </row>
 
@@ -1937,9 +1935,9 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry xml:id="zfs-term-checksum">Checksum</entry>
 
 	    <entry>Every block that is allocated is also checksummed
-	      (the algorithm used is a per dataset property, see:
-	      <command>zfs set</command>).  <acronym>ZFS</acronym>
-	      transparently validates the checksum of each block as it
+	      (the algorithm used is a per-dataset property, see
+	      <command>zfs set</command>).  The checksum of each block
+	      is transparently validated as it
 	      is read, allowing <acronym>ZFS</acronym> to detect
 	      silent corruption.  If the data that is read does not
 	      match the expected checksum, <acronym>ZFS</acronym> will
@@ -1967,7 +1965,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      The fletcher algorithms are faster, but sha256 is a
 	      strong cryptographic hash and has a much lower chance of
 	      collisions at the cost of some performance.  Checksums
-	      can be disabled but it is inadvisable.</entry>
+	      can be disabled, but this is inadvisable.</entry>
 	  </row>
 
 	  <row>
@@ -1977,8 +1975,8 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      compression property, which defaults to off.  This
 	      property can be set to one of a number of compression
 	      algorithms, which will cause all new data that is
-	      written to this dataset to be compressed as it is
-	      written.  In addition to the reduction in disk usage,
+	      written to the dataset to be compressed.
+	      In addition to the reduction in disk usage,
 	      this can also increase read and write throughput, as
 	      only the smaller compressed version of the file needs to
 	      be read or written.
@@ -1992,9 +1990,9 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	  <row>
 	    <entry xml:id="zfs-term-deduplication">Deduplication</entry>
 
-	    <entry><acronym>ZFS</acronym> has the ability to detect
-	      duplicate blocks of data as they are written (thanks to
-	      the checksumming feature).  If deduplication is enabled,
+	    <entry>Checksums make it possible to detect
+	      duplicate blocks of data as they are written.
+	      If deduplication is enabled,
 	      instead of writing the block a second time, the
 	      reference count of the existing block will be increased,
 	      saving storage space.  To do this,
@@ -2011,23 +2009,23 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      matching checksum is assumed to mean that the data is
 	      identical.  If dedup is set to verify, then the data in
 	      the two blocks will be checked byte-for-byte to ensure
-	      it is actually identical and if it is not, the hash
-	      collision will be noted by <acronym>ZFS</acronym> and
-	      the two blocks will be stored separately.  Due to the
-	      nature of the <acronym>DDT</acronym>, having to store
+	      it is actually identical.  If the data is not identical, the hash
+	      collision will be noted and
+	      the two blocks will be stored separately.  Because
+	      the <acronym>DDT</acronym> must store
 	      the hash of each unique block, it consumes a very large
 	      amount of memory (a general rule of thumb is 5-6 GB
 	      of <acronym>RAM</acronym> per 1 TB of deduplicated data).  In
 	      situations where it is not practical to have enough
 	      <acronym>RAM</acronym> to keep the entire
 	      <acronym>DDT</acronym> in memory, performance will
-	      suffer greatly as the <acronym>DDT</acronym> will need
-	      to be read from disk before each new block is written.
-	      Deduplication can make use of the
+	      suffer greatly as the <acronym>DDT</acronym> must
+	      be read from disk before each new block is written.
+	      Deduplication can use
 	      <acronym>L2ARC</acronym> to store the
 	      <acronym>DDT</acronym>, providing a middle ground
 	      between fast system memory and slower disks.  Consider
-	      using <acronym>ZFS</acronym> compression instead, which
+	      using compression instead, which
 	      often provides nearly as much space savings without the
 	      additional memory requirement.</entry>
 	  </row>
@@ -2035,17 +2033,17 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	  <row>
 	    <entry xml:id="zfs-term-scrub">Scrub</entry>
 
-	    <entry>In place of a consistency check like &man.fsck.8;,
-	      <acronym>ZFS</acronym> has the <literal>scrub</literal>
-	      command, which reads all data blocks stored on the pool
-	      and verifies their checksums them against the known good
+	    <entry>Instead of a consistency check like &man.fsck.8;,
+	      <acronym>ZFS</acronym> has <command>scrub</command>,
+	      which reads all data blocks stored on the pool
+	      and verifies their checksums against the known good
 	      checksums stored in the metadata.  This periodic check
 	      of all the data stored on the pool ensures the recovery
 	      of any corrupted blocks before they are needed.  A scrub
 	      is not required after an unclean shutdown, but it is
 	      recommended that you run a scrub at least once each
-	      quarter.  <acronym>ZFS</acronym> compares the checksum
-	      for each block as it is read in the normal course of
+	      quarter.  The checksum
+	      of each block is tested as it is read in normal
 	      use, but a scrub operation makes sure even infrequently
 	      used blocks are checked for silent corruption.</entry>
 	  </row>
@@ -2054,7 +2052,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry xml:id="zfs-term-quota">Dataset Quota</entry>
 
 	    <entry><acronym>ZFS</acronym> provides very fast and
-	      accurate dataset, user and group space accounting in
+	      accurate dataset, user, and group space accounting in
 	      addition to quotas and space reservations.  This gives
 	      the administrator fine-grained control over how space is
 	      allocated and allows critical file systems to reserve
@@ -2087,8 +2085,8 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      Quota</entry>
 
 	    <entry>A reference quota limits the amount of space a
-	      dataset can consume by enforcing a hard limit on the
-	      space used.  However, this hard limit includes only
+	      dataset can consume by enforcing a hard limit.
+	      However, this hard limit includes only
 	      space that the dataset references and does not include
 	      space used by descendants, such as file systems or
 	      snapshots.</entry>
@@ -2115,10 +2113,10 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      Reservation</entry>
 
 	    <entry>The <literal>reservation</literal> property makes
-	      it possible to guaranteed a minimum amount of space for
-	      the use of a specific dataset and its descendants.  This
+	      it possible to guarantee a minimum amount of space for
+	      a specific dataset and its descendants.  This
 	      means that if a 10 GB reservation is set on
-	      <filename>storage/home/bob</filename>, if another
+	      <filename>storage/home/bob</filename>, and another
 	      dataset tries to use all of the free space, at least
 	      10 GB of space is reserved for this dataset.  If a
 	      snapshot is taken of
@@ -2127,7 +2125,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      reservation.  The <link
 		linkend="zfs-term-refreservation">refreservation</link>
 	      property works in a similar way, except it
-	      <emphasis>excludes</emphasis> descendants, such as
+	      <emphasis>excludes</emphasis> descendants like
 	      snapshots.
 
 	      <para>Reservations of any sort are useful in many
@@ -2143,11 +2141,11 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      Reservation</entry>
 
 	    <entry>The <literal>refreservation</literal> property
-	      makes it possible to guaranteed a minimum amount of
+	      makes it possible to guarantee a minimum amount of
 	      space for the use of a specific dataset
 	      <emphasis>excluding</emphasis> its descendants.  This
 	      means that if a 10 GB reservation is set on
-	      <filename>storage/home/bob</filename>, if another
+	      <filename>storage/home/bob</filename>, and another
 	      dataset tries to use all of the free space, at least
 	      10 GB of space is reserved for this dataset.  In
 	      contrast to a regular <link
@@ -2168,10 +2166,10 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry xml:id="zfs-term-resilver">Resilver</entry>
 
 	    <entry>When a disk fails and must be replaced, the new
-	      disk must be filled with the data that was lost.  This
-	      process of calculating and writing the missing data
-	      (using the parity information distributed across the
-	      remaining drives) to the new drive is called
+	      disk must be filled with the data that was lost.  The
+	      process of using the parity information distributed across the remaining drives
+	      to calculate and write the missing data to the new drive
+	      is called
 	      <emphasis>resilvering</emphasis>.</entry>
 	  </row>
 
@@ -2194,7 +2192,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      or vdev into a <link
 	      linkend="zfs-term-faulted">Faulted</link> state.  An
 	      administrator may choose to offline a disk in
-	      preperation for replacing it, or to make it easier to
+	      preparation for replacing it, or to make it easier to
 	      identify.</entry>
 	  </row>
 
@@ -2204,10 +2202,10 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	    <entry>A ZFS pool or vdev that is in the
 	      <literal>Degraded</literal> state has one or more disks
 	      that have been disconnected or have failed.  The pool is
-	      still usable however if additional devices fail the pool
+	      still usable, but if additional devices fail, the pool
 	      could become unrecoverable.  Reconnecting the missing
-	      device(s) or replacing the failed disks will return the
-	      pool to a <link
+	      devices or replacing the failed disks will return the
+	      pool to an <link
 	      linkend="zfs-term-online">Online</link> state after
 	      the reconnected or new device has completed the <link
 	      linkend="zfs-term-resilver">Resilver</link>
@@ -2228,7 +2226,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
 	      linkend="zfs-term-online">Online</link> state.  If
 	      there is insufficient redundancy to compensate for the
 	      number of failed disks, then the contents of the pool
-	      are lost and will need to be restored from
+	      are lost and must be restored from
 	      backups.</entry>
 	  </row>
 	</tbody>

