svn commit: r42540 - in projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook: . bsdinstall filesystems zfs

Warren Block wblock at FreeBSD.org
Wed Aug 14 22:21:16 UTC 2013


Author: wblock
Date: Wed Aug 14 22:21:15 2013
New Revision: 42540
URL: http://svnweb.freebsd.org/changeset/doc/42540

Log:
  Split the ZFS content into a separate chapter.

Added:
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
     - copied, changed from r42538, projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml
Modified:
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/Makefile
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/book.xml
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/bsdinstall/chapter.xml
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/chapters.ent
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/Makefile
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/Makefile	Wed Aug 14 21:50:46 2013	(r42539)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/Makefile	Wed Aug 14 22:21:15 2013	(r42540)
@@ -278,6 +278,7 @@ SRCS+= serialcomms/chapter.xml
 SRCS+= users/chapter.xml
 SRCS+= virtualization/chapter.xml
 SRCS+= x11/chapter.xml
+SRCS+= zfs/chapter.xml
 
 # Entities
 SRCS+= chapters.ent

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/book.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/book.xml	Wed Aug 14 21:50:46 2013	(r42539)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/book.xml	Wed Aug 14 22:21:15 2013	(r42540)
@@ -235,6 +235,7 @@
     &chap.audit;
     &chap.disks;
     &chap.geom;
+    &chap.zfs;
     &chap.filesystems;
     &chap.virtualization;
     &chap.l10n;

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/bsdinstall/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/bsdinstall/chapter.xml	Wed Aug 14 21:50:46 2013	(r42539)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/bsdinstall/chapter.xml	Wed Aug 14 22:21:15 2013	(r42540)
@@ -1411,7 +1411,7 @@ Trying to mount root from cd9660:/dev/is
       <para>Another partition type worth noting is
 	<literal>freebsd-zfs</literal>, used for partitions that will
 	contain a &os; ZFS filesystem.  See
-	<xref linkend="filesystems-zfs"/>.  &man.gpart.8; shows more
+	<xref linkend="zfs"/>.  &man.gpart.8; shows more
 	of the available <acronym>GPT</acronym> partition
 	types.</para>
 

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/chapters.ent
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/chapters.ent	Wed Aug 14 21:50:46 2013	(r42539)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/chapters.ent	Wed Aug 14 22:21:15 2013	(r42540)
@@ -38,6 +38,7 @@
   <!ENTITY chap.audit		SYSTEM "audit/chapter.xml">
   <!ENTITY chap.disks		SYSTEM "disks/chapter.xml">
   <!ENTITY chap.geom		SYSTEM "geom/chapter.xml">
+  <!ENTITY chap.zfs		SYSTEM "zfs/chapter.xml">
   <!ENTITY chap.filesystems	SYSTEM "filesystems/chapter.xml">
   <!ENTITY chap.virtualization	SYSTEM "virtualization/chapter.xml">
   <!ENTITY chap.l10n		SYSTEM "l10n/chapter.xml">

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml	Wed Aug 14 21:50:46 2013	(r42539)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml	Wed Aug 14 22:21:15 2013	(r42540)
@@ -34,7 +34,7 @@
       <acronym>UFS</acronym> which has been modernized as
       <acronym>UFS2</acronym>.  Since &os; 7.0, the Z File
       System <acronym>ZFS</acronym> is also available as a native file
-      system.</para>
+      system.  See <xref linkend="zfs"/> for more information.</para>
 
     <para>In addition to its native file systems, &os; supports a
       multitude of other file systems so that data from other
@@ -96,1439 +96,6 @@
     </itemizedlist>
   </sect1>
 
-  <sect1 id="filesystems-zfs">
-    <title>The Z File System (ZFS)</title>
-
-    <para>The Z file system, originally developed by &sun;,
-      is designed to be future proof, removing many of the arbitrary
-      limits imposed on previous file systems.  ZFS allows continuous
-      growth of the pooled storage by adding additional devices.  ZFS
-      allows many file systems (in addition to block devices) to be
-      created out of a single shared pool of storage.  Space is
-      allocated as needed, so all remaining free
-      space is available to each file system in the pool.  It is also
-      designed for maximum data integrity, supporting data snapshots,
-      multiple copies, and cryptographic checksums.  It uses a
-      software data replication model, known as
-      <acronym>RAID</acronym>-Z. <acronym>RAID</acronym>-Z provides
-      redundancy similar to hardware <acronym>RAID</acronym>, but is
-      designed to prevent data write corruption and to overcome some
-      of the limitations of hardware <acronym>RAID</acronym>.</para>
-
-    <sect2 id="filesystems-zfs-term">
-      <title>ZFS Features and Terminology</title>
-
-      <para>ZFS is a fundamentally different file system because it
-	is more than just a file system.  ZFS combines the roles of
-	file system and volume manager, enabling additional storage
-	devices to be added to a live system and making the new space
-	available to all of the existing file systems in that pool
-	immediately.  By combining the traditionally separate roles,
-	ZFS is able to overcome previous limitations that prevented
-	RAID groups from growing.  Each top-level device in a
-	zpool is called a vdev, which can be a simple disk or a RAID
-	transformation such as a mirror or RAID-Z array.  ZFS file
-	systems, called datasets, each have access to the combined
-	free space of the entire pool.  As blocks are allocated, the
-	free space in the pool available to each file system is
-	decreased.  This approach avoids the common pitfall of
-	extensive partitioning, where free space becomes fragmented
-	across the partitions.</para>
-
-      <informaltable pgwide="1">
-	<tgroup cols="2">
-	  <tbody>
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-zpool">zpool</entry>
-
-	      <entry>A storage pool is the most basic building block
-		of ZFS.  A pool is made up of one or more vdevs, the
-		underlying devices that store the data.  A pool is
-		then used to create one or more file systems
-		(datasets) or block devices (volumes).  These datasets
-		and volumes share the pool of remaining free space.
-		Each pool is uniquely identified by a name and a
-		<acronym>GUID</acronym>.  The zpool also controls the
-		version number and therefore the features available
-		for use with ZFS.
-		<note><para>&os; 9.0 and 9.1 include
-		  support for ZFS version 28.  Future versions use ZFS
-		  version 5000 with feature flags.  This allows
-		  greater cross-compatibility with other
-		  implementations of ZFS.
-		</para></note></entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-vdev">vdev Types</entry>
-
-	      <entry>A zpool is made up of one or more vdevs, which
-		themselves can be a single disk or a group of disks,
-		in the case of a RAID transform.  When multiple vdevs
-		are used, ZFS spreads data across the vdevs to
-		increase performance and maximize usable space.
-		<itemizedlist>
-		  <listitem>
-		    <para id="filesystems-zfs-term-vdev-disk">
-		      <emphasis>Disk</emphasis> - The most basic type
-		      of vdev is a standard block device.  This can be
-		      an entire disk (such as
-		      <devicename><replaceable>/dev/ada0</replaceable></devicename>
-		      or
-		      <devicename><replaceable>/dev/da0</replaceable></devicename>)
-		      or a partition
-		      (<devicename><replaceable>/dev/ada0p3</replaceable></devicename>).
-		      Contrary to the Solaris documentation, on &os;
-		      there is no performance penalty for using a
-		      partition rather than an entire disk.</para>
-		  </listitem>
-
-		  <listitem>
-		    <para id="filesystems-zfs-term-vdev-file">
-		      <emphasis>File</emphasis> - In addition to
-		      disks, ZFS pools can be backed by regular files;
-		      this is especially useful for testing and
-		      experimentation.  Use the full path to the file
-		      as the device path in the zpool create command.
-		      All vdevs must be at least 128 MB in
-		      size.</para>
-		  </listitem>
-
-		  <listitem>
-		    <para id="filesystems-zfs-term-vdev-mirror">
-		      <emphasis>Mirror</emphasis> - When creating a
-		      mirror, specify the <literal>mirror</literal>
-		      keyword followed by the list of member devices
-		      for the mirror.  A mirror consists of two or
-		      more devices; all data will be written to all
-		      member devices.  A mirror vdev will only hold as
-		      much data as its smallest member.  A mirror vdev
-		      can withstand the failure of all but one of its
-		      members without losing any data.</para>
-
-		    <note>
-		      <para>
-			A regular single disk vdev can be
-			upgraded to a mirror vdev at any time using
-			the <command>zpool</command> <link
-			linkend="filesystems-zfs-zpool-attach">attach</link>
-			command.</para>
-		    </note>
-		  </listitem>
-
-		  <listitem>
-		    <para id="filesystems-zfs-term-vdev-raidz">
-		      <emphasis><acronym>RAID</acronym>-Z</emphasis> -
-		      ZFS implements RAID-Z, a variation on standard
-		      RAID-5 that offers better distribution of parity
-		      and eliminates the "RAID-5 write hole" in which
-		      the data and parity information become
-		      inconsistent after an unexpected restart.  ZFS
-		      supports 3 levels of RAID-Z which provide
-		      varying levels of redundancy in exchange for
-		      decreasing levels of usable storage.  The types
-		      are named RAID-Z1 through RAID-Z3 based on the
-		      number of parity devices in the array and the
-		      number of disks that the pool can operate
-		      without.</para>
-
-		    <para>In a RAID-Z1 configuration with 4 disks,
-		      each 1 TB, usable storage will be 3 TB
-		      and the pool will still be able to operate in
-		      degraded mode with one faulted disk.  If an
-		      additional disk goes offline before the faulted
-		      disk is replaced and resilvered, all data in the
-		      pool can be lost.</para>
-
-		    <para>In a RAID-Z3 configuration with 8 disks of
-		      1 TB, the volume would provide 5 TB of
-		      usable space and still be able to operate with
-		      three faulted disks.  Sun recommends no more
-		      than 9 disks in a single vdev.  If the
-		      configuration has more disks, it is recommended
-		      to divide them into separate vdevs and the pool
-		      data will be striped across them.</para>
-
-		    <para>A configuration of 2 RAID-Z2 vdevs
-		      consisting of 8 disks each would create
-		      something similar to a RAID 60 array.  A RAID-Z
-		      group's storage capacity is approximately the
-		      size of the smallest disk, multiplied by the
-		      number of non-parity disks.  4x 1 TB disks
-		      in Z1 has an effective size of approximately
-		      3 TB, and an 8x 1 TB array in Z3 will
-		      yield 5 TB of usable space.</para>
-		  </listitem>
-
-		  <listitem>
-		    <para id="filesystems-zfs-term-vdev-spare">
-		      <emphasis>Spare</emphasis> - ZFS has a special
-		      pseudo-vdev type for keeping track of available
-		      hot spares.  Note that installed hot spares are
-		      not deployed automatically; they must manually
-		      be configured to replace the failed device using
-		      the zpool replace command.</para>
-		  </listitem>
-
-		  <listitem>
-		    <para id="filesystems-zfs-term-vdev-log">
-		      <emphasis>Log</emphasis> - ZFS Log Devices, also
-		      known as ZFS Intent Log (<acronym>ZIL</acronym>),
-		      move the intent log from the regular pool
-		      devices to a dedicated device.  The ZIL
-		      accelerates synchronous transactions by using
-		      storage devices (such as
-		      <acronym>SSD</acronym>s) that are faster
-		      compared to those used for the main pool.  When
-		      data is being written and the application
-		      requests a guarantee that the data has been
-		      safely stored, the data is written to the faster
-		      ZIL storage, then later flushed out to the
-		      regular disks, greatly reducing the latency of
-		      synchronous writes.  Log devices can be
-		      mirrored, but RAID-Z is not supported.  When
-		      specifying multiple log devices, writes will be
-		      load balanced across all devices.</para>
-		  </listitem>
-
-		  <listitem>
-		    <para id="filesystems-zfs-term-vdev-cache">
-		      <emphasis>Cache</emphasis> - Adding a cache vdev
-		      to a zpool will add the storage of the cache to
-		      the L2ARC.  Cache devices cannot be mirrored.
-		      Since a cache device only stores additional
-		      copies of existing data, there is no risk of
-		      data loss.</para>
-		  </listitem>
-		</itemizedlist></entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-arc">Adaptive Replacement
-		Cache (<acronym>ARC</acronym>)</entry>
-
-	      <entry>ZFS uses an Adaptive Replacement Cache
-		(<acronym>ARC</acronym>), rather than a more
-		traditional Least Recently Used
-		(<acronym>LRU</acronym>) cache.  An
-		<acronym>LRU</acronym> cache is a simple list of items
-		in the cache sorted by when each object was most
-		recently used; new items are added to the top of the
-		list and once the cache is full items from the bottom
-		of the list are evicted to make room for more active
-		objects.  An <acronym>ARC</acronym> consists of four
-		lists: the Most Recently Used (<acronym>MRU</acronym>)
-		and Most Frequently Used (<acronym>MFU</acronym>)
-		objects, plus a ghost list for each.  These ghost
-		lists track recently evicted objects to prevent them
-		from being added back to the cache.  This increases the
-		cache hit ratio by avoiding objects that have a
-		history of only being used occasionally.  Another
-		advantage of using both an <acronym>MRU</acronym> and
-		<acronym>MFU</acronym> is that scanning an entire
-		file system would normally evict all data from an
-		<acronym>MRU</acronym> or <acronym>LRU</acronym> cache
-		in favor of this freshly accessed content.  In the
-		case of <acronym>ZFS</acronym>, since there is also an
-		<acronym>MFU</acronym> that only tracks the most
-		frequently used objects, the cache of the most
-		commonly accessed blocks remains.</entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-l2arc">L2ARC</entry>
-
-	      <entry>The <acronym>L2ARC</acronym> is the second level
-		of the <acronym>ZFS</acronym> caching system.  The
-		primary <acronym>ARC</acronym> is stored in
-		<acronym>RAM</acronym>; however, since the amount of
-		available <acronym>RAM</acronym> is often limited,
-		<acronym>ZFS</acronym> can also make use of <link
-		linkend="filesystems-zfs-term-vdev-cache">cache</link>
-		vdevs.  Solid State Disks (<acronym>SSD</acronym>s)
-		are often used as these cache devices due to their
-		higher speed and lower latency compared to traditional
-		spinning disks.  An L2ARC is entirely optional, but
-		having one will significantly increase read speeds for
-		files that are cached on the <acronym>SSD</acronym>
-		instead of having to be read from the regular spinning
-		disks.  The L2ARC can also speed up <link
-		linkend="filesystems-zfs-term-deduplication">deduplication</link>
-		since a <acronym>DDT</acronym> that does not fit in
-		<acronym>RAM</acronym> but does fit in the
-		<acronym>L2ARC</acronym> will be much faster than if
-		the <acronym>DDT</acronym> had to be read from disk.
-		The rate at which data is added to the cache devices
-		is limited to prevent prematurely wearing out the
-		<acronym>SSD</acronym> with too many writes.  Until
-		the cache is full (the first block has been evicted to
-		make room), writing to the <acronym>L2ARC</acronym> is
-		limited to the sum of the write limit and the boost
-		limit, and after that is limited to the write limit.  A
-		pair of sysctl values control these rate limits:
-		<literal>vfs.zfs.l2arc_write_max</literal> controls
-		how many bytes are written to the cache per second,
-		while <literal>vfs.zfs.l2arc_write_boost</literal>
-		adds to this limit during the "Turbo Warmup Phase"
-		(Write Boost).</entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-cow">Copy-On-Write</entry>
-
-	      <entry>Unlike a traditional file system, when data is
-		overwritten on ZFS the new data is written to a
-		different block rather than overwriting the old data
-		in place.  Only once this write is complete is the
-		metadata then updated to point to the new location of
-		the data.  This means that in the event of a shorn
-		write (a system crash or power loss in the middle of
-		writing a file) the entire original contents of the
-		file are still available and the incomplete write is
-		discarded.  This also means that ZFS does not require
-		a fsck after an unexpected shutdown.</entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-dataset">Dataset</entry>
-
-	      <entry>Dataset is the generic term for a ZFS file
-		system, volume, snapshot or clone.  Each dataset will
-		have a unique name in the format
-		<literal>poolname/path@snapshot</literal>.  The root
-		of the pool is technically a dataset as well.  Child
-		datasets are named hierarchically like directories;
-		for example, in <literal>mypool/home</literal>, the home
-		dataset is a child of mypool and inherits properties
-		from it.  This can be expanded further by creating
-		<literal>mypool/home/user</literal>.  This grandchild
-		dataset will inherit properties from the parent and
-		grandparent.  It is also possible to set properties
-		on a child to override the defaults inherited from the
-		parents and grandparents.  ZFS also allows
-		administration of datasets and their children to be
-		delegated.</entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-volum">Volume</entry>
-
-	      <entry>In addition to regular file system datasets,
-		ZFS can also create volumes, which are block devices.
-		Volumes have many of the same features, including
-		copy-on-write, snapshots, clones and
-		checksumming.  Volumes can be useful for running other
-		file system formats on top of ZFS, such as UFS, for
-		virtualization, or for exporting
-		<acronym>iSCSI</acronym> extents.</entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-snapshot">Snapshot</entry>
-
-	      <entry>The <link
-		  linkend="filesystems-zfs-term-cow">copy-on-write</link>
-		design of ZFS allows for nearly instantaneous
-		consistent snapshots with arbitrary names.  After
-		taking a snapshot of a dataset (or a recursive
-		snapshot of a parent dataset that will include all
-		child datasets), new data is written to new blocks (as
-		described above); however, the old blocks are not
-		reclaimed as free space.  There are then two versions
-		of the file system, the snapshot (what the file system
-		looked like before) and the live file system; however,
-		no additional space is used.  As new data is written
-		to the live file system, new blocks are allocated to
-		store this data.  The apparent size of the snapshot
-		will grow as the blocks are no longer used in the live
-		file system, but only in the snapshot.  These
-		snapshots can be mounted (read only) to allow for the
-		recovery of previous versions of files.  It is also
-		possible to <link
-		linkend="filesystems-zfs-zfs-snapshot">rollback</link>
-		a live file system to a specific snapshot, undoing any
-		changes that took place after the snapshot was taken.
-		Each block in the zpool has a reference counter which
-		indicates how many snapshots, clones, datasets or
-		volumes make use of that block.  As files and
-		snapshots are deleted, the reference count is
-		decremented; once a block is no longer referenced, it
-		is reclaimed as free space.  Snapshots can also be
-		marked with a <link
-		linkend="filesystems-zfs-zfs-snapshot">hold</link>;
-		once a snapshot is held, any attempt to destroy it
-		will return an EBUSY error.  Each snapshot can have
-		multiple holds, each with a unique name.  The <link
-		linkend="filesystems-zfs-zfs-snapshot">release</link>
-		command removes the hold so the snapshot can then be
-		deleted.  Snapshots can be taken on volumes; however,
-		they can only be cloned or rolled back, not mounted
-		independently.</entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-clone">Clone</entry>
-
-	      <entry>Snapshots can also be cloned; a clone is a
-		writable version of a snapshot, allowing the file
-		system to be forked as a new dataset.  As with a
-		snapshot, a clone initially consumes no additional
-		space.  Only as new data is written to a clone and new
-		blocks are allocated does the apparent size of the
-		clone grow.  As blocks are overwritten in the cloned
-		file system or volume, the reference count on the
-		previous block is decremented.  The snapshot upon
-		which a clone is based cannot be deleted because the
-		clone is dependent upon it (the snapshot is the
-		parent, and the clone is the child).  Clones can be
-		<literal>promoted</literal>, reversing this
-		dependency, making the clone the parent and the
-		previous parent the child.  This operation requires no
-		additional space; however, it will change the way the
-		used space is accounted.</entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-checksum">Checksum</entry>
-
-	      <entry>Every block that is allocated is also checksummed
-		(the algorithm used is a per-dataset property; see
-		<command>zfs set</command>).  ZFS transparently validates the
-		checksum of each block as it is read, allowing ZFS to
-		detect silent corruption.  If the data that is read
-		does not match the expected checksum, ZFS will attempt
-		to recover the data from any available redundancy
-		(mirrors, RAID-Z).  The validation of all checksums can
-		be triggered with the <link
-		linkend="filesystems-zfs-term-scrub">scrub</link>
-		command.  The available checksum algorithms include:
-		<itemizedlist>
-		  <listitem><para>fletcher2</para></listitem>
-		  <listitem><para>fletcher4</para></listitem>
-		  <listitem><para>sha256</para></listitem>
-		</itemizedlist> The fletcher algorithms are faster,
-		but sha256 is a strong cryptographic hash and has a
-		much lower chance of collisions at the cost of some
-		performance.  Checksums can be disabled, but it is
-		inadvisable.</entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-compression">Compression</entry>
-
-	      <entry>Each dataset in ZFS has a compression property,
-		which defaults to off.  This property can be set to
-		one of a number of compression algorithms, which will
-		cause all new data that is written to this dataset to
-		be compressed as it is written.  In addition to the
-		reduction in disk usage, this can also increase read
-		and write throughput, as only the smaller compressed
-		version of the file needs to be read or
-		written.<note>
-		  <para>LZ4 compression is only available starting
-		    with &os; 9.2.</para>
-		</note></entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-deduplication">Deduplication</entry>
-
-	      <entry>ZFS has the ability to detect duplicate blocks of
-		data as they are written (thanks to the checksumming
-		feature).  If deduplication is enabled, instead of
-		writing the block a second time, the reference count
-		of the existing block will be increased, saving
-		storage space.  In order to do this, ZFS keeps a
-		deduplication table (<acronym>DDT</acronym>) in
-		memory, containing the list of unique checksums, the
-		location of that block and a reference count.  When
-		new data is written, the checksum is calculated and
-		compared to the list.  If a match is found, the data
-		is considered to be a duplicate.  When deduplication
-		is enabled, the checksum algorithm is changed to
-		<acronym>SHA256</acronym> to provide a secure
-		cryptographic hash.  ZFS deduplication is tunable; if
-		dedup is on, then a matching checksum is assumed to
-		mean that the data is identical.  If dedup is set to
-		verify, then the data in the two blocks will be
-		checked byte-for-byte to ensure it is actually
-		identical, and if it is not, the hash collision will be
-		noted by ZFS and the two blocks will be stored
-		separately.  Due to the nature of the
-		<acronym>DDT</acronym>, having to store the hash of
-		each unique block, it consumes a very large amount of
-		memory (a general rule of thumb is 5-6 GB of RAM
-		per 1 TB of deduplicated data).  In situations
-		where it is not practical to have enough
-		<acronym>RAM</acronym> to keep the entire DDT in
-		memory, performance will suffer greatly as the DDT
-		will need to be read from disk before each new block
-		is written.  Deduplication can make use of the L2ARC
-		to store the DDT, providing a middle ground between
-		fast system memory and slower disks.  It is advisable
-		to consider using ZFS compression instead, which often
-		provides nearly as much space savings without the
-		additional memory requirement.</entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-scrub">Scrub</entry>
-
-	      <entry>In place of a consistency check like fsck, ZFS
-		has the <literal>scrub</literal> command, which reads
-		all data blocks stored on the pool and verifies their
-		checksums against the known good checksums stored
-		in the metadata.  This periodic check of all the data
-		stored on the pool ensures the recovery of any
-		corrupted blocks before they are needed.  A scrub is
-		not required after an unclean shutdown, but it is
-		recommended to run a scrub at least once each
-		quarter.  ZFS compares the checksum for each block as
-		it is read in the normal course of use, but a scrub
-		operation makes sure even infrequently used blocks are
-		checked for silent corruption.</entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-quota">Dataset Quota</entry>
-
-	      <entry>ZFS provides very fast and accurate dataset, user
-		and group space accounting in addition to quotas and
-		space reservations.  This gives the administrator
-		fine-grained control over how space is allocated and allows
-		critical file systems to reserve space to ensure other
-		file systems do not take all of the free space.
-		<para>ZFS supports different types of quotas: the
-		  dataset quota, the <link
-		  linkend="filesystems-zfs-term-refquota">reference
-		  quota (<acronym>refquota</acronym>)</link>, the
-		  <link linkend="filesystems-zfs-term-userquota">user
-		  quota</link>, and the <link
-		  linkend="filesystems-zfs-term-groupquota">
-		    group quota</link>.</para>
-
-		<para>Quotas limit the amount of space that a dataset
-		  and all of its descendants (snapshots of the
-		  dataset, child datasets and the snapshots of those
-		  datasets) can consume.</para>
-
-		<note>
-		  <para>Quotas cannot be set on volumes, as the
-		    <literal>volsize</literal> property acts as an
-		    implicit quota.</para>
-		</note></entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-refquota">Reference
-		Quota</entry>
-
-	      <entry>A reference quota limits the amount of space a
-		dataset can consume by enforcing a hard limit on the
-		space used.  However, this hard limit includes only
-		space that the dataset references and does not include
-		space used by descendants, such as file systems or
-		snapshots.</entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-userquota">User
-		Quota</entry>
-
-	      <entry>User quotas are useful to limit the amount of
-		space that can be used by the specified user.</entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-groupquota">Group
-		Quota</entry>
-
-	      <entry>The group quota limits the amount of space that a
-		specified group can consume.</entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-reservation">Dataset
-		Reservation</entry>
-
-	      <entry>The <literal>reservation</literal> property makes
-		it possible to guarantee a minimum amount of space
-		for the use of a specific dataset and its descendants.
-		This means that if a 10 GB reservation is set on
-		<filename>storage/home/bob</filename> and another
-		dataset tries to use all of the free space, at least
-		10 GB of space remains reserved for this dataset.  If
-		a snapshot is taken of
-		<filename>storage/home/bob</filename>, the space used
-		by that snapshot is counted against the reservation.
-		The <link
-		linkend="filesystems-zfs-term-refreservation">refreservation</link>
-		property works in a similar way, except it
-		<emphasis>excludes</emphasis> descendants, such as
-		snapshots.
-		<para>Reservations of any sort are useful
-		  in many situations, such as planning and testing the
-		  suitability of disk space allocation in a new
-		  system, or ensuring that enough space is available
-		  on file systems for audio logs or system recovery
-		  procedures and files.</para></entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-refreservation">Reference
-		Reservation</entry>
-
-	      <entry>The <literal>refreservation</literal> property
-		makes it possible to guarantee a minimum amount of
-		space for the use of a specific dataset
-		<emphasis>excluding</emphasis> its descendants.  This
-		means that if a 10 GB reservation is set on
-		<filename>storage/home/bob</filename> and another
-		dataset tries to use all of the free space, at least
-		10 GB of space remains reserved for this dataset.  In
-		contrast to a regular <link
-		linkend="filesystems-zfs-term-reservation">reservation</link>,
-		space used by snapshots and descendant datasets is not
-		counted against the reservation.  As an example, if a
-		snapshot were taken of
-		<filename>storage/home/bob</filename>, enough disk
-		space would have to exist outside of the
-		<literal>refreservation</literal> amount for the
-		operation to succeed because descendants of the main
-		dataset are not counted by the
-		<literal>refreservation</literal> amount and so do not
-		encroach on the reserved space.</entry>
-	    </row>
-
-	    <row>
-	      <entry valign="top"
-		id="filesystems-zfs-term-resilver">Resilver</entry>
-
-	      <entry>When a disk fails and must be replaced, the new
-		disk must be filled with the data that was lost.  This
-		process of calculating and writing the missing data
-		(using the parity information distributed across the
-		remaining drives) to the new drive is called
-		resilvering.</entry>
-	    </row>
-
-	  </tbody>
-	</tgroup>
-      </informaltable>
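-
-      <para>As a brief illustration of the properties described above
-	(the pool and dataset names here are hypothetical), the
-	current checksum, compression, and dedup settings of a dataset
-	can be read with <command>zfs get</command>:</para>
-
-      <screen>&prompt.root; <userinput>zfs get checksum,compression,dedup <replaceable>mypool/mydataset</replaceable></userinput></screen>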
-    </sect2>
-
-    <sect2 id="filesystems-zfs-differences">
-      <title>What Makes ZFS Different</title>
-
-      <para>ZFS is significantly different from any previous file
-	system because it is more than just a file
-	system.  ZFS combines the traditionally separate roles of
-	volume manager and file system, which provides unique
-	advantages because the file system is now aware of the
-	underlying structure of the disks.  Traditional file systems
-	could only be created on a single disk at a time; if there
-	were two disks, then two separate file systems had to
-	be created.  In a traditional hardware <acronym>RAID</acronym>
-	configuration, this problem was worked around by presenting
-	the operating system with a single logical disk made up of
-	the space provided by a number of disks, on top of which the
-	operating system placed its file system.  Even in the case of
-	software RAID solutions like <acronym>GEOM</acronym>, the UFS
-	file system living on top of the <acronym>RAID</acronym>
-	transform believed that it was dealing with a single device.
-	ZFS's combination of the volume manager and the file system
-	solves this and allows the creation of many file systems all
-	sharing a pool of available storage.  One of the biggest
-	advantages to ZFS's awareness of the physical layout of the
-	disks is that ZFS can grow the existing file systems
-	automatically when additional disks are added to the pool.
-	This new space is then made available to all of the file
-	systems.  ZFS also has a number of different properties that
-	can be applied to each file system, giving many advantages
-	to creating a number of different file systems and datasets
-	rather than a single monolithic file system.</para>
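-
-      <para>For example, on a hypothetical pool named
-	<replaceable>mypool</replaceable>, an additional disk such as
-	<devicename><replaceable>da3</replaceable></devicename> could
-	be added as a new vdev, and the new space would immediately
-	become available to every file system in the pool:</para>
-
-      <screen>&prompt.root; <userinput>zpool add <replaceable>mypool</replaceable> <replaceable>da3</replaceable></userinput></screen>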
-    </sect2>
-
-    <sect2 id="filesystems-zfs-quickstart">
-      <title><acronym>ZFS</acronym> Quick Start Guide</title>
-
-      <para>There is a startup mechanism that allows &os; to
-	mount <acronym>ZFS</acronym> pools during system
-	initialization.  To enable it, issue the following
-	commands:</para>
-
-      <screen>&prompt.root; <userinput>echo 'zfs_enable="YES"' >> /etc/rc.conf</userinput>
-&prompt.root; <userinput>service zfs start</userinput></screen>
-
-      <para>The examples in this section assume three
-	<acronym>SCSI</acronym> disks with the device names
-	<devicename><replaceable>da0</replaceable></devicename>,
-	<devicename><replaceable>da1</replaceable></devicename>,
-	and <devicename><replaceable>da2</replaceable></devicename>.
-	Users of <acronym>SATA</acronym> hardware should instead use
-	<devicename><replaceable>ada</replaceable></devicename>
-	device names.</para>
-
-      <sect3>
-	<title>Single Disk Pool</title>
-
-	<para>To create a simple, non-redundant <acronym>ZFS</acronym>
-	  pool using a single disk device, use
-	  <command>zpool</command>:</para>
-
-	<screen>&prompt.root; <userinput>zpool create <replaceable>example</replaceable> <replaceable>/dev/da0</replaceable></userinput></screen>
-
-	<para>To view the new pool, review the output of
-	  <command>df</command>:</para>
-
-	<screen>&prompt.root; <userinput>df</userinput>
-Filesystem  1K-blocks    Used    Avail Capacity  Mounted on
-/dev/ad0s1a   2026030  235230  1628718    13%    /
-devfs               1       1        0   100%    /dev
-/dev/ad0s1d  54098308 1032846 48737598     2%    /usr
-example      17547136       0 17547136     0%    /example</screen>
-
-	<para>This output shows that the <literal>example</literal>
-	  pool has been created and <emphasis>mounted</emphasis>.  It
-	  is now accessible as a file system.  Files may be created
-	  on it and users can browse it, as seen in the following
-	  example:</para>
-
-	<screen>&prompt.root; <userinput>cd /example</userinput>
-&prompt.root; <userinput>ls</userinput>
-&prompt.root; <userinput>touch testfile</userinput>
-&prompt.root; <userinput>ls -al</userinput>
-total 4
-drwxr-xr-x   2 root  wheel    3 Aug 29 23:15 .
-drwxr-xr-x  21 root  wheel  512 Aug 29 23:12 ..
--rw-r--r--   1 root  wheel    0 Aug 29 23:15 testfile</screen>
-
-	<para>However, this pool is not taking advantage of any
-	  <acronym>ZFS</acronym> features.  To create a dataset on
-	  this pool with compression enabled:</para>
-
-	<screen>&prompt.root; <userinput>zfs create example/compressed</userinput>
-&prompt.root; <userinput>zfs set compression=gzip example/compressed</userinput></screen>
-
-	<para>The <literal>example/compressed</literal> dataset is now
-	  a <acronym>ZFS</acronym> compressed file system.  Try
-	  copying some large files to <filename
-	    class="directory">/example/compressed</filename>.</para>
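-
-	<para>The space saved can be estimated afterwards by reading
-	  the <literal>compressratio</literal> property of the
-	  dataset; the exact value will vary with the data
-	  stored:</para>
-
-	<screen>&prompt.root; <userinput>zfs get compressratio example/compressed</userinput></screen>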
-
-	<para>Compression can be disabled with:</para>
-
-	<screen>&prompt.root; <userinput>zfs set compression=off example/compressed</userinput></screen>
-
-	<para>To unmount a file system, issue the following command
-	  and then verify by using <command>df</command>:</para>
-
-	<screen>&prompt.root; <userinput>zfs umount example/compressed</userinput>
-&prompt.root; <userinput>df</userinput>
-Filesystem  1K-blocks    Used    Avail Capacity  Mounted on
-/dev/ad0s1a   2026030  235232  1628716    13%    /
-devfs               1       1        0   100%    /dev
-/dev/ad0s1d  54098308 1032864 48737580     2%    /usr
-example      17547008       0 17547008     0%    /example</screen>
-
-	<para>To re-mount the file system and make it accessible
-	  again, issue the following command, then verify with
-	  <command>df</command>:</para>
-
-	<screen>&prompt.root; <userinput>zfs mount example/compressed</userinput>
-&prompt.root; <userinput>df</userinput>
-Filesystem         1K-blocks    Used    Avail Capacity  Mounted on
-/dev/ad0s1a          2026030  235234  1628714    13%    /
-devfs                      1       1        0   100%    /dev
-/dev/ad0s1d         54098308 1032864 48737580     2%    /usr
-example             17547008       0 17547008     0%    /example
-example/compressed  17547008       0 17547008     0%    /example/compressed</screen>
-
-	<para>The pool and file system may also be observed by viewing
-	  the output from <command>mount</command>:</para>
-
-	<screen>&prompt.root; <userinput>mount</userinput>
-/dev/ad0s1a on / (ufs, local)
-devfs on /dev (devfs, local)
-/dev/ad0s1d on /usr (ufs, local, soft-updates)
-example on /example (zfs, local)
-example/data on /example/data (zfs, local)
-example/compressed on /example/compressed (zfs, local)</screen>
-
-	<para><acronym>ZFS</acronym> datasets, after creation, may be
-	  used like any file system.  However, many other features
-	  are available which can be set on a per-dataset basis.  In
-	  the following example, a new file system,
-	  <literal>data</literal>, is created.  Important files will be
-	  stored here, so the file system is set to keep two copies of
-	  each data block:</para>
-
-	<screen>&prompt.root; <userinput>zfs create example/data</userinput>
-&prompt.root; <userinput>zfs set copies=2 example/data</userinput></screen>
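-
-	<para>If desired, the new setting can be confirmed by reading
-	  the property back:</para>
-
-	<screen>&prompt.root; <userinput>zfs get copies example/data</userinput></screen>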
-
-	<para>It is now possible to see the data and space utilization
-	  by issuing <command>df</command>:</para>
-
-	<screen>&prompt.root; <userinput>df</userinput>
-Filesystem         1K-blocks    Used    Avail Capacity  Mounted on
-/dev/ad0s1a          2026030  235234  1628714    13%    /
-devfs                      1       1        0   100%    /dev
-/dev/ad0s1d         54098308 1032864 48737580     2%    /usr
-example             17547008       0 17547008     0%    /example
-example/compressed  17547008       0 17547008     0%    /example/compressed
-example/data        17547008       0 17547008     0%    /example/data</screen>
-
-	<para>Notice that each file system on the pool has the same
-	  amount of available space.  This is the reason for using
-	  <command>df</command> in these examples, to show that the
-	  file systems use only the amount of space they need and all
-	  draw from the same pool.  The <acronym>ZFS</acronym> file
-	  system does away with concepts such as volumes and
-	  partitions, and allows for several file systems to occupy
-	  the same pool.</para>
-
-	<para>To destroy the file systems and then destroy the pool as
-	  they are no longer needed:</para>
-
-	<screen>&prompt.root; <userinput>zfs destroy example/compressed</userinput>
-&prompt.root; <userinput>zfs destroy example/data</userinput>
-&prompt.root; <userinput>zpool destroy example</userinput></screen>
-
-      </sect3>
-
-      <sect3>
-	<title><acronym>ZFS</acronym> RAID-Z</title>
-
-	<para>There is no way to prevent a disk from failing.  One
-	  method of avoiding data loss due to a failed hard disk is to
-	  implement <acronym>RAID</acronym>.  <acronym>ZFS</acronym>
-	  supports this feature in its pool design.  RAID-Z pools
-	  require 3 or more disks but yield more usable space than
-	  mirrored pools.</para>
-
-	<para>To create a <acronym>RAID</acronym>-Z pool, issue the
-	  following command and specify the disks to add to the
-	  pool:</para>
-
-	<screen>&prompt.root; <userinput>zpool create storage raidz da0 da1 da2</userinput></screen>
-
-	<note>
-	  <para>&sun; recommends that the number of devices used in
-	    a <acronym>RAID</acronym>-Z configuration is between
-	    three and nine.  For environments requiring a single pool
-	    consisting of 10 disks or more, consider breaking it up
-	    into smaller <acronym>RAID</acronym>-Z groups.  If only
-	    two disks are available and redundancy is a requirement,
-	    consider using a <acronym>ZFS</acronym> mirror.  Refer to
-	    &man.zpool.8; for more details.</para>
-	</note>
-
-	<para>This command creates the <literal>storage</literal>
-	  zpool.  This may be verified using &man.mount.8; and
-	  &man.df.1;.  The next command creates a new file system,
-	  <literal>home</literal>, in the pool:</para>
-
-	<screen>&prompt.root; <userinput>zfs create storage/home</userinput></screen>
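-
-	<para>The health and layout of the pool can be inspected at
-	  any time with:</para>
-
-	<screen>&prompt.root; <userinput>zpool status storage</userinput></screen>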
-
-	<para>It is now possible to enable compression and keep extra
-	  copies of directories and files using the following
-	  commands:</para>
-
-	<screen>&prompt.root; <userinput>zfs set copies=2 storage/home</userinput>
-&prompt.root; <userinput>zfs set compression=gzip storage/home</userinput></screen>
-
-	<para>To make this the new home directory for users, copy the
-	  user data to this directory, and create the appropriate
-	  symbolic links:</para>
-
-	<screen>&prompt.root; <userinput>cp -rp /home/* /storage/home</userinput>
-&prompt.root; <userinput>rm -rf /home /usr/home</userinput>
-&prompt.root; <userinput>ln -s /storage/home /home</userinput>
-&prompt.root; <userinput>ln -s /storage/home /usr/home</userinput></screen>
-
-	<para>Users should now have their data stored on the freshly
-	  created <filename
-	    class="directory">/storage/home</filename>.  Test by
-	  adding a new user and logging in as that user.</para>
-
-	<para>Try creating a snapshot which may be rolled back
-	  later:</para>
-
-	<screen>&prompt.root; <userinput>zfs snapshot storage/home@08-30-08</userinput></screen>
-
-	<para>Note that the snapshot option will only capture a real
-	  file system, not a home directory or a file.  The
-	  <literal>@</literal> character is a delimiter between
-	  the file system or volume name and the snapshot name.  When
-	  a user's home directory is accidentally destroyed, restore
-	  it with:</para>
-
-	<screen>&prompt.root; <userinput>zfs rollback storage/home@08-30-08</userinput></screen>
-
-	<para>To get a list of all available snapshots, run
-	  <command>ls</command> in the file system's
-	  <filename class="directory">.zfs/snapshot</filename>
-	  directory.  For example, to see the previously taken
-	  snapshot:</para>
-
-	<screen>&prompt.root; <userinput>ls /storage/home/.zfs/snapshot</userinput></screen>
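-
-	<para>Alternatively, a list of all snapshots in the pool can
-	  be displayed with:</para>
-
-	<screen>&prompt.root; <userinput>zfs list -t snapshot</userinput></screen>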
-
-	<para>It is possible to write a script to perform regular
-	  snapshots on user data.  However, over time, snapshots
-	  may consume a great deal of disk space.  The previous
-	  snapshot may be removed using the following command:</para>
-
-	<screen>&prompt.root; <userinput>zfs destroy storage/home@08-30-08</userinput></screen>
-
-	<para>After testing, <filename
-	    class="directory">/storage/home</filename> can be made the
-	  real <filename class="directory">/home</filename> using
-	  this command:</para>
-
-	<screen>&prompt.root; <userinput>zfs set mountpoint=/home storage/home</userinput></screen>
-
-	<para>Run <command>df</command> and
-	  <command>mount</command> to confirm that the system now
-	  treats the file system as the real
-	  <filename class="directory">/home</filename>:</para>
-
-	<screen>&prompt.root; <userinput>mount</userinput>
-/dev/ad0s1a on / (ufs, local)
-devfs on /dev (devfs, local)
-/dev/ad0s1d on /usr (ufs, local, soft-updates)
-storage on /storage (zfs, local)
-storage/home on /home (zfs, local)
-&prompt.root; <userinput>df</userinput>
-Filesystem   1K-blocks    Used    Avail Capacity  Mounted on
-/dev/ad0s1a    2026030  235240  1628708    13%    /
-devfs                1       1        0   100%    /dev
-/dev/ad0s1d   54098308 1032826 48737618     2%    /usr
-storage       26320512       0 26320512     0%    /storage
-storage/home  26320512       0 26320512     0%    /home</screen>
-
-	<para>This completes the <acronym>RAID</acronym>-Z
-	  configuration.  To get status updates on the created file
-	  systems as part of the nightly &man.periodic.8; runs, issue
-	  the following command:</para>
-
-	<screen>&prompt.root; <userinput>echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf</userinput></screen>
-      </sect3>
-
-      <sect3>
-	<title>Recovering <acronym>RAID</acronym>-Z</title>
-
-	<para>Every software <acronym>RAID</acronym> has a method of
-	  monitoring its <literal>state</literal>.  The status of
-	  <acronym>RAID</acronym>-Z devices may be viewed with the
-	  following command:</para>
-
-	<screen>&prompt.root; <userinput>zpool status -x</userinput></screen>
-

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***

