svn commit: r45602 - in head/en_US.ISO8859-1/books/handbook: . bsdinstall disks filesystems zfs
Warren Block
wblock at FreeBSD.org
Sat Sep 13 02:08:35 UTC 2014
Author: wblock
Date: Sat Sep 13 02:08:33 2014
New Revision: 45602
URL: http://svnweb.freebsd.org/changeset/doc/45602
Log:
Finally commit the rewritten ZFS section as a new chapter. This greatly
expands the original content, mostly due to the work of Allan Jude.
Added:
head/en_US.ISO8859-1/books/handbook/zfs/
head/en_US.ISO8859-1/books/handbook/zfs/chapter.xml (contents, props changed)
Modified:
head/en_US.ISO8859-1/books/handbook/Makefile
head/en_US.ISO8859-1/books/handbook/book.xml
head/en_US.ISO8859-1/books/handbook/bsdinstall/chapter.xml
head/en_US.ISO8859-1/books/handbook/chapters.ent
head/en_US.ISO8859-1/books/handbook/disks/chapter.xml
head/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml
Modified: head/en_US.ISO8859-1/books/handbook/Makefile
==============================================================================
--- head/en_US.ISO8859-1/books/handbook/Makefile Fri Sep 12 23:42:14 2014 (r45601)
+++ head/en_US.ISO8859-1/books/handbook/Makefile Sat Sep 13 02:08:33 2014 (r45602)
@@ -245,6 +245,7 @@ SRCS+= desktop/chapter.xml
SRCS+= disks/chapter.xml
SRCS+= eresources/chapter.xml
SRCS+= firewalls/chapter.xml
+SRCS+= zfs/chapter.xml
SRCS+= filesystems/chapter.xml
SRCS+= geom/chapter.xml
SRCS+= install/chapter.xml
Modified: head/en_US.ISO8859-1/books/handbook/book.xml
==============================================================================
--- head/en_US.ISO8859-1/books/handbook/book.xml Fri Sep 12 23:42:14 2014 (r45601)
+++ head/en_US.ISO8859-1/books/handbook/book.xml Sat Sep 13 02:08:33 2014 (r45602)
@@ -237,6 +237,7 @@
&chap.audit;
&chap.disks;
&chap.geom;
+ &chap.zfs;
&chap.filesystems;
&chap.virtualization;
&chap.l10n;
Modified: head/en_US.ISO8859-1/books/handbook/bsdinstall/chapter.xml
==============================================================================
--- head/en_US.ISO8859-1/books/handbook/bsdinstall/chapter.xml Fri Sep 12 23:42:14 2014 (r45601)
+++ head/en_US.ISO8859-1/books/handbook/bsdinstall/chapter.xml Sat Sep 13 02:08:33 2014 (r45602)
@@ -1445,7 +1445,7 @@ Ethernet address 0:3:ba:b:92:d4, Host ID
<para>Another partition type worth noting is
<literal>freebsd-zfs</literal>, used for partitions that will
contain a &os; <acronym>ZFS</acronym> file system (<xref
- linkend="filesystems-zfs"/>). Refer to &man.gpart.8; for
+ linkend="zfs"/>). Refer to &man.gpart.8; for
descriptions of the available <acronym>GPT</acronym> partition
types.</para>
Modified: head/en_US.ISO8859-1/books/handbook/chapters.ent
==============================================================================
--- head/en_US.ISO8859-1/books/handbook/chapters.ent Fri Sep 12 23:42:14 2014 (r45601)
+++ head/en_US.ISO8859-1/books/handbook/chapters.ent Sat Sep 13 02:08:33 2014 (r45602)
@@ -37,6 +37,7 @@
<!ENTITY chap.audit SYSTEM "audit/chapter.xml">
<!ENTITY chap.disks SYSTEM "disks/chapter.xml">
<!ENTITY chap.geom SYSTEM "geom/chapter.xml">
+ <!ENTITY chap.zfs SYSTEM "zfs/chapter.xml">
<!ENTITY chap.filesystems SYSTEM "filesystems/chapter.xml">
<!ENTITY chap.virtualization SYSTEM "virtualization/chapter.xml">
<!ENTITY chap.l10n SYSTEM "l10n/chapter.xml">
Modified: head/en_US.ISO8859-1/books/handbook/disks/chapter.xml
==============================================================================
--- head/en_US.ISO8859-1/books/handbook/disks/chapter.xml Fri Sep 12 23:42:14 2014 (r45601)
+++ head/en_US.ISO8859-1/books/handbook/disks/chapter.xml Sat Sep 13 02:08:33 2014 (r45602)
@@ -2160,7 +2160,7 @@ Filesystem 1K-blocks Used Avail Capacity
<para>This section describes how to configure disk quotas for the
<acronym>UFS</acronym> file system. To configure quotas on the
<acronym>ZFS</acronym> file system, refer to <xref
- linkend="zfs-quotas"/></para>
+ linkend="zfs-zfs-quota"/>.</para>
<sect2>
<title>Enabling Disk Quotas</title>
Modified: head/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml
==============================================================================
--- head/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml Fri Sep 12 23:42:14 2014 (r45601)
+++ head/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml Sat Sep 13 02:08:33 2014 (r45602)
@@ -5,7 +5,7 @@
-->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="filesystems">
<info>
- <title>File Systems Support</title>
+ <title>Other File Systems</title>
<authorgroup>
<author><personname><firstname>Tom</firstname><surname>Rhodes</surname></personname><contrib>Written
@@ -29,8 +29,8 @@
native &os; file system has been the Unix File System
<acronym>UFS</acronym> which has been modernized as
<acronym>UFS2</acronym>. Since &os; 7.0, the Z File
- System <acronym>ZFS</acronym> is also available as a native file
- system.</para>
+ System (<acronym>ZFS</acronym>) is also available as a native file
+ system. See <xref linkend="zfs"/> for more information.</para>
<para>In addition to its native file systems, &os; supports a
multitude of other file systems so that data from other
@@ -91,642 +91,6 @@
</itemizedlist>
</sect1>
- <sect1 xml:id="filesystems-zfs">
- <title>The Z File System (ZFS)</title>
-
- <para>The Z file system, originally developed by &sun;,
- is designed to use a pooled storage method in which space is only
- allocated as it is needed for data storage. It is also designed for
- maximum data integrity, supporting data snapshots, multiple
- copies, and data checksums. It uses a software data replication
- model, known as <acronym>RAID</acronym>-Z.
- <acronym>RAID</acronym>-Z provides redundancy similar to
- hardware <acronym>RAID</acronym>, but is designed to prevent
- data write corruption and to overcome some of the limitations
- of hardware <acronym>RAID</acronym>.</para>
-
- <sect2>
- <title>ZFS Tuning</title>
-
- <para>Some of the features provided by <acronym>ZFS</acronym>
- are RAM-intensive, so some tuning may be required to provide
- maximum efficiency on systems with limited RAM.</para>
-
- <sect3>
- <title>Memory</title>
-
- <para>At a bare minimum, the total system memory should be at
- least one gigabyte. The recommended amount of RAM depends
- upon the size of the pool and the ZFS features which are
- used. A general rule of thumb is 1 GB of RAM for every
- 1 TB of storage. If the deduplication feature is used, a
- general rule of thumb is 5 GB of RAM per 1 TB of storage to
- be deduplicated. While some users successfully use ZFS with
- less RAM, systems under heavy load may panic due to memory
- exhaustion. Further tuning may be required for systems with
- less than the recommended amount of RAM.</para>
- </sect3>
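As a worked example of these rules of thumb: a system hosting a
10 TB pool would call for roughly 10 GB of RAM, or roughly
50 GB if the entire 10 TB were to be deduplicated.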
-
- <sect3>
- <title>Kernel Configuration</title>
-
- <para>Due to the RAM limitations of the &i386; platform, users
- using ZFS on the &i386; architecture should add the
- following option to a custom kernel configuration file,
- rebuild the kernel, and reboot:</para>
-
- <programlisting>options KVA_PAGES=512</programlisting>
-
- <para>This option expands the kernel address space, allowing
- the <varname>vm.kvm_size</varname> tunable to be pushed
- beyond the currently imposed limit of 1 GB, or the
- limit of 2 GB for <acronym>PAE</acronym>. To find the
- most suitable value for this option, divide the desired
- address space in megabytes by four (4). In this example, it
- is <literal>512</literal> for 2 GB.</para>
- </sect3>
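Applying the formula above to other sizes: a value of 256
corresponds to a 1 GB kernel address space (1024 / 4), and 768
would correspond to 3 GB (3072 / 4).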
-
- <sect3>
- <title>Loader Tunables</title>
-
- <para>The <filename>kmem</filename> address space can
- be increased on all &os; architectures. On a test system
- with one gigabyte of physical memory, success was achieved
- with the following options added to
- <filename>/boot/loader.conf</filename>, and the system
- restarted:</para>
-
- <programlisting>vm.kmem_size="330M"
-vm.kmem_size_max="330M"
-vfs.zfs.arc_max="40M"
-vfs.zfs.vdev.cache.size="5M"</programlisting>
-
- <para>For a more detailed list of recommendations for
- ZFS-related tuning, see <uri
- xlink:href="http://wiki.freebsd.org/ZFSTuningGuide">http://wiki.freebsd.org/ZFSTuningGuide</uri>.</para>
- </sect3>
- </sect2>
-
- <sect2>
- <title>Using <acronym>ZFS</acronym></title>
-
- <para>There is a startup mechanism that allows &os; to mount
- <acronym>ZFS</acronym> pools during system initialization. To
- enable it, issue the following commands:</para>
-
- <screen>&prompt.root; <userinput>echo 'zfs_enable="YES"' >> /etc/rc.conf</userinput>
-&prompt.root; <userinput>service zfs start</userinput></screen>
-
- <para>The examples in this section assume three
- <acronym>SCSI</acronym> disks with the device names
- <filename><replaceable>da0</replaceable></filename>,
- <filename><replaceable>da1</replaceable></filename>,
- and <filename><replaceable>da2</replaceable></filename>.
- Users of <acronym>IDE</acronym> hardware should instead use
- <filename><replaceable>ad</replaceable></filename>
- device names.</para>
-
- <sect3>
- <title>Single Disk Pool</title>
-
- <para>To create a simple, non-redundant <acronym>ZFS</acronym>
- pool using a single disk device, use
- <command>zpool</command>:</para>
-
- <screen>&prompt.root; <userinput>zpool create example /dev/da0</userinput></screen>
-
- <para>To view the new pool, review the output of
- <command>df</command>:</para>
-
- <screen>&prompt.root; <userinput>df</userinput>
-Filesystem 1K-blocks Used Avail Capacity Mounted on
-/dev/ad0s1a 2026030 235230 1628718 13% /
-devfs 1 1 0 100% /dev
-/dev/ad0s1d 54098308 1032846 48737598 2% /usr
-example 17547136 0 17547136 0% /example</screen>
-
- <para>This output shows that the <literal>example</literal>
- pool has been created and <emphasis>mounted</emphasis>. It
- is now accessible as a file system. Files may be created
- on it and users can browse it, as seen in the following
- example:</para>
-
- <screen>&prompt.root; <userinput>cd /example</userinput>
-&prompt.root; <userinput>ls</userinput>
-&prompt.root; <userinput>touch testfile</userinput>
-&prompt.root; <userinput>ls -al</userinput>
-total 4
-drwxr-xr-x 2 root wheel 3 Aug 29 23:15 .
-drwxr-xr-x 21 root wheel 512 Aug 29 23:12 ..
--rw-r--r-- 1 root wheel 0 Aug 29 23:15 testfile</screen>
-
- <para>However, this pool is not taking advantage of any
- <acronym>ZFS</acronym> features. To create a dataset on
- this pool with compression enabled:</para>
-
- <screen>&prompt.root; <userinput>zfs create example/compressed</userinput>
-&prompt.root; <userinput>zfs set compression=gzip example/compressed</userinput></screen>
-
- <para>The <literal>example/compressed</literal> dataset is now
- a <acronym>ZFS</acronym> compressed file system. Try
- copying some large files to
- <filename>/example/compressed</filename>.</para>
-
- <para>Compression can be disabled with:</para>
-
- <screen>&prompt.root; <userinput>zfs set compression=off example/compressed</userinput></screen>
-
- <para>To unmount a file system, issue the following command
- and then verify by using <command>df</command>:</para>
-
- <screen>&prompt.root; <userinput>zfs umount example/compressed</userinput>
-&prompt.root; <userinput>df</userinput>
-Filesystem 1K-blocks Used Avail Capacity Mounted on
-/dev/ad0s1a 2026030 235232 1628716 13% /
-devfs 1 1 0 100% /dev
-/dev/ad0s1d 54098308 1032864 48737580 2% /usr
-example 17547008 0 17547008 0% /example</screen>
-
- <para>To re-mount the file system to make it accessible
- again, issue the following command and verify with
- <command>df</command>:</para>
-
- <screen>&prompt.root; <userinput>zfs mount example/compressed</userinput>
-&prompt.root; <userinput>df</userinput>
-Filesystem 1K-blocks Used Avail Capacity Mounted on
-/dev/ad0s1a 2026030 235234 1628714 13% /
-devfs 1 1 0 100% /dev
-/dev/ad0s1d 54098308 1032864 48737580 2% /usr
-example 17547008 0 17547008 0% /example
-example/compressed 17547008 0 17547008 0% /example/compressed</screen>
-
- <para>The pool and file system may also be observed by viewing
- the output from <command>mount</command>:</para>
-
- <screen>&prompt.root; <userinput>mount</userinput>
-/dev/ad0s1a on / (ufs, local)
-devfs on /dev (devfs, local)
-/dev/ad0s1d on /usr (ufs, local, soft-updates)
-example on /example (zfs, local)
-example/data on /example/data (zfs, local)
-example/compressed on /example/compressed (zfs, local)</screen>
-
- <para><acronym>ZFS</acronym> datasets, after creation, may be
- used like any file system. However, many other features
- are available which can be set on a per-dataset basis. In
- the following example, a new file system,
- <literal>data</literal>, is created. Important files will be
- stored here, so the file system is set to keep two copies of
- each data block:</para>
-
- <screen>&prompt.root; <userinput>zfs create example/data</userinput>
-&prompt.root; <userinput>zfs set copies=2 example/data</userinput></screen>
-
- <para>It is now possible to see the data and space utilization
- by issuing <command>df</command>:</para>
-
- <screen>&prompt.root; <userinput>df</userinput>
-Filesystem 1K-blocks Used Avail Capacity Mounted on
-/dev/ad0s1a 2026030 235234 1628714 13% /
-devfs 1 1 0 100% /dev
-/dev/ad0s1d 54098308 1032864 48737580 2% /usr
-example 17547008 0 17547008 0% /example
-example/compressed 17547008 0 17547008 0% /example/compressed
-example/data 17547008 0 17547008 0% /example/data</screen>
-
- <para>Notice that each file system on the pool has the same
- amount of available space. This is the reason for using
- <command>df</command> in these examples, to show that the
- file systems use only the amount of space they need and all
- draw from the same pool. The <acronym>ZFS</acronym> file
- system does away with concepts such as volumes and
- partitions, and allows for several file systems to occupy
- the same pool.</para>
-
- <para>To destroy the file systems and then destroy the pool as
- they are no longer needed:</para>
-
- <screen>&prompt.root; <userinput>zfs destroy example/compressed</userinput>
-&prompt.root; <userinput>zfs destroy example/data</userinput>
-&prompt.root; <userinput>zpool destroy example</userinput></screen>
-
- </sect3>
-
- <sect3>
- <title><acronym>ZFS</acronym> RAID-Z</title>
-
- <para>There is no way to prevent a disk from failing. One
- method of avoiding data loss due to a failed hard disk is to
- implement <acronym>RAID</acronym>. <acronym>ZFS</acronym>
- supports this feature in its pool design.</para>
-
- <para>To create a <acronym>RAID</acronym>-Z pool, issue the
- following command and specify the disks to add to the
- pool:</para>
-
- <screen>&prompt.root; <userinput>zpool create storage raidz da0 da1 da2</userinput></screen>
-
- <note>
- <para>&sun; recommends that the number of devices used in
- a <acronym>RAID</acronym>-Z configuration be between
- three and nine. For environments requiring a single pool
- consisting of 10 disks or more, consider breaking it up
- into smaller <acronym>RAID</acronym>-Z groups. If only
- two disks are available and redundancy is a requirement,
- consider using a <acronym>ZFS</acronym> mirror. Refer to
- &man.zpool.8; for more details.</para>
- </note>
-
- <para>This command creates the <literal>storage</literal>
- zpool. This may be verified using &man.mount.8; and
- &man.df.1;. The next command creates a new file system
- called <literal>home</literal> in the pool:</para>
-
- <screen>&prompt.root; <userinput>zfs create storage/home</userinput></screen>
-
- <para>It is now possible to enable compression and keep extra
- copies of directories and files using the following
- commands:</para>
-
- <screen>&prompt.root; <userinput>zfs set copies=2 storage/home</userinput>
-&prompt.root; <userinput>zfs set compression=gzip storage/home</userinput></screen>
-
- <para>To make this the new home directory for users, copy the
- user data to this directory, and create the appropriate
- symbolic links:</para>
-
- <screen>&prompt.root; <userinput>cp -rp /home/* /storage/home</userinput>
-&prompt.root; <userinput>rm -rf /home /usr/home</userinput>
-&prompt.root; <userinput>ln -s /storage/home /home</userinput>
-&prompt.root; <userinput>ln -s /storage/home /usr/home</userinput></screen>
-
- <para>Users should now have their data stored on the freshly
- created <filename>/storage/home</filename>. Test by
- adding a new user and logging in as that user.</para>
-
- <para>Try creating a snapshot which may be rolled back
- later:</para>
-
- <screen>&prompt.root; <userinput>zfs snapshot storage/home@08-30-08</userinput></screen>
-
- <para>Note that the snapshot option will only capture a real
- file system, not a home directory or a file. The
- <literal>@</literal> character is a delimiter between the
- file system or volume name and the snapshot name. When a user's
- home directory gets trashed, restore it with:</para>
-
- <screen>&prompt.root; <userinput>zfs rollback storage/home@08-30-08</userinput></screen>
-
- <para>To get a list of all available snapshots, run
- <command>ls</command> in the file system's
- <filename>.zfs/snapshot</filename> directory. For example,
- to see the previously taken snapshot:</para>
-
- <screen>&prompt.root; <userinput>ls /storage/home/.zfs/snapshot</userinput></screen>
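Snapshots can also be listed directly with the zfs command; a
quick check (output varies by system):

# zfs list -t snapshot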
-
- <para>It is possible to write a script to perform regular
- snapshots on user data. However, over time, snapshots
- may consume a great deal of disk space. The previous
- snapshot may be removed using the following command:</para>
-
- <screen>&prompt.root; <userinput>zfs destroy storage/home@08-30-08</userinput></screen>
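A minimal sketch of such a snapshot script, assuming the
storage/home dataset from this section and date-stamped snapshot
names; it could be scheduled from cron(8):

#!/bin/sh
# Take a snapshot of storage/home named after today's date,
# for example storage/home@2008-08-30.
zfs snapshot storage/home@$(date +%Y-%m-%d)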
-
- <para>After testing, <filename>/storage/home</filename> can be
- made the real <filename>/home</filename> using this
- command:</para>
-
- <screen>&prompt.root; <userinput>zfs set mountpoint=/home storage/home</userinput></screen>
-
- <para>Run <command>df</command> and
- <command>mount</command> to confirm that the system now
- treats the file system as the real
- <filename>/home</filename>:</para>
-
- <screen>&prompt.root; <userinput>mount</userinput>
-/dev/ad0s1a on / (ufs, local)
-devfs on /dev (devfs, local)
-/dev/ad0s1d on /usr (ufs, local, soft-updates)
-storage on /storage (zfs, local)
-storage/home on /home (zfs, local)
-&prompt.root; <userinput>df</userinput>
-Filesystem 1K-blocks Used Avail Capacity Mounted on
-/dev/ad0s1a 2026030 235240 1628708 13% /
-devfs 1 1 0 100% /dev
-/dev/ad0s1d 54098308 1032826 48737618 2% /usr
-storage 26320512 0 26320512 0% /storage
-storage/home 26320512 0 26320512 0% /home</screen>
-
- <para>This completes the <acronym>RAID</acronym>-Z
- configuration. To get status updates about the created file
- systems during the nightly &man.periodic.8; runs, issue the
- following command:</para>
-
- <screen>&prompt.root; <userinput>echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf</userinput></screen>
- </sect3>
-
- <sect3>
- <title>Recovering <acronym>RAID</acronym>-Z</title>
-
- <para>Every software <acronym>RAID</acronym> has a method of
- monitoring its <literal>state</literal>. The status of
- <acronym>RAID</acronym>-Z devices may be viewed with the
- following command:</para>
-
- <screen>&prompt.root; <userinput>zpool status -x</userinput></screen>
-
- <para>If all pools are healthy and everything is normal, the
- following message will be returned:</para>
-
- <screen>all pools are healthy</screen>
-
- <para>If there is an issue, such as a disk that has gone
- offline, the pool state will look similar to:</para>
-
- <screen> pool: storage
- state: DEGRADED
-status: One or more devices has been taken offline by the administrator.
- Sufficient replicas exist for the pool to continue functioning in a
- degraded state.
-action: Online the device using 'zpool online' or replace the device with
- 'zpool replace'.
- scrub: none requested
-config:
-
- NAME STATE READ WRITE CKSUM
- storage DEGRADED 0 0 0
- raidz1 DEGRADED 0 0 0
- da0 ONLINE 0 0 0
- da1 OFFLINE 0 0 0
- da2 ONLINE 0 0 0
-
-errors: No known data errors</screen>
-
- <para>This indicates that the device was previously taken
- offline by the administrator using the following
- command:</para>
-
- <screen>&prompt.root; <userinput>zpool offline storage da1</userinput></screen>
-
- <para>It is now possible to replace
- <filename>da1</filename> after the system has been
- powered down. When the system is back online, the following
- command may be issued to replace the disk:</para>
-
- <screen>&prompt.root; <userinput>zpool replace storage da1</userinput></screen>
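When the failed disk is being swapped for a different device,
zpool replace also accepts the new device name as a second
argument; for example, with a hypothetical replacement disk da3:

# zpool replace storage da1 da3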
-
- <para>From here, the status may be checked again, this time
- without the <option>-x</option> flag to get state
- information:</para>
-
- <screen>&prompt.root; <userinput>zpool status storage</userinput>
- pool: storage
- state: ONLINE
- scrub: resilver completed with 0 errors on Sat Aug 30 19:44:11 2008
-config:
-
- NAME STATE READ WRITE CKSUM
- storage ONLINE 0 0 0
- raidz1 ONLINE 0 0 0
- da0 ONLINE 0 0 0
- da1 ONLINE 0 0 0
- da2 ONLINE 0 0 0
-
-errors: No known data errors</screen>
-
- <para>As shown in this example, everything appears to be
- normal.</para>
- </sect3>
-
- <sect3>
- <title>Data Verification</title>
-
- <para><acronym>ZFS</acronym> uses checksums to verify the
- integrity of stored data. These are enabled automatically
- upon creation of file systems and may be disabled using the
- following command:</para>
-
- <screen>&prompt.root; <userinput>zfs set checksum=off storage/home</userinput></screen>
-
- <para>Doing so is <emphasis>not</emphasis> recommended, as
- checksums take very little storage space and are used to
- verify data integrity in a process known as
- <quote>scrubbing</quote>. To verify the
- data integrity of the <literal>storage</literal> pool, issue
- this command:</para>
-
- <screen>&prompt.root; <userinput>zpool scrub storage</userinput></screen>
-
- <para>This process may take considerable time depending on
- the amount of data stored. It is also very
- <acronym>I/O</acronym> intensive, so much so that only one
- scrub may be run at any given time. After the scrub has
- completed, the status is updated and may be viewed by
- issuing a status request:</para>
-
- <screen>&prompt.root; <userinput>zpool status storage</userinput>
- pool: storage
- state: ONLINE
- scrub: scrub completed with 0 errors on Sat Jan 26 19:57:37 2013
-config:
-
- NAME STATE READ WRITE CKSUM
- storage ONLINE 0 0 0
- raidz1 ONLINE 0 0 0
- da0 ONLINE 0 0 0
- da1 ONLINE 0 0 0
- da2 ONLINE 0 0 0
-
-errors: No known data errors</screen>
-
- <para>The completion time of the last scrub is displayed, and
- running scrubs regularly helps to ensure data integrity over a
- long period of time.</para>
-
- <para>Refer to &man.zfs.8; and &man.zpool.8; for other
- <acronym>ZFS</acronym> options.</para>
- </sect3>
-
- <sect3 xml:id="zfs-quotas">
- <title>ZFS Quotas</title>
-
- <para>ZFS supports different types of quotas: the refquota,
- the general quota, the user quota, and the group quota.
- This section explains the basics of each type and includes
- some usage instructions.</para>
-
- <para>Quotas limit the amount of space that a dataset and its
- descendants can consume, and enforce a limit on the amount
- of space used by file systems and snapshots for the
- descendants. Quotas are useful to limit the amount of space
- a particular user can use.</para>
-
- <note>
- <para>Quotas cannot be set on volumes, as the
- <literal>volsize</literal> property acts as an implicit
- quota.</para>
- </note>
-
- <para>The
- <literal>refquota=<replaceable>size</replaceable></literal>
- limits the amount of space a dataset can consume by
- enforcing a hard limit on the space used. However, this
- hard limit does not include space used by descendants, such
- as file systems or snapshots.</para>
-
- <para>To enforce a general quota of 10 GB for
- <filename>storage/home/bob</filename>, use the
- following:</para>
-
- <screen>&prompt.root; <userinput>zfs set quota=10G storage/home/bob</userinput></screen>
-
- <para>User quotas limit the amount of space that can be used
- by the specified user. The general format is
- <literal>userquota@<replaceable>user</replaceable>=<replaceable>size</replaceable></literal>,
- and the user's name must be in one of the following
- formats:</para>
-
- <itemizedlist>
- <listitem>
- <para><acronym
- role="Portable Operating System
- Interface">POSIX</acronym> compatible name such as
- <replaceable>joe</replaceable>.</para>
- </listitem>
-
- <listitem>
- <para><acronym
- role="Portable Operating System
- Interface">POSIX</acronym> numeric ID such as
- <replaceable>789</replaceable>.</para>
- </listitem>
-
- <listitem>
- <para><acronym role="Security Identifier">SID</acronym> name
- such as
- <replaceable>joe.bloggs@example.com</replaceable>.</para>
- </listitem>
-
- <listitem>
- <para><acronym role="Security Identifier">SID</acronym>
- numeric ID such as
- <replaceable>S-1-123-456-789</replaceable>.</para>
- </listitem>
- </itemizedlist>
-
- <para>For example, to enforce a quota of 50 GB for a user
- named <replaceable>joe</replaceable>, use the
- following:</para>
-
- <screen>&prompt.root; <userinput>zfs set userquota@joe=50G storage/home/bob</userinput></screen>
-
- <para>To remove the quota or make sure that one is not set,
- instead use:</para>
-
- <screen>&prompt.root; <userinput>zfs set userquota@joe=none storage/home/bob</userinput></screen>
-
- <para>User quota properties are not displayed by
- <command>zfs get all</command>.
- Non-<systemitem class="username">root</systemitem> users can
- only see their own quotas unless they have been granted the
- <literal>userquota</literal> privilege. Users with this
- privilege are able to view and set everyone's quota.</para>
-
- <para>The group quota limits the amount of space that a
- specified group can consume. The general format is
- <literal>groupquota@<replaceable>group</replaceable>=<replaceable>size</replaceable></literal>.</para>
-
- <para>To set the quota for the group
- <replaceable>firstgroup</replaceable> to 50 GB,
- use:</para>
-
- <screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=50G storage/home</userinput></screen>
-
- <para>To remove the quota for the group
- <replaceable>firstgroup</replaceable>, or to make sure that
- one is not set, instead use:</para>
-
- <screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=none storage/home</userinput></screen>
-
- <para>As with the user quota property,
- non-<systemitem class="username">root</systemitem> users can
- only see the quotas associated with the groups that they
- belong to. However, <systemitem
- class="username">root</systemitem> or a user with the
- <literal>groupquota</literal> privilege can view and set all
- quotas for all groups.</para>
-
- <para>To display the amount of space consumed by each user on
- the specified file system or snapshot, along with any
- specified quotas, use <command>zfs userspace</command>.
- For group information, use <command>zfs
- groupspace</command>. For more information about
- supported options or how to display only specific options,
- refer to &man.zfs.8;.</para>
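For example, to list per-user space consumption and quotas on
the dataset used above (output varies by system):

# zfs userspace storage/home/bob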
-
- <para>Users with sufficient privileges and <systemitem
- class="username">root</systemitem> can list the quota for
- <filename>storage/home/bob</filename> using:</para>
-
- <screen>&prompt.root; <userinput>zfs get quota storage/home/bob</userinput></screen>
- </sect3>
-
- <sect3>
- <title>ZFS Reservations</title>
-
- <para>ZFS supports two types of space reservations. This
- section explains the basics of each and includes some usage
- instructions.</para>
-
- <para>The <literal>reservation</literal> property makes it
- possible to reserve a minimum amount of space guaranteed
- for a dataset and its descendants. This means that if a
- 10 GB reservation is set on
- <filename>storage/home/bob</filename> and disk
- space gets low, at least 10 GB of space remains reserved
- for this dataset. The <literal>refreservation</literal>
- property sets or indicates the minimum amount of space
- guaranteed to a dataset excluding descendants, such as
- snapshots. As an example, if a snapshot was taken of
- <filename>storage/home/bob</filename>, enough disk space
- would have to exist outside of the
- <literal>refreservation</literal> amount for the operation
- to succeed, because descendants of the main data set are
- not counted against the <literal>refreservation</literal>
- amount and so do not encroach on the reserved space.</para>
-
- <para>Reservations of any sort are useful in many situations,
- such as planning and testing the suitability of disk space
- allocation in a new system, or ensuring that enough space is
- available on file systems for system recovery procedures and
- files.</para>
-
- <para>The general format of the <literal>reservation</literal>
- property is
- <literal>reservation=<replaceable>size</replaceable></literal>,
- so to set a reservation of 10 GB on
- <filename>storage/home/bob</filename>, use:</para>
-
- <screen>&prompt.root; <userinput>zfs set reservation=10G storage/home/bob</userinput></screen>
-
- <para>To make sure that no reservation is set, or to remove a
- reservation, use:</para>
-
- <screen>&prompt.root; <userinput>zfs set reservation=none storage/home/bob</userinput></screen>
-
- <para>The same principle can be applied to the
- <literal>refreservation</literal> property for setting a
- refreservation, with the general format
- <literal>refreservation=<replaceable>size</replaceable></literal>.</para>
-
- <para>To check if any reservations or refreservations exist on
- <filename>storage/home/bob</filename>, execute one of the
- following commands:</para>
-
- <screen>&prompt.root; <userinput>zfs get reservation storage/home/bob</userinput>
-&prompt.root; <userinput>zfs get refreservation storage/home/bob</userinput></screen>
- </sect3>
- </sect2>
- </sect1>
-
<sect1 xml:id="filesystems-linux">
<title>&linux; File Systems</title>
Added: head/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- /dev/null 00:00:00 1970 (empty, because file is newly added)
+++ head/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Sat Sep 13 02:08:33 2014 (r45602)
@@ -0,0 +1,4332 @@
+<?xml version="1.0" encoding="iso-8859-1"?>
+<!--
+ The FreeBSD Documentation Project
+ $FreeBSD$
+-->
+
+<chapter xmlns="http://docbook.org/ns/docbook"
+ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
+ xml:id="zfs">
+
+ <info>
+ <title>The Z File System (<acronym>ZFS</acronym>)</title>
+
+ <authorgroup>
+ <author>
+ <personname>
+ <firstname>Tom</firstname>
+ <surname>Rhodes</surname>
+ </personname>
+ <contrib>Written by </contrib>
+ </author>
+ <author>
+ <personname>
+ <firstname>Allan</firstname>
+ <surname>Jude</surname>
+ </personname>
+ <contrib>Written by </contrib>
+ </author>
+ <author>
+ <personname>
+ <firstname>Benedict</firstname>
+ <surname>Reuschling</surname>
+ </personname>
+ <contrib>Written by </contrib>
+ </author>
+ <author>
+ <personname>
+ <firstname>Warren</firstname>
+ <surname>Block</surname>
+ </personname>
+ <contrib>Written by </contrib>
+ </author>
+ </authorgroup>
+ </info>
+
+ <para>The <emphasis>Z File System</emphasis>, or
+ <acronym>ZFS</acronym>, is an advanced file system designed to
+ overcome many of the major problems found in previous
+ designs.</para>
+
+ <para><acronym>ZFS</acronym> was originally developed at
+ &sun;; ongoing open source development has moved to the <link
+ xlink:href="http://open-zfs.org">OpenZFS Project</link>.</para>
+
+ <para><acronym>ZFS</acronym> has three major design goals:</para>
+
+ <itemizedlist>
+ <listitem>
+ <para>Data integrity: All data includes a
+ <link linkend="zfs-term-checksum">checksum</link> of the data.
+ When data is written, the checksum is calculated and written
+ along with it. When that data is later read back, the
+ checksum is calculated again. If the checksums do not match,
+ a data error has been detected. <acronym>ZFS</acronym> will
+ attempt to automatically correct errors when data redundancy
+ is available.</para>
+ </listitem>
+
+ <listitem>
+ <para>Pooled storage: physical storage devices are added to a
+ pool, and storage space is allocated from that shared pool.
+ Space is available to all file systems, and can be increased
+ by adding new storage devices to the pool.</para>
+ </listitem>
+
+ <listitem>
+ <para>Performance: multiple caching mechanisms provide increased
+ performance. <link linkend="zfs-term-arc">ARC</link> is an
+ advanced memory-based read cache. A second level of
+ disk-based read cache can be added with
+ <link linkend="zfs-term-l2arc">L2ARC</link>, and disk-based
+ synchronous write cache is available with
+ <link linkend="zfs-term-zil">ZIL</link>.</para>
+ </listitem>
+ </itemizedlist>
+
+ <para>A complete list of features and terminology is shown in
+ <xref linkend="zfs-term"/>.</para>
+
+ <sect1 xml:id="zfs-differences">
+ <title>What Makes <acronym>ZFS</acronym> Different</title>
+
+ <para><acronym>ZFS</acronym> is significantly different from any
+ previous file system because it is more than just a file system.
+ Combining the traditionally separate roles of volume manager and
+ file system provides <acronym>ZFS</acronym> with unique
+ advantages. The file system is now aware of the underlying
+ structure of the disks. Traditional file systems could only be
+ created on a single disk at a time. If there were two disks
+ then two separate file systems would have to be created. In a
+ traditional hardware <acronym>RAID</acronym> configuration, this
+ problem was avoided by presenting the operating system with a
+ single logical disk made up of the space provided by a number of
+ physical disks, on top of which the operating system placed a
+ file system. Even in the case of software
+ <acronym>RAID</acronym> solutions like those provided by
+ <acronym>GEOM</acronym>, the <acronym>UFS</acronym> file system
+ living on top of the <acronym>RAID</acronym> transform believed
+ that it was dealing with a single device.
+ <acronym>ZFS</acronym>'s combination of the volume manager and
+ the file system solves this and allows the creation of many file
+ systems all sharing a pool of available storage. One of the
+ biggest advantages to <acronym>ZFS</acronym>'s awareness of the
+ physical layout of the disks is that existing file systems can
+ be grown automatically when additional disks are added to the
+ pool. This new space is then made available to all of the file
+ systems. <acronym>ZFS</acronym> also has a number of different
+ properties that can be applied to each file system, giving many
+ advantages to creating a number of different file systems and
+ datasets rather than a single monolithic file system.</para>
+ </sect1>
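As a sketch of that automatic growth, adding another disk as a
new vdev to a hypothetical pool named mypool immediately makes
the added space available to every file system in the pool (a
real pool should match the redundancy of its existing vdevs):

# zpool add mypool da3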
+
+ <sect1 xml:id="zfs-quickstart">
+ <title>Quick Start Guide</title>
+
+ <para>There is a startup mechanism that allows &os; to mount
+ <acronym>ZFS</acronym> pools during system initialization. To
+ enable it, add this line to
+ <filename>/etc/rc.conf</filename>:</para>
+
+ <programlisting>zfs_enable="YES"</programlisting>
+
+ <para>Then start the service:</para>
+
+ <screen>&prompt.root; <userinput>service zfs start</userinput></screen>
+
+ <para>The examples in this section assume three
+ <acronym>SCSI</acronym> disks with the device names
+ <filename><replaceable>da0</replaceable></filename>,
+ <filename><replaceable>da1</replaceable></filename>, and
+ <filename><replaceable>da2</replaceable></filename>. Users
+ of <acronym>SATA</acronym> hardware should instead use
+ <filename><replaceable>ada</replaceable></filename> device
+ names.</para>
+
+ <sect2>
+ <title>Single Disk Pool</title>
+
+ <para>To create a simple, non-redundant pool using a single
+ disk device:</para>
+
+ <screen>&prompt.root; <userinput>zpool create <replaceable>example</replaceable> <replaceable>/dev/da0</replaceable></userinput></screen>
+
+ <para>To view the new pool, review the output of
+ <command>df</command>:</para>
+
+ <screen>&prompt.root; <userinput>df</userinput>
+Filesystem 1K-blocks Used Avail Capacity Mounted on
+/dev/ad0s1a 2026030 235230 1628718 13% /
+devfs 1 1 0 100% /dev
+/dev/ad0s1d 54098308 1032846 48737598 2% /usr
+example 17547136 0 17547136 0% /example</screen>
+
+ <para>This output shows that the <literal>example</literal> pool
+ has been created and mounted. It is now accessible as a file
+ system. Files can be created on it and users can browse
+ it:</para>
+
+ <screen>&prompt.root; <userinput>cd /example</userinput>
+&prompt.root; <userinput>ls</userinput>
+&prompt.root; <userinput>touch testfile</userinput>
+&prompt.root; <userinput>ls -al</userinput>
+total 4
+drwxr-xr-x 2 root wheel 3 Aug 29 23:15 .
+drwxr-xr-x 21 root wheel 512 Aug 29 23:12 ..
+-rw-r--r-- 1 root wheel 0 Aug 29 23:15 testfile</screen>
+
+ <para>However, this pool is not taking advantage of any
+ <acronym>ZFS</acronym> features. To create a dataset on this
+ pool with compression enabled:</para>
+
+ <screen>&prompt.root; <userinput>zfs create example/compressed</userinput>
+&prompt.root; <userinput>zfs set compression=gzip example/compressed</userinput></screen>
+
+ <para>The <literal>example/compressed</literal> dataset is now a
+ <acronym>ZFS</acronym> compressed file system. Try copying
+ some large files to
+ <filename>/example/compressed</filename>.</para>
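The space savings can then be checked through the compressratio
property; for example:

# zfs get compressratio example/compressed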
+
+ <para>Compression can be disabled with:</para>
+
+ <screen>&prompt.root; <userinput>zfs set compression=off example/compressed</userinput></screen>
+
+ <para>To unmount a file system, use
+ <command>zfs umount</command> and then verify with
+ <command>df</command>:</para>
+
+ <screen>&prompt.root; <userinput>zfs umount example/compressed</userinput>
+&prompt.root; <userinput>df</userinput>
+Filesystem 1K-blocks Used Avail Capacity Mounted on
+/dev/ad0s1a 2026030 235232 1628716 13% /
+devfs 1 1 0 100% /dev
+/dev/ad0s1d 54098308 1032864 48737580 2% /usr
+example 17547008 0 17547008 0% /example</screen>
+
+ <para>To re-mount the file system to make it accessible again,
+ use <command>zfs mount</command> and verify with
+ <command>df</command>:</para>
+
+ <screen>&prompt.root; <userinput>zfs mount example/compressed</userinput>
+&prompt.root; <userinput>df</userinput>
+Filesystem 1K-blocks Used Avail Capacity Mounted on
+/dev/ad0s1a 2026030 235234 1628714 13% /
+devfs 1 1 0 100% /dev
+/dev/ad0s1d 54098308 1032864 48737580 2% /usr
+example 17547008 0 17547008 0% /example
+example/compressed 17547008 0 17547008 0% /example/compressed</screen>
+
+ <para>The pool and file system may also be observed by viewing
+ the output from <command>mount</command>:</para>
+
+ <screen>&prompt.root; <userinput>mount</userinput>
+/dev/ad0s1a on / (ufs, local)
+devfs on /dev (devfs, local)
+/dev/ad0s1d on /usr (ufs, local, soft-updates)
+example on /example (zfs, local)
+example/data on /example/data (zfs, local)
+example/compressed on /example/compressed (zfs, local)</screen>
+
+ <para>After creation, <acronym>ZFS</acronym> datasets can be
+ used like any file system. However, many other features are
+ available which can be set on a per-dataset basis. In the
+ example below, a new file system called
+ <literal>data</literal> is created. Important files will be
+ stored here, so it is configured to keep two copies of each
+ data block:</para>
+
+ <screen>&prompt.root; <userinput>zfs create example/data</userinput>
+&prompt.root; <userinput>zfs set copies=2 example/data</userinput></screen>
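The setting can be confirmed with zfs get; for example:

# zfs get copies example/data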
+
+ <para>It is now possible to see the data and space utilization
+ by issuing <command>df</command>:</para>
+
+ <screen>&prompt.root; <userinput>df</userinput>
+Filesystem 1K-blocks Used Avail Capacity Mounted on
+/dev/ad0s1a 2026030 235234 1628714 13% /
+devfs 1 1 0 100% /dev
+/dev/ad0s1d 54098308 1032864 48737580 2% /usr
+example 17547008 0 17547008 0% /example
+example/compressed 17547008 0 17547008 0% /example/compressed
+example/data 17547008 0 17547008 0% /example/data</screen>
+
+ <para>Notice that each file system on the pool has the same
+ amount of available space. This is the reason for using
+ <command>df</command> in these examples, to show that the file
+ systems use only the amount of space they need and all draw
+ from the same pool. <acronym>ZFS</acronym> eliminates
+ concepts such as volumes and partitions, and allows multiple
+ file systems to occupy the same pool.</para>
+
+ <para>To destroy the file systems and then destroy the pool as
+ it is no longer needed:</para>
+
+ <screen>&prompt.root; <userinput>zfs destroy example/compressed</userinput>
+&prompt.root; <userinput>zfs destroy example/data</userinput>
+&prompt.root; <userinput>zpool destroy example</userinput></screen>
+ </sect2>
+
+ <sect2>
+ <title>RAID-Z</title>
+
+ <para>Disks fail. One method of avoiding data loss from disk
+ failure is to implement <acronym>RAID</acronym>.
+ <acronym>ZFS</acronym> supports this feature in its pool
+ design. <acronym>RAID-Z</acronym> pools require three or more
+ disks but provide more usable space than mirrored
+ pools.</para>
+
+ <para>This example creates a <acronym>RAID-Z</acronym> pool,
+ specifying the disks to add to the pool:</para>
*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***