svn commit: r43241 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs
Warren Block
wblock at FreeBSD.org
Mon Nov 25 04:36:46 UTC 2013
Author: wblock
Date: Mon Nov 25 04:36:45 2013
New Revision: 43241
URL: http://svnweb.freebsd.org/changeset/doc/43241
Log:
More whitespace fixes, translators please ignore.
Modified:
projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Mon Nov 25 03:53:50 2013 (r43240)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Mon Nov 25 04:36:45 2013 (r43241)
@@ -303,9 +303,8 @@ example/data 17547008 0 175
<screen>&prompt.root; <userinput>zfs create storage/home</userinput></screen>
- <para>Now compression and keeping extra
- copies of directories and files can be enabled with these
- commands:</para>
+ <para>Now compression and keeping extra copies of directories
+ and files can be enabled with these commands:</para>
<screen>&prompt.root; <userinput>zfs set copies=2 storage/home</userinput>
&prompt.root; <userinput>zfs set compression=gzip storage/home</userinput></screen>
@@ -394,15 +393,15 @@ storage/home 26320512 0 26320512
<screen>&prompt.root; <userinput>zpool status -x</userinput></screen>
- <para>If all pools are <link
- linkend="zfs-term-online">Online</link> and everything is
- normal, the message indicates that:</para>
+ <para>If all pools are
+ <link linkend="zfs-term-online">Online</link> and everything
+ is normal, the message indicates that:</para>
<screen>all pools are healthy</screen>
- <para>If there is an issue, perhaps a disk is in the <link
- linkend="zfs-term-offline">Offline</link> state, the pool
- state will look similar to:</para>
+ <para>If there is an issue, perhaps a disk is in the
+ <link linkend="zfs-term-offline">Offline</link> state, the
+ pool state will look similar to:</para>
<screen> pool: storage
state: DEGRADED
@@ -424,8 +423,7 @@ config:
errors: No known data errors</screen>
<para>This indicates that the device was previously taken
- offline by the administrator with this
- command:</para>
+ offline by the administrator with this command:</para>
<screen>&prompt.root; <userinput>zpool offline storage da1</userinput></screen>
@@ -436,8 +434,8 @@ errors: No known data errors</screen>
<screen>&prompt.root; <userinput>zpool replace storage da1</userinput></screen>
<para>From here, the status may be checked again, this time
- without <option>-x</option> so that all pools
- are shown:</para>
+ without <option>-x</option> so that all pools are
+ shown:</para>
<screen>&prompt.root; <userinput>zpool status storage</userinput>
pool: storage
@@ -518,25 +516,25 @@ errors: No known data errors</screen>
<para>The administration of ZFS is divided between two main
utilities. The <command>zpool</command> utility which controls
the operation of the pool and deals with adding, removing,
- replacing and managing disks, and the <link
- linkend="zfs-zfs"><command>zfs</command></link> utility, which
- deals with creating, destroying and managing datasets (both
- <link linkend="zfs-term-filesystem">filesystems</link> and <link
- linkend="zfs-term-volume">volumes</link>).</para>
+ replacing and managing disks, and the
+ <link linkend="zfs-zfs"><command>zfs</command></link> utility,
+ which deals with creating, destroying and managing datasets
+ (both <link linkend="zfs-term-filesystem">filesystems</link> and
+ <link linkend="zfs-term-volume">volumes</link>).</para>
<sect2 xml:id="zfs-zpool-create">
<title>Creating &amp; Destroying Storage Pools</title>
<para>Creating a ZFS Storage Pool (<acronym>zpool</acronym>)
involves making a number of decisions that are relatively
- permanent because the structure of the pool cannot be
- changed after the pool has been created. The most important
- decision is what types of vdevs to group the physical disks
- into. See the list of <link
- linkend="zfs-term-vdev">vdev types</link> for details about
- the possible options. After the pool has been created, most
- vdev types do not allow additional disks to be added to the
- vdev. The exceptions are mirrors, which allow additional
+ permanent because the structure of the pool cannot be changed
+ after the pool has been created. The most important decision
+ is what types of vdevs to group the physical disks into. See
+ the list of
+ <link linkend="zfs-term-vdev">vdev types</link> for details
+ about the possible options. After the pool has been created,
+ most vdev types do not allow additional disks to be added to
+ the vdev. The exceptions are mirrors, which allow additional
disks to be added to the vdev, and stripes, which can be
upgraded to mirrors by attaching an additional disk to the
vdev. Although additional vdevs can be added to a pool, the
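As a rough sketch of that first decision (device names here are hypothetical placeholders):

    # two-disk mirror vdev; disks can be added to a mirror later
    zpool create storage mirror ada0 ada1
    # or a four-disk RAID-Z2 vdev, to which no further disks can be added
    zpool create storage raidz2 ada0 ada1 ada2 ada3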
@@ -565,21 +563,20 @@ errors: No known data errors</screen>
linkend="zfs-term-vdev">vdev types</link> allow disks to be
added to the vdev after creation.</para>
- <para>When adding disks to the existing vdev is not
- an option, as in the case of RAID-Z, the other option is
- to add a vdev to the pool. It is possible, but
- discouraged, to mix vdev types. ZFS stripes data across each
- of the vdevs. For example, if there are two mirror vdevs,
- then this is effectively a RAID 10, striping the writes across
- the two sets of mirrors. Because of the way that space is
- allocated in ZFS to attempt to have each vdev reach
- 100% full at the same time, there is a performance penalty if
- the vdevs have different amounts of free space.</para>
+ <para>When adding disks to the existing vdev is not an option,
+ as in the case of RAID-Z, the other option is to add a vdev to
+ the pool. It is possible, but discouraged, to mix vdev types.
+ ZFS stripes data across each of the vdevs. For example, if
+ there are two mirror vdevs, then this is effectively a RAID
+ 10, striping the writes across the two sets of mirrors.
+ Because of the way that space is allocated in ZFS to attempt
+ to have each vdev reach 100% full at the same time, there is a
+ performance penalty if the vdevs have different amounts of
+ free space.</para>
<para>Currently, vdevs cannot be removed from a zpool, and disks
can only be removed from a mirror if there is enough remaining
redundancy.</para>
-
</sect2>
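A minimal sketch of adding a second mirror vdev, giving the RAID 10 style layout described above (hypothetical device names):

    # stripe a new two-disk mirror alongside the existing one
    zpool add storage mirror ada2 ada3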
<sect2 xml:id="zfs-zpool-replace">
@@ -601,23 +598,23 @@ errors: No known data errors</screen>
<title>Dealing with Failed Devices</title>
<para>When a disk in a ZFS pool fails, the vdev that the disk
- belongs to will enter the <link
- linkend="zfs-term-degraded">Degraded</link> state. In this
- state, all of the data stored on the vdev is still available,
- but performance may be impacted because missing data will need
- to be calculated from the available redundancy. To restore
- the vdev to a fully functional state the failed physical
- device will need to be replace replaced, and ZFS must be
- instructed to begin the <link
- linkend="zfs-term-resilver">resilver</link> operation, where
- data that was on the failed device will be recalculated
+ belongs to will enter the
+ <link linkend="zfs-term-degraded">Degraded</link> state. In
+ this state, all of the data stored on the vdev is still
+ available, but performance may be impacted because missing
+ data will need to be calculated from the available redundancy.
+ To restore the vdev to a fully functional state the failed
+ physical device will need to be replaced, and ZFS must
+ be instructed to begin the
+ <link linkend="zfs-term-resilver">resilver</link> operation,
+ where data that was on the failed device will be recalculated
from the available redundancy and written to the replacement
device. Once this process has completed the vdev will return
to <link linkend="zfs-term-online">Online</link> status. If
the vdev does not have any redundancy, or if multiple devices
have failed and there is insufficient redundancy to
- compensate, the pool will enter the <link
- linkend="zfs-term-faulted">Faulted</link> state. If a
+ compensate, the pool will enter the
+ <link linkend="zfs-term-faulted">Faulted</link> state. If a
sufficient number of devices cannot be reconnected to the pool
then the pool will be inoperative, and data will need to be
restored from backups.</para>
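A sketch of that recovery sequence, assuming a failed disk da1 and a replacement disk da2 (both names hypothetical):

    # swap in the new device and start the resilver
    zpool replace storage da1 da2
    # watch resilver progress and the vdev state
    zpool status storage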
@@ -629,14 +626,14 @@ errors: No known data errors</screen>
<para>The usable size of a redundant ZFS pool is limited by the
size of the smallest device in the vdev. If each device in
the vdev is replaced sequentially, after the smallest device
- has completed the <link
- linkend="zfs-zpool-replace">replace</link> or <link
- linkend="zfs-term-resilver">resilver</link> operation, the
- pool can grow based on the size of the new smallest device.
- This expansion can be triggered by using <command>zpool
- online</command> with the <option>-e</option> parameter on
- each device. After the expansion of each device, the
- additional space will become available in the pool.</para>
+ has completed the
+ <link linkend="zfs-zpool-replace">replace</link> or
+ <link linkend="zfs-term-resilver">resilver</link> operation,
+ the pool can grow based on the size of the new smallest
+ device. This expansion can be triggered by using
+ <command>zpool online</command> with the <option>-e</option>
+ parameter on each device. After the expansion of each device,
+ the additional space will become available in the pool.</para>
</sect2>
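A sketch of the expansion step, assuming all devices in the vdev have already been replaced with larger ones (device names hypothetical):

    # let each replaced device use its full capacity
    zpool online -e storage ada0
    zpool online -e storage ada1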
<sect2 xml:id="zfs-zpool-import">
@@ -759,26 +756,26 @@ History for 'tank':
on the other system can clearly be distinguished by the
hostname that is recorded for each command.</para>
- <para>Both options to <command>zpool history</command>
- can be combined to give the most detailed
- information possible for any given pool. The pool history can
- be a valuable information source when tracking down what
- actions were performed or when more
- detailed output is needed for debugging a ZFS pool.</para>
+ <para>Both options to <command>zpool history</command> can be
+ combined to give the most detailed information possible for
+ any given pool. The pool history can be a valuable
+ information source when tracking down what actions were
+ performed or when more detailed output is needed for debugging
+ a ZFS pool.</para>
</sect2>
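For example, assuming the -i (internal events) and -l (long format) flags are the two options in question (pool name hypothetical):

    # long-format history including internally logged events
    zpool history -il tank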
<sect2 xml:id="zfs-zpool-iostat">
<title>Performance Monitoring</title>
<para>ZFS has a built-in monitoring system that can display
- statistics about I/O happening on the pool in real-time.
- It shows the amount of free and used space on the pool, how
- many read and write operations are being performed per second,
- and how much I/O bandwidth is currently being utilized for
- read and write operations. By default, all pools in the
- system will be monitored and displayed. A pool name can be
- provided as part of the command to monitor just that specific
- pool. A basic example:</para>
+ statistics about I/O happening on the pool in real-time. It
+ shows the amount of free and used space on the pool, how many
+ read and write operations are being performed per second, and
+ how much I/O bandwidth is currently being utilized for read
+ and write operations. By default, all pools in the system
+ will be monitored and displayed. A pool name can be provided
+ as part of the command to monitor just that specific pool. A
+ basic example:</para>
<screen>&prompt.root; <userinput>zpool iostat</userinput>
capacity operations bandwidth
@@ -790,11 +787,13 @@ data 288G 1.53T 2 11
number can be specified as the last parameter, indicating
the frequency in seconds to wait between updates. ZFS will
print the next statistic line after each interval. Press
- <keycombo
- action="simul"><keycap>Ctrl</keycap><keycap>C</keycap></keycombo>
- to stop this continuous monitoring. Alternatively, give a
- second number on the command line after the interval to
- specify the total number of statistics to display.</para>
+ <keycombo action="simul">
+ <keycap>Ctrl</keycap>
+ <keycap>C</keycap>
+ </keycombo> to stop this continuous monitoring.
+ Alternatively, give a second number on the command line after
+ the interval to specify the total number of statistics to
+ display.</para>
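For example, using the pool name from the sample output above (interval and count chosen arbitrarily):

    # one statistics line every 5 seconds, 10 lines in total
    zpool iostat data 5 10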
<para>Even more detailed pool I/O statistics can be displayed
with <option>-v</option>. In this case each storage device in
@@ -850,22 +849,22 @@ data 288G 1.53T
partitioned and assigned to a file system, there was no way to
add an additional file system without adding a new disk.
<acronym>ZFS</acronym> also allows you to set a number of
- properties on each <link
- linkend="zfs-term-dataset">dataset</link>. These properties
- include features like compression, deduplication, caching and
- quoteas, as well as other useful properties like readonly,
- case sensitivity, network file sharing and mount point. Each
- separate dataset can be administered, <link
- linkend="zfs-zfs-allow">delegated</link>, <link
- linkend="zfs-zfs-send">replicated</link>, <link
- linkend="zfs-zfs-snapshot">snapshoted</link>, <link
- linkend="zfs-zfs-jail">jailed</link>, and destroyed as a unit.
- This offers many advantages to creating a separate dataset for
- each different type or set of files. The only drawback to
- having an extremely large number of datasets, is that some
- commands like <command>zfs list</command> will be slower,
- and the mounting of an extremely large number of datasets
- (100s or 1000s) can make the &os; boot process take
+ properties on each
+ <link linkend="zfs-term-dataset">dataset</link>. These
+ properties include features like compression, deduplication,
+ caching and quotas, as well as other useful properties like
+ readonly, case sensitivity, network file sharing and mount
+ point. Each separate dataset can be administered,
+ <link linkend="zfs-zfs-allow">delegated</link>,
+ <link linkend="zfs-zfs-send">replicated</link>,
+ <link linkend="zfs-zfs-snapshot">snapshoted</link>,
+ <link linkend="zfs-zfs-jail">jailed</link>, and destroyed as a
+ unit. This offers many advantages to creating a separate
+ dataset for each different type or set of files. The only
+ drawback to having an extremely large number of datasets is
+ that some commands like <command>zfs list</command> will be
+ slower, and the mounting of an extremely large number of
+ datasets (100s or 1000s) can make the &os; boot process take
longer.</para>
<para>Destroying a dataset is much quicker than deleting all
@@ -878,8 +877,8 @@ data 288G 1.53T
property, accessible with <command>zpool get freeing
<replaceable>poolname</replaceable></command> indicates how
many datasets are having their blocks freed in the background.
- If there are child datasets, such as <link
- linkend="zfs-term-snapshot">snapshots</link> or other
+ If there are child datasets, such as
+ <link linkend="zfs-term-snapshot">snapshots</link> or other
datasets, then the parent cannot be destroyed. To destroy a
dataset and all of its children, use the <option>-r</option>
parameter to recursively destroy the dataset and all of its
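A short sketch of the two commands mentioned above (dataset name hypothetical):

    # recursively destroy a dataset and all of its children
    zfs destroy -r storage/tmp
    # check how much space is still being freed in the background
    zpool get freeing storage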
@@ -926,16 +925,15 @@ Filesystem Size Used Avail Cap
regular filesystem dataset. The operation is nearly
instantaneous, but it may take several minutes for the free
space to be reclaimed in the background.</para>
-
</sect2>
<sect2 xml:id="zfs-zfs-rename">
<title>Renaming a Dataset</title>
- <para>The name of a dataset can be changed using <command>zfs
- rename</command>. The rename command can also be used to
- change the parent of a dataset. Renaming a dataset to be
- under a different parent dataset will change the value of
+ <para>The name of a dataset can be changed using
+ <command>zfs rename</command>. The rename command can also be
+ used to change the parent of a dataset. Renaming a dataset to
+ be under a different parent dataset will change the value of
those properties that are inherited by the child dataset.
When a dataset is renamed, it is unmounted and then remounted
in the new location (inherited from the parent dataset). This
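A sketch of moving a dataset under a new parent (names hypothetical; the target parent dataset must already exist):

    # inherited properties and the mount point now come from storage/archive
    zfs rename storage/home/joe storage/archive/joe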
@@ -1004,12 +1002,12 @@ tank custom:costcenter -
<para>By default, snapshots are mounted in a hidden directory
under the parent dataset: <filename
- class="directory">.zfs/snapshots/<replaceable>snapshotname</replaceable></filename>.
+ class="directory">.zfs/snapshots/<replaceable>snapshotname</replaceable></filename>.
Individual files can easily be restored to a previous state by
copying them from the snapshot back to the parent dataset. It
is also possible to revert the entire dataset back to the
- point-in-time of the snapshot using <command>zfs
- rollback</command>.</para>
+ point-in-time of the snapshot using
+ <command>zfs rollback</command>.</para>
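For example (snapshot name arbitrary):

    # take a snapshot, then later revert the whole dataset to that point in time
    zfs snapshot storage/home@before-cleanup
    zfs rollback storage/home@before-cleanup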
<para>Snapshots consume space based on how much the parent file
system has changed since the time of the snapshot. The
@@ -1018,7 +1016,7 @@ tank custom:costcenter -
<para>To destroy a snapshot and recover the space consumed by
the overwritten or deleted files, run <command>zfs destroy
- <replaceable>dataset</replaceable>@<replaceable>snapshot</replaceable></command>.
+ <replaceable>dataset</replaceable>@<replaceable>snapshot</replaceable></command>.
The <option>-r</option> parameter will recursively remove all
snapshots with the same name under the parent dataset. Adding
the <option>-n -v</option> parameters to the destroy command
@@ -1035,12 +1033,12 @@ tank custom:costcenter -
only, is mounted, and can have its own properties. Once a
clone has been created, the snapshot it was created from
cannot be destroyed. The child/parent relationship between
- the clone and the snapshot can be reversed using <command>zfs
- promote</command>. After a clone has been promoted, the
- snapshot becomes a child of the clone, rather than of the
- original parent dataset. This will change how the space is
- accounted, but not actually change the amount of space
- consumed.</para>
+ the clone and the snapshot can be reversed using
+ <command>zfs promote</command>. After a clone has been
+ promoted, the snapshot becomes a child of the clone, rather
+ than of the original parent dataset. This will change how the
+ space is accounted, but not actually change the amount of
+ space consumed.</para>
</sect2>
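A minimal sketch of that promotion (snapshot and clone names hypothetical):

    # create a writable clone from a snapshot, then reverse the parent/child relationship
    zfs clone storage/home@before-cleanup storage/home-work
    zfs promote storage/home-work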
<sect2 xml:id="zfs-zfs-send">
@@ -1052,17 +1050,17 @@ tank custom:costcenter -
<sect2 xml:id="zfs-zfs-quota">
<title>Dataset, User and Group Quotas</title>
- <para><link linkend="zfs-term-quota">Dataset
- quotas</link> can be used to restrict the amount of space
- that can be consumed by a particular dataset. <link
- linkend="zfs-term-refquota">Reference Quotas</link> work in
- very much the same way, except they only count the space used
- by the dataset itself, excluding snapshots and child
- datasets. Similarly <link
- linkend="zfs-term-userquota">user</link> and <link
- linkend="zfs-term-groupquota">group</link> quotas can be used
- to prevent users or groups from consuming all of the available
- space in the pool or dataset.</para>
+ <para><link linkend="zfs-term-quota">Dataset quotas</link> can
+ be used to restrict the amount of space that can be consumed
+ by a particular dataset.
+ <link linkend="zfs-term-refquota">Reference Quotas</link> work
+ in very much the same way, except they only count the space
+ used by the dataset itself, excluding snapshots and child
+ datasets. Similarly
+ <link linkend="zfs-term-userquota">user</link> and
+ <link linkend="zfs-term-groupquota">group</link> quotas can be
+ used to prevent users or groups from consuming all of the
+ available space in the pool or dataset.</para>
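A sketch of the reference, user, and group quota variants (user and group names and sizes are only examples):

    # only space referenced by the dataset itself counts, not snapshots or children
    zfs set refquota=10G storage/home/bob
    # per-user and per-group limits within a dataset
    zfs set userquota@joe=1G storage/home
    zfs set groupquota@staff=20G storage/home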
<para>To enforce a dataset quota of 10 GB for
<filename>storage/home/bob</filename>, use the
@@ -1167,11 +1165,10 @@ tank custom:costcenter -
<para><link linkend="zfs-term-reservation">Reservations</link>
guarantee a minimum amount of space will always be available
- to a dataset. The reserved space will not
- be available to any other dataset. This feature can be
- especially useful to ensure that users cannot comsume all of
- the free space, leaving none for an important dataset or log
- files.</para>
+ to a dataset. The reserved space will not be available to any
+ other dataset. This feature can be especially useful to
+ ensure that users cannot consume all of the free space,
+ leaving none for an important dataset or log files.</para>
<para>The general format of the <literal>reservation</literal>
property is
@@ -1189,7 +1186,7 @@ tank custom:costcenter -
<para>The same principle can be applied to the
<literal>refreservation</literal> property for setting a
<link linkend="zfs-term-refreservation">Reference
- Reservation</link>, with the general format
+ Reservation</link>, with the general format
<literal>refreservation=<replaceable>size</replaceable></literal>.</para>
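For example, following the general formats above (sizes arbitrary):

    zfs set reservation=10G storage/home/bob
    # refreservation excludes space used by snapshots and descendant datasets
    zfs set refreservation=10G storage/home/bob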
<para>To check if any reservations or refreservations exist on
@@ -1209,13 +1206,13 @@ tank custom:costcenter -
<sect2 xml:id="zfs-zfs-deduplication">
<title>Deduplication</title>
- <para>When enabled, <link
- linkend="zfs-term-deduplication">Deduplication</link> uses
- the checksum of each block to detect duplicate blocks. When a
- new block is about to be written and it is determined to be a
- duplicate of an existing block, rather than writing the same
- data again, <acronym>ZFS</acronym> just references the
- existing data on disk an additional time. This can offer
+ <para>When enabled,
+ <link linkend="zfs-term-deduplication">Deduplication</link>
+ uses the checksum of each block to detect duplicate blocks.
+ When a new block is about to be written and it is determined
+ to be a duplicate of an existing block, rather than writing
+ the same data again, <acronym>ZFS</acronym> just references
+ the existing data on disk an additional time. This can offer
tremendous space savings if your data contains many discrete
copies of the file information. Deduplication requires an
extremely large amount of memory, and most of the space
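A minimal sketch of enabling the feature on a dataset (dataset name hypothetical):

    # checksums of newly written blocks are compared against existing blocks from now on
    zfs set dedup=on storage/home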
@@ -1343,12 +1340,12 @@ dedup = 1.05, compress = 1.11, copies =
<title>Delegating Dataset Creation</title>
<para><command>zfs allow
- <replaceable>someuser</replaceable> create
- <replaceable>mydataset</replaceable></command>
- gives the specified user permission to create child datasets
- under the selected parent dataset. There is a caveat:
- creating a new dataset involves mounting it. That requires
- setting the <literal>vfs.usermount</literal> &man.sysctl.8; to
+ <replaceable>someuser</replaceable> create
+ <replaceable>mydataset</replaceable></command> gives the
+ specified user permission to create child datasets under the
+ selected parent dataset. There is a caveat: creating a new
+ dataset involves mounting it. That requires setting the
+ <literal>vfs.usermount</literal> &man.sysctl.8; to
<literal>1</literal> to allow non-root users to mount a
filesystem. There is another restriction aimed at preventing
abuse: non-root users must own the mountpoint where the file
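A sketch of the steps described above (user name hypothetical):

    # allow non-root users to mount file systems
    sysctl vfs.usermount=1
    # let the user create child datasets under storage/home
    zfs allow alice create storage/home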
@@ -1359,14 +1356,14 @@ dedup = 1.05, compress = 1.11, copies =
<title>Delegating Permission Delegation</title>
<para><command>zfs allow
- <replaceable>someuser</replaceable> allow
- <replaceable>mydataset</replaceable></command>
- gives the specified user the ability to assign any permission
- they have on the target dataset (or its children) to other
- users. If a user has the <literal>snapshot</literal>
- permission and the <literal>allow</literal> permission, that
- user can then grant the <literal>snapshot</literal> permission
- to some other users.</para>
+ <replaceable>someuser</replaceable> allow
+ <replaceable>mydataset</replaceable></command> gives the
+ specified user the ability to assign any permission they have
+ on the target dataset (or its children) to other users. If a
+ user has the <literal>snapshot</literal> permission and the
+ <literal>allow</literal> permission, that user can then grant
+ the <literal>snapshot</literal> permission to some other
+ users.</para>
</sect2>
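For example (user name hypothetical):

    # alice holds both snapshot and allow, so she can pass snapshot on to other users
    zfs allow alice snapshot,allow storage/home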
</sect1>
@@ -1401,8 +1398,8 @@ dedup = 1.05, compress = 1.11, copies =
<title>ZFS on i386</title>
<para>Some of the features provided by <acronym>ZFS</acronym>
- are RAM-intensive, and may require tuning for
- maximum efficiency on systems with limited
+ are RAM-intensive, and may require tuning for maximum
+ efficiency on systems with limited
<acronym>RAM</acronym>.</para>
<sect3>
@@ -1411,16 +1408,15 @@ dedup = 1.05, compress = 1.11, copies =
<para>As a bare minimum, the total system memory should be at
least one gigabyte. The amount of recommended
<acronym>RAM</acronym> depends upon the size of the pool and
- which <acronym>ZFS</acronym> features are used. A
- general rule of thumb is 1 GB of RAM for every
- 1 TB of storage. If the deduplication feature is used,
- a general rule of thumb is 5 GB of RAM per TB of
- storage to be deduplicated. While some users successfully
- use <acronym>ZFS</acronym> with less <acronym>RAM</acronym>,
- systems under heavy load
- may panic due to memory exhaustion. Further tuning may be
- required for systems with less than the recommended RAM
- requirements.</para>
+ which <acronym>ZFS</acronym> features are used. A general
+ rule of thumb is 1 GB of RAM for every 1 TB of
+ storage. If the deduplication feature is used, a general
+ rule of thumb is 5 GB of RAM per TB of storage to be
+ deduplicated. While some users successfully use
+ <acronym>ZFS</acronym> with less <acronym>RAM</acronym>,
+ systems under heavy load may panic due to memory exhaustion.
+ Further tuning may be required for systems with less than
+ the recommended RAM requirements.</para>
</sect3>
<sect3>
@@ -1686,7 +1682,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
<emphasis>Log</emphasis> - <acronym>ZFS</acronym>
Log Devices, also known as ZFS Intent Log
(<link
- linkend="zfs-term-zil"><acronym>ZIL</acronym></link>)
+ linkend="zfs-term-zil"><acronym>ZIL</acronym></link>)
move the intent log from the regular pool devices
to a dedicated device, typically an
<acronym>SSD</acronym>. Having a dedicated log
@@ -1703,7 +1699,7 @@ vfs.zfs.vdev.cache.size="5M"</programlis
<emphasis>Cache</emphasis> - Adding a cache vdev
to a zpool will add the storage of the cache to
the <link
- linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link>.
+ linkend="zfs-term-l2arc"><acronym>L2ARC</acronym></link>.
Cache devices cannot be mirrored. Since a cache
device only stores additional copies of existing
data, there is no risk of data loss.</para>
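A sketch of adding both vdev types (SSD partition names hypothetical):

    # dedicated intent log (ZIL) device
    zpool add storage log ada4p1
    # L2ARC cache device; it only holds extra copies of existing data
    zpool add storage cache ada4p2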
@@ -1870,9 +1866,9 @@ vfs.zfs.vdev.cache.size="5M"</programlis
<row>
<entry xml:id="zfs-term-snapshot">Snapshot</entry>
- <entry>The <link
- linkend="zfs-term-cow">copy-on-write</link>
- (<acronym>COW</acronym>) design of
+ <entry>The
+ <link linkend="zfs-term-cow">copy-on-write</link>
+ (<acronym>COW</acronym>) design of
<acronym>ZFS</acronym> allows for nearly instantaneous
consistent snapshots with arbitrary names. After taking
a snapshot of a dataset (or a recursive snapshot of a
@@ -1974,11 +1970,10 @@ vfs.zfs.vdev.cache.size="5M"</programlis
compression property, which defaults to off. This
property can be set to one of a number of compression
algorithms, which will cause all new data that is
- written to the dataset to be compressed.
- In addition to the reduction in disk usage,
- this can also increase read and write throughput, as
- only the smaller compressed version of the file needs to
- be read or written.
+ written to the dataset to be compressed. In addition to
+ the reduction in disk usage, this can also increase read
+ and write throughput, as only the smaller compressed
+ version of the file needs to be read or written.
<note>
<para><acronym>LZ4</acronym> compression is only
@@ -2082,11 +2077,10 @@ vfs.zfs.vdev.cache.size="5M"</programlis
Quota</entry>
<entry>A reference quota limits the amount of space a
- dataset can consume by enforcing a hard limit.
- However, this hard limit includes only
- space that the dataset references and does not include
- space used by descendants, such as file systems or
- snapshots.</entry>
+ dataset can consume by enforcing a hard limit. However,
+ this hard limit includes only space that the dataset
+ references and does not include space used by
+ descendants, such as file systems or snapshots.</entry>
</row>
<row>
@@ -2145,8 +2139,8 @@ vfs.zfs.vdev.cache.size="5M"</programlis
<filename>storage/home/bob</filename>, and another
dataset tries to use all of the free space, at least
10 GB of space is reserved for this dataset. In
- contrast to a regular <link
- linkend="zfs-term-reservation">reservation</link>,
+ contrast to a regular
+ <link linkend="zfs-term-reservation">reservation</link>,
space used by snapshots and descendant datasets is not
counted against the reservation. As an example, if a
snapshot was taken of
@@ -2186,9 +2180,9 @@ vfs.zfs.vdev.cache.size="5M"</programlis
<entry>Individual devices can be put in an
<literal>Offline</literal> state by the administrator if
there is sufficient redundancy to avoid putting the pool
- or vdev into a <link
- linkend="zfs-term-faulted">Faulted</link> state. An
- administrator may choose to offline a disk in
+ or vdev into a
+ <link linkend="zfs-term-faulted">Faulted</link> state.
+ An administrator may choose to offline a disk in
preparation for replacing it, or to make it easier to
identify.</entry>
</row>