[RFC] [patch] periodic status-zfs: list pools in daily emails
Glen Barber
gjb at FreeBSD.org
Wed Jun 29 11:21:17 UTC 2011
On 6/29/11 6:37 AM, Glen Barber wrote:
> I will reply later today with output of the script with an unhealthy pool, and
> will make listing the pools configurable. I imagine an empty line would
> certainly make it more readable in either case. I would be reluctant to
> replace 'status' output with 'list' output for healthy pools mostly to
> avoid headaches for people parsing their daily email, specifically
> looking for (or missing) 'all pools are healthy.'
>
Might as well do this now, in case I don't have time later today.
For completeness, I took one drive in each of my pools offline. (Pardon
the long lines.) I also made listing the pools configurable: it is enabled
by default, but runs only if daily_status_zfs_enable=YES.
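To illustrate, the knobs in /etc/periodic.conf would end up looking
something like this (variable names as in the patch below; the values here
are only an example):

    # Example /etc/periodic.conf overrides
    daily_status_zfs_enable="YES"               # master switch; the script does nothing without it
    daily_status_zfs_zpool_list_enable="NO"     # opt out of the new 'zpool list' output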
Feedback would be appreciated.
Regards,
--
Glen Barber | gjb at FreeBSD.org
FreeBSD Documentation Project
-------------- next part --------------
Checking status of zfs pools:
NAME     SIZE  ALLOC   FREE    CAP  DEDUP    HEALTH  ALTROOT
zroot    456G   146G   310G    32%  1.00x  DEGRADED  -
zstore   928G   258G   670G    27%  1.00x  DEGRADED  -
  pool: zroot
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 0 in 2h40m with 0 errors on Thu Jun 16 00:12:47 2011
config:

        NAME                                            STATE     READ WRITE CKSUM
        zroot                                           DEGRADED     0     0     0
          mirror-0                                      DEGRADED     0     0     0
            gptid/f877c64a-69c3-11df-aff1-001cc019b4b8  ONLINE       0     0     0
            gptid/fa7fd19a-69c3-11df-aff1-001cc019b4b8  OFFLINE      0     0     0

errors: No known data errors

  pool: zstore
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 0 in 2h40m with 0 errors on Thu Jun 16 15:30:08 2011
config:

        NAME                                            STATE     READ WRITE CKSUM
        zstore                                          DEGRADED     0     0     0
          mirror-0                                      DEGRADED     0     0     0
            gptid/61d0cdf8-c135-11df-8b72-001cc019b4b8  ONLINE       0     0     0
            gptid/645560ad-c135-11df-8b72-001cc019b4b8  OFFLINE      0     0     0

errors: No known data errors
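For the record, recovering from this self-inflicted state is just the
'action' text above applied verbatim, e.g. with the device names from my
pools:

    # bring the administratively offlined devices back online
    zpool online zroot  gptid/fa7fd19a-69c3-11df-aff1-001cc019b4b8
    zpool online zstore gptid/645560ad-c135-11df-8b72-001cc019b4b8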
-------------- next part --------------
Index: periodic/daily/404.status-zfs
===================================================================
--- periodic/daily/404.status-zfs (revision 223645)
+++ periodic/daily/404.status-zfs (working copy)
@@ -16,12 +16,21 @@
 	echo
 	echo 'Checking status of zfs pools:'
 
-	out=`zpool status -x`
-	echo "$out"
+	case "$daily_status_zfs_zpool_list_enable" in
+	[Yy][Ee][Ss])
+		lout=`zpool list`
+		echo "$lout"
+		echo
+		;;
+	*)
+		;;
+	esac
+	sout=`zpool status -x`
+	echo "$sout"
 	# zpool status -x always exits with 0, so we have to interpret its
 	# output to see what's going on.
-	if [ "$out" = "all pools are healthy" \
-	    -o "$out" = "no pools available" ]; then
+	if [ "$sout" = "all pools are healthy" \
+	    -o "$sout" = "no pools available" ]; then
 		rc=0
 	else
 		rc=1
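For reference, the affected section of 404.status-zfs reads roughly as
follows once the patch is applied (reconstructed from the diff above;
indentation approximated):

    echo
    echo 'Checking status of zfs pools:'

    # Optionally print 'zpool list' output before the health check;
    # controlled by daily_status_zfs_zpool_list_enable.
    case "$daily_status_zfs_zpool_list_enable" in
    [Yy][Ee][Ss])
            lout=`zpool list`
            echo "$lout"
            echo
            ;;
    *)
            ;;
    esac

    sout=`zpool status -x`
    echo "$sout"

    # zpool status -x always exits with 0, so we have to interpret its
    # output to see what's going on.
    if [ "$sout" = "all pools are healthy" \
        -o "$sout" = "no pools available" ]; then
            rc=0
    else
            rc=1
    fi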
Index: defaults/periodic.conf
===================================================================
--- defaults/periodic.conf (revision 223645)
+++ defaults/periodic.conf (working copy)
@@ -96,6 +96,7 @@
 
 # 404.status-zfs
 daily_status_zfs_enable="NO"				# Check ZFS
+daily_status_zfs_zpool_list_enable="YES"		# List ZFS pools
 
 # 405.status-ata_raid
 daily_status_ata_raid_enable="NO"			# Check ATA raid status
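With daily_status_zfs_enable=YES set in /etc/periodic.conf, the new output
can be checked without waiting for the nightly run by invoking the script
by hand:

    sh /etc/periodic/daily/404.status-zfs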