zfs
Brett Wynkoop
freebsd-arm at wynn.com
Sat Mar 14 03:45:18 UTC 2015
Mark-
I CC'd the list, as your questions and my answers might be of
interest to others.
On Fri, 13 Mar 2015 17:44:39 -0500
Mark Treacy <mark.treacy at gmail.com> wrote:
> Hi Brett,
>
> What options did you use when you create the zfs pool? Multiple
> devices?
No special options, just:
zpool create -m /export bbexport /dev/gpt/bbexport
(I already had the GPT label from when it was UFS.)
Then I used zfs set to turn off atime and turn on compression.
[root at beaglebone ~]# zfs get all bbexport | grep compress
bbexport  compressratio     2.03x  -
bbexport  compression       lzjb   local
bbexport  refcompressratio  2.10x  -
[root at beaglebone ~]#
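For anyone wanting to reproduce this, here is a sketch of how those two properties were set with zfs set (run as root; the dataset name is the pool from the zpool create above, and lzjb was the default compression algorithm back then):

```shell
# Disable access-time updates and enable compression on the pool's
# root dataset (pool name taken from the zpool create command above).
zfs set atime=off bbexport
zfs set compression=lzjb bbexport

# Verify the properties took effect.
zfs get atime,compression,compressratio bbexport
```

Child datasets inherit both properties, so setting them once at the top of the pool covers everything under /export.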
While I think there would be further space savings with dedup, I did
not want the system bogged down doing that work. The BeagleBone is,
after all, a pretty small system.
As already posted here are the excerpts from /boot/loader.conf
and /etc/rc.conf:
rc.conf:
#
zfs_enable="YES"
#
loader.conf:
zfs_load="YES"
# from zfstuning wiki
#
vm.kmem_size="256M"
vm.kmem_size_max="256M"
vfs.zfs.arc_max="24M"
vfs.zfs.vdev.cache.size="5M"
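In case it helps anyone tuning their own box, the loader.conf values can be confirmed after boot with sysctl; a quick sketch:

```shell
# Confirm the ZFS tunables from /boot/loader.conf were picked up at
# boot (sysctl reports the values in bytes, so 24M shows as 25165824).
sysctl vm.kmem_size vm.kmem_size_max
sysctl vfs.zfs.arc_max
sysctl vfs.zfs.vdev.cache.size
```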
With the above ARC size I am getting about 50% ARC cache hits:
[root at beaglebone ~]# zfsci
ARC efficiency 50.9707%
[root at beaglebone ~]#
My experience on my bigger x86 boxes tells me that I could get a better
ARC cache hit ratio if I kicked arc_max up some more, but I do not
think I want to hand any more main memory to ZFS on such a small box.
I wonder if we could do zfs on root.....hmmmm.
Of course I miss the advantages of RAID or mirrors since I have only
one disk in the pool, but I can at least do disk I/O to my USB flash
quickly and without crashing the system. I also get the advantage of a
compressed filesystem, which in addition to allowing me more storage
may account for the better write performance: there are fewer bits to
write, so the slowness of flash media is somewhat compensated for.
In a future test I plan to add a hub and attach three USB sticks to
form a raidz.
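For the curious, that experiment would look something like the sketch below. The device names are assumptions (whatever the hub actually enumerates), and the existing single-disk pool would have to be destroyed or given a different name first:

```shell
# See what da devices the USB hub enumerated.
camcontrol devlist

# Sketch of the planned 3-stick raidz (da0/da1/da2 are hypothetical;
# the old single-disk bbexport pool must be destroyed or exported
# before reusing the name).
zpool create -m /export bbexport raidz da0 da1 da2
zpool status bbexport
```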
>
> Since zfs checksums everything you may well just be seeing it
> detecting corruption and repairing it.
>
You could be right on this one. I think I need to dig out my other
BeagleBone, shift the SD and USB flash over to it, and run the
series of tests against UFS again on the new hardware. I say new
because I have never fired up the other BBone.
[root at beaglebone ~]# zpool status -v
  pool: bbexport
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: none requested
config:

        NAME            STATE   READ WRITE CKSUM
        bbexport        ONLINE     0     0    10
          gpt/bbexport  ONLINE     0     0    99

errors: Permanent errors have been detected in the following files:

        /export/src/contrib/llvm/tools/clang/lib/Sema/SemaExpr.cpp
        /export/ports/packages/All/perl5-5.18.4_11.txz
        /export/ports/distfiles/rsync-3.1.1.tar.gz
        /export/ports/distfiles/readline-6.3.tar.gz
        /export/ports/comms/wsjt/files/configure
        bbexport/ports:<0x253db>
[root at beaglebone ~]#
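Since the status output shows "scan: none requested", one more thing worth trying is a scrub, which re-reads and re-checksums every block in the pool; a sketch:

```shell
# Kick off a full scrub to re-verify every block against its checksum.
zpool scrub bbexport

# Watch progress and see whether the error counts keep growing
# (which would point at the hardware).
zpool status -v bbexport

# After restoring the damaged files from backup, the error log can
# be reset with:
zpool clear bbexport
```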
> That would support Ian's faulty hardware suggestion...
>
> - Mark
As does the output of zpool status above.
-Brett
--
wynkoop at wynn.com http://prd4.wynn.com/wynkoop/pgp-keys.txt
917-642-6925
929-272-0000
Amendment II
A well regulated militia, being necessary to the security of a free
state, the right of the people to keep and bear arms, shall not be
infringed.