Areca vs. ZFS performance testing.
Nikolay Denev
ndenev at gmail.com
Thu Jan 8 01:45:53 PST 2009
On 8 Jan 2009, at 02:33, Danny Carroll wrote:
> I'd like to post some results of what I have found with my tests.
> I did a few different types of tests. Basically a set of 5-disk tests
> and a set of 12-disk tests.
>
> I did this because I only had 5 ports available on my onboard
> controller, and I wanted to see how the Areca compared to that. I also
> wanted to see comparisons between JBOD, passthru, and hardware RAID5.
>
> I have not tested raid6 or raidz2.
>
> You can see the results here:
> http://www.dannysplace.net/quickweb/filesystem%20tests.htm
>
> An explanation of each of the tests:
> ICH9_ZFS                    5-disk ZFS raidz test with onboard SATA ports.
> ARECAJBOD_ZFS               5-disk ZFS raidz test with Areca SATA ports
>                             configured in JBOD mode.
> ARECAJBOD_ZFS_NoWriteCache  5-disk ZFS raidz test with Areca SATA ports
>                             configured in JBOD mode and with disk caches
>                             disabled.
> ARECARAID                   5-disk ZFS single-disk test on an Areca
>                             RAID5 array.
> ARECAPASSTHRU               5-disk ZFS raidz test with Areca SATA ports
>                             configured in passthru mode, which means the
>                             Areca's onboard cache is active.
> ARECARAID-UFS2              5-disk UFS2 single-disk test on an Areca
>                             RAID5 array.
> ARECARAID-BIG               12-disk ZFS single-disk test on an Areca
>                             RAID5 array.
> ARECAPASSTHRU_12            12-disk ZFS raidz test with Areca SATA ports
>                             configured in passthru mode, which means the
>                             Areca's onboard cache is active.
>
>
> I'll probably be opting for the ARECAPASSTHRU_12 configuration, mainly
> because I do not need amazing read speeds (the network port would be
> saturated anyway) and I think the raidz implementation is more fault
> tolerant. By that I mean that if a disk read error occurs during a
> rebuild, then as I understand it, raidz will write off that block (and
> hopefully tell me about the dead files) but continue with the rest of
> the rebuild.
>
> This is something I'd love to test for real, just to see what happens,
> but I am not sure how I could do that. Perhaps removing one drive, then
> doing a few random writes to a remaining disk (or two), and seeing how
> the rebuild goes.
>
> Something else worth mentioning: when I converted from JBOD to
> passthru, I was able to re-import the disks without any problems. This
> must mean that the Areca passthru option does not alter the disks
> much, perhaps not at all.
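That re-import is plausible because ZFS keeps its vdev labels on the disks themselves, so as long as the controller mode change leaves the on-disk data untouched, the pool can be picked up again. A minimal sketch, assuming a pool named "tank":

```shell
# Hedged sketch, assuming the pool is named "tank".
# Export the pool before touching the controller configuration.
zpool export tank

# ...reconfigure the controller from JBOD to passthru, rebooting if needed...

# Re-import: zpool scans the devices for ZFS labels and reassembles the pool.
zpool import tank
zpool status tank
```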
>
> After a 21-hour rebuild I have to say I am not that keen to do more of
> these tests, but if there is something someone wants to see, I'll
> definitely consider it.
>
> One thing I am at a loss to understand is why turning off the disk
> caches when testing the JBOD performance produced almost identical
> (very slightly better) results. Perhaps the ZFS internal cache makes
> the disks' caches redundant? Comparing to the Areca passthru results
> (where the Areca cache is used) again shows similar numbers.
>
> -D
> _______________________________________________
> freebsd-fs at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe at freebsd.org"
There is a big difference between hardware RAID and ZFS raidz with 12
disks on the get_block test; maybe it would be interesting to rerun this
test with ZFS prefetch disabled?
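For the record, a sketch of how that could be done on FreeBSD, assuming a ZFS version that exposes the `vfs.zfs.prefetch_disable` knob (it is a loader tunable, so it takes effect at boot):

```shell
# Hedged sketch: disable ZFS file-level prefetch via a loader tunable,
# assuming the running ZFS version exposes vfs.zfs.prefetch_disable.
echo 'vfs.zfs.prefetch_disable="1"' >> /boot/loader.conf

# After a reboot, confirm the setting before rerunning the benchmark:
sysctl vfs.zfs.prefetch_disable
```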
--
Regards,
Nikolay Denev