Areca vs. ZFS performance testing.
Danny Carroll
fbsd at dannysplace.net
Mon Nov 3 22:12:40 PST 2008
Ivan Voras wrote:
> Danny Carroll wrote:
>
>> Any thoughts on this setup as well as advice on what options to give to
>> bonnie++ (or suggestions on another disk testing package) are very welcome.
>
> I'd suggest two more tests, because bonnie++ won't tell you the
> performance of random IO and file system overhead:
>
> 1) randomIO: http://arctic.org/~dean/randomio/
> 2) blogbench: http://www.pureftpd.org/project/blogbench
>
> Be sure to select appropriate parameters for both (and the same
> parameters in every test so they can be compared) and study how they are
> used so you don't, for example, benchmark your system drive instead of
> the array :) ! (try not to put the system on the array - use the array
> only for benchmarks).
>
> For example, use blogbench "-c 30 -i 20 -r 40 -W 5 -w 5" to simulate a
> read-mostly environment.
>
Apologies if this comes twice.
Thanks for the info. I'll put together a few tests based on the test
scenarios already discussed.
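Something like the following is what I have in mind; the mountpoint and
the bonnie++ file size are just placeholders for my setup, not
recommendations:

    # run everything against the ZFS pool, not the system disk
    mkdir -p /bigarray/bench

    # bonnie++: file size well above RAM so the ARC can't serve it all from cache
    bonnie++ -d /bigarray/bench -s 16g -u nobody

    # blogbench with the read-mostly mix suggested above
    blogbench -d /bigarray/bench -c 30 -i 20 -r 40 -W 5 -w 5

I'll check the randomio usage output before settling on its parameters.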
On another note, slightly OT: I've been tuning the system a little and
have already seen some gains. Apart from the ZFS tuning already
mentioned, I have also done a few other things:
- Forced 1000baseTX mode on the NIC
- Experimented with jumbo frames and device polling.
- Tuned a few network IO parameters.
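For the record, these were roughly the commands involved; em0 is just an
example interface name, jumbo frames also need switch support, and
polling needs a kernel built with options DEVICE_POLLING:

    # force gigabit instead of autonegotiation
    ifconfig em0 media 1000baseTX mediaopt full-duplex
    # jumbo frames
    ifconfig em0 mtu 9000
    # per-interface device polling (driver must support it)
    ifconfig em0 polling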
These really have no relevance to the tests I want to do (Areca vs. ZFS),
but it was interesting to note the following:
- Device polling resulted in a performance degradation.
  It's possible that I did not tune the device polling sysctl
  parameters correctly, so I will revisit this.
- Tuning sysctl params gave the best results:
  I've been able to double my Samba throughput
  (example settings further down).
- Jumbo frames had no noticeable effect.
- I have seen sustained ~130MB/s reads from ZFS:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
bigarray    1.29T  3.25T  1.10K      0   140M      0
bigarray    1.29T  3.25T  1.00K      0   128M      0
bigarray    1.29T  3.25T    945      0   118M      0
bigarray    1.29T  3.25T  1.05K      0   135M      0
bigarray    1.29T  3.25T  1.01K      0   129M      0
bigarray    1.29T  3.25T    994      0   124M      0
            ad4              ad6              ad8             cpu
  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
  0.00   0  0.00  65.90 375 24.10  63.74 387 24.08   0  0 19  2 78
  0.00   0  0.00  66.36 357 23.16  63.93 370 23.11   0  0 23  2 75
 16.00   0  0.00  64.84 387 24.51  63.79 389 24.20   0  0 23  2 75
 16.00   2  0.03  68.09 407 27.04  64.98 409 25.98   0  0 28  2 70
Notes:
ad4 is the system drive and is not part of ZFS. I forgot to include the
rest of the array drives (5 in total) in the iostat arguments.
These two sets of figures were not measured over the same time frame.
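Regarding the sysctl tuning above, these are the kind of knobs I mean.
The names below are the standard FreeBSD 7 TCP/socket buffer sysctls, and
the values are only what I happen to be experimenting with on this box,
not a recommendation:

    # /etc/sysctl.conf - larger socket buffers for bulk transfers over GigE
    kern.ipc.maxsockbuf=16777216
    net.inet.tcp.sendbuf_max=16777216
    net.inet.tcp.recvbuf_max=16777216
    net.inet.tcp.sendspace=262144
    net.inet.tcp.recvspace=262144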
I'm curious whether the ~130M figure shown above is user-data bandwidth
from the array or the total across all of the drives. In other words,
does it include reading the parity information? I think it does not,
because if I add up the per-drive iostat figures they come out greater
than what zpool iostat reports by a factor of 5/4 (100M in zpool iostat
= 5 x 25MB/s in standard iostat). If so, that is probably the most I
will see coming off the drives during a network transfer, given that
130MB/s is already over the limit of gigabit Ethernet.
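Rough numbers, assuming the pool is a single raidz over the 5 drives
(4 data + 1 parity), which is how I read the 5/4 ratio:

    per-drive read (iostat)           ~25 MB/s
    physical total across 5 drives     5 x 25 = ~125 MB/s  (data + parity)
    user data across 4 data drives     4 x 25 = ~100 MB/s  (what zpool iostat shows)

So zpool iostat seems to report only the user-data side of the reads.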
Lastly, the Windows client which performed these tests was measuring
local bandwidth at about 30-50MB/s. I believe this figure to be
incorrect (given how much I transferred in X seconds...)
Edit: Scratch that, I can't do math.
It was indeed transferring at about 50MB/s. I wonder why the iostat
measurements showed more IO than 50MB/s...?
-D