[Bug 237807] ZFS: ZVOL writes fast, ZVOL reads abysmal...
bugzilla-noreply at freebsd.org
Fri Aug 9 12:44:58 UTC 2019
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237807
--- Comment #10 from Nils Beyer <nbe at renzel.net> ---
Maybe I'm too stupid, I don't know. I can't get the pool to read fast...
I created the pool from scratch and updated to the latest 12-STABLE, but reads
from that pool are still abysmal.
Current pool layout:
--------------------------------------------------------------------------------
        NAME           STATE     READ WRITE CKSUM
        veeam-backups  ONLINE       0     0     0
          raidz1-0     ONLINE       0     0     0
            da0        ONLINE       0     0     0
            da1        ONLINE       0     0     0
            da2        ONLINE       0     0     0
          raidz1-1     ONLINE       0     0     0
            da4        ONLINE       0     0     0
            da5        ONLINE       0     0     0
            da7        ONLINE       0     0     0
          raidz1-2     ONLINE       0     0     0
            da9        ONLINE       0     0     0
            da14       ONLINE       0     0     0
            da17       ONLINE       0     0     0
          raidz1-3     ONLINE       0     0     0
            da18       ONLINE       0     0     0
            da21       ONLINE       0     0     0
            da22       ONLINE       0     0     0
          raidz1-4     ONLINE       0     0     0
            da6        ONLINE       0     0     0
            da15       ONLINE       0     0     0
            da16       ONLINE       0     0     0
          raidz1-5     ONLINE       0     0     0
            da11       ONLINE       0     0     0
            da8        ONLINE       0     0     0
            da3        ONLINE       0     0     0
          raidz1-6     ONLINE       0     0     0
            da23       ONLINE       0     0     0
            da20       ONLINE       0     0     0
            da19       ONLINE       0     0     0
          raidz1-7     ONLINE       0     0     0
            da10       ONLINE       0     0     0
            da12       ONLINE       0     0     0
            da13       ONLINE       0     0     0

errors: No known data errors
--------------------------------------------------------------------------------
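For reference, a layout like the one above (eight 3-disk raidz1 top-level VDEVs)
would have been created with something roughly like the following; the actual
create command wasn't posted, so this reconstruction is an assumption:
--------------------------------------------------------------------------------
# assumed reconstruction of the pool creation (device names taken from the
# zpool status output above, options not from the original report)
zpool create veeam-backups \
    raidz1 da0  da1  da2  \
    raidz1 da4  da5  da7  \
    raidz1 da9  da14 da17 \
    raidz1 da18 da21 da22 \
    raidz1 da6  da15 da16 \
    raidz1 da11 da8  da3  \
    raidz1 da23 da20 da19 \
    raidz1 da10 da12 da13
--------------------------------------------------------------------------------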
Used bonnie++:
--------------------------------------------------------------------------------
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
veeambackups.local  64G   141  99 471829  59 122365  23     5   8 40084   8  1016  19
Latency             61947us     348ms     618ms    1634ms     105ms     190ms
Version  1.97       ------Sequential Create------ --------Random Create--------
veeambackups.local  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ 17378  58 +++++ +++ +++++ +++ 32079  99
Latency              2424us      44us     388ms    2295us      36us      91us
1.97,1.97,veeambackups.local,1,1565375578,64G,,141,99,471829,59,122365,23,5,8,40084,8,1016,19,16,,,,,+++++,+++,+++++,+++,17378,58,+++++,+++,+++++,+++,32079,99,61947us,348ms,618ms,1634ms,105ms,190ms,2424us,44us,388ms,2295us,36us,91us
--------------------------------------------------------------------------------
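The exact bonnie++ invocation isn't shown above; a run producing that output
would look roughly like the following (the test directory and user are
assumptions, only the 64G size and 16*1024 file count come from the results):
--------------------------------------------------------------------------------
# assumed invocation: 64 GB test size, 16*1024 small files, run as root
bonnie++ -d /veeam-backups -s 64g -n 16 -u root
--------------------------------------------------------------------------------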
Tested locally; no iSCSI, no NFS.
"gstat" tells me that the hard disks are only about 15% busy.
CPU load averages: 0.51, 0.47, 0.39
ZFS recordsize is the default 128k.
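With the disks mostly idle and the CPU nearly so, one thing worth watching
during a slow read (a diagnostic sketch, not something from the report) is
whether prefetch and the ARC are actually doing anything:
--------------------------------------------------------------------------------
# is file-level prefetch enabled? (0 = enabled)
sysctl vfs.zfs.prefetch_disable

# prefetch effectiveness while a slow read is running
sysctl kstat.zfs.misc.zfetchstats.hits kstat.zfs.misc.zfetchstats.misses

# ARC size and hit/miss counters
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.hits \
       kstat.zfs.misc.arcstats.misses
--------------------------------------------------------------------------------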
Maybe too many top-level VDEVs?
Maybe the HBA sucks for ZFS? A simple parallel dd using:
--------------------------------------------------------------------------------
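# read 1 GiB sequentially from each of the 24 raw disks in parallel (bypasses ZFS)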
for NR in `jot 24 0`; do
dd if=/dev/da${NR} of=/dev/null bs=1M count=1k &
done
--------------------------------------------------------------------------------
delivers about 90 MB/s from each of the 24 drives during the run, which works
out to 90*24 = 2160 MB/s in total. That should be plenty of raw bandwidth for
the pool.
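For comparison, the same kind of sequential read done through ZFS itself would
show whether the slowdown is in the ZFS read path rather than in the hardware;
the file name below is hypothetical (e.g. any large existing file on the pool):
--------------------------------------------------------------------------------
# hypothetical example - read a large existing file on the pool through ZFS
dd if=/veeam-backups/somebackupfile of=/dev/null bs=1M
--------------------------------------------------------------------------------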
I'm really out of ideas apart from trying 13-CURRENT or FreeNAS or Linux or
whatever else - which I'd like to avoid...
Needless to say, read performance via NFS or iSCSI is still pathetic, which
makes the current setup unusable as an ESXi datastore and makes me afraid of
future restore jobs in the TB size range...
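Since the bug is about zvol reads specifically, the zvol's own properties are
probably worth re-checking as well; the dataset name below is a placeholder,
not taken from the report:
--------------------------------------------------------------------------------
# placeholder zvol name - check block size and caching settings on the zvol
zfs get volblocksize,volsize,primarycache,compression veeam-backups/<zvolname>
--------------------------------------------------------------------------------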