zfs send/receive: is this slow?
Martin Matuska
mm at FreeBSD.org
Mon Oct 4 07:27:58 UTC 2010
Try using zfs receive with the -v flag (gives you some stats at the end):
# zfs send storage/bacula@transfer | zfs receive -v storage/compressed/bacula
And use the following sysctl (you may set that in /boot/loader.conf, too):
# sysctl vfs.zfs.txg.write_limit_override=805306368
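To make it persistent across reboots, the same value goes into
/boot/loader.conf as a tunable (quoted, as loader.conf expects):

vfs.zfs.txg.write_limit_override="805306368"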
I have good results with the 768MB write limit on systems with at least
8GB RAM. With 4GB RAM, you might want to set the TXG write limit to a
lower threshold (e.g. 256MB):
# sysctl vfs.zfs.txg.write_limit_override=268435456
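The values are just megabytes expressed in bytes, which you can verify
in the shell:

# echo $((768 * 1024 * 1024))
805306368
# echo $((256 * 1024 * 1024))
268435456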
You can experiment with this setting to get the best results on your
system. A value of 0 means the calculated default is used (which is
very high).
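If you want to compare several limits against the same stream, a rough
sketch (dataset names taken from this thread; the target dataset is
destroyed between runs, since zfs receive refuses to overwrite it):

#!/bin/sh
# Compare TXG write limits (256MB, 512MB, 768MB) on the same send/receive.
for limit in 268435456 536870912 805306368; do
    sysctl vfs.zfs.txg.write_limit_override=$limit
    # zfs receive fails if the target dataset exists, so clear it first.
    zfs destroy -r storage/compressed/bacula 2>/dev/null
    time sh -c "zfs send storage/bacula@transfer | zfs receive storage/compressed/bacula"
done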
During the operation you can observe what your disks actually do:
a) via ZFS pool I/O statistics:
# zpool iostat -v 1
b) via GEOM:
# gstat -a
mm
On 4. 10. 2010 4:06, Artem Belevich wrote:
> On Sun, Oct 3, 2010 at 6:11 PM, Dan Langille <dan at langille.org> wrote:
>> I'm rerunning my test after I had a drive go offline[1]. But I'm not
>> getting anything like the previous test:
>>
>> time zfs send storage/bacula@transfer | mbuffer | zfs receive
>> storage/compressed/bacula-buffer
>>
>> $ zpool iostat 10 10
>>                capacity     operations    bandwidth
>> pool         used  avail   read  write   read  write
>> ----------  -----  -----  -----  -----  -----  -----
>> storage     6.83T  5.86T      8     31  1.00M  2.11M
>> storage     6.83T  5.86T    207    481  25.7M  17.8M
>
> It may be worth checking individual disk activity using gstat -f 'da.$'
>
> Some time back I had one drive that was noticeably slower than the
> rest of the drives in a RAID-Z2 vdev and was holding everything back.
> SMART looked OK, there were no obvious errors, and yet performance was
> much worse than I'd expect. gstat clearly showed that one drive was
> almost constantly busy, with a much lower number of reads and writes
> per second than its peers.
>
> Perhaps the previously fast transfer rates were due to caching effects.
> That is, if all the metadata had already made it into the ARC, subsequent
> "zfs send" commands would avoid a lot of random seeks and show much
> better throughput.
>
> --Artem
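If caching is the explanation, the ARC counters should show it: a send
repeated with a warm cache should report far more hits than misses. A
quick check (a sketch, using the arcstats sysctls FreeBSD exposes):

# sysctl kstat.zfs.misc.arcstats.size
# sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses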