zfs + NFS + FreeBSD with performance prob
Albert Shih
Albert.Shih at obspm.fr
Thu Jan 31 22:17:17 UTC 2013
Hi all,
I'm not sure whether the problem lies with FreeBSD, ZFS, or both, so I'm cross-posting
(I know that's bad form).
I have a server running FreeBSD 9.0 with a ZFS pool of 36 disks (not counting /,
which lives on separate disks).
Performance is very good locally on the server.
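For context, the filesystem is exported from the server roughly like this (the dataset name tank/export and the client network are placeholders, and using the ZFS sharenfs property rather than /etc/exports is an assumption about my setup):
# export the dataset over NFS via the ZFS sharenfs property
zfs set sharenfs="-network 192.168.1.0 -mask 255.255.255.0" tank/export
# NFS services enabled in /etc/rc.conf:
#   nfs_server_enable="YES"
#   mountd_enable="YES"
#   rpcbind_enable="YES"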
I have one NFS client running FreeBSD 8.3, and the performance over NFS is
very good.
For example, reading a local tarball on the client and extracting it over NFS onto the ZFS pool:
[root@ .tmp]# time tar xf /tmp/linux-3.7.5.tar
real 1m7.244s
user 0m0.921s
sys 0m8.990s
This client is connected at 1 Gbit/s and sits on the same network switch as the
server.
I have a second NFS client running FreeBSD 9.1-STABLE, and on this client the
performance is catastrophic: after one hour the same tar still had not finished.
Granted, this second client is connected at 100 Mbit/s and is not on the same switch,
but a tenfold drop in bandwidth should not turn ~2 minutes into ~90 minutes... :-(
For this second client I tried setting, on the ZFS/NFS server:
zfs set sync=disabled
but that changed nothing.
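Concretely, it was something like this (the dataset name tank/export is a placeholder, not my real name):
zfs set sync=disabled tank/export   # treat synchronous writes as asynchronous on the exported dataset
zfs get sync tank/export            # confirm the property now reads "disabled"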
On a third NFS client running Linux (a recent Ubuntu) I get almost the same
catastrophic performance, with or without sync=disabled.
All three NFS clients mount over TCP.
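The mounts look more or less like this (the server name, export path, and NFSv3 are placeholders/assumptions rather than my exact options):
# FreeBSD 8.3 / 9.1 clients
mount -t nfs -o nfsv3,tcp zfsserver:/tank/export /mnt/export
# Linux (Ubuntu) client
mount -t nfs -o vers=3,tcp zfsserver:/tank/export /mnt/export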
If I do a plain scp to the slow client I get a normal speed of ~9-10 MBytes/s, so the
raw network is not the problem.
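For reference, that test was just copying the same tarball outside of NFS (the user and host names below are placeholders):
scp /tmp/linux-3.7.5.tar user@slow-client:/tmp/   # sustains ~9-10 MB/s, essentially saturating the 100 Mbit/s link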
I also tried tuning the following sysctls to values found via Google:
net.inet.tcp.sendbuf_max: 2097152 -> 16777216
net.inet.tcp.recvbuf_max: 2097152 -> 16777216
net.inet.tcp.sendspace: 32768 -> 262144
net.inet.tcp.recvspace: 65536 -> 262144
net.inet.tcp.mssdflt: 536 -> 1452
net.inet.udp.recvspace: 42080 -> 65535
net.inet.udp.maxdgram: 9216 -> 65535
net.local.stream.recvspace: 8192 -> 65535
net.local.stream.sendspace: 8192 -> 65535
but that changed nothing either.
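In case it matters, they were applied roughly like this (shown for the first two; the others follow the same pattern):
# apply at runtime
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216
# and persisted in /etc/sysctl.conf for reboots:
#   net.inet.tcp.sendbuf_max=16777216
#   net.inet.tcp.recvbuf_max=16777216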
Does anyone have any ideas?
Regards.
JAS
--
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: jas at obspm.fr
Heure local/Local time:
Thu 31 Jan 2013 23:04:47 CET