System Freezes When Mbuf Cluster Usage Rises
Ed Mandy
emandy at triticom.com
Sat Nov 10 15:58:29 PST 2007
We are using FreeBSD to run the Dante SOCKS proxy server to accelerate a
high-latency (approximately 1-second round-trip) network link. We need to
support many concurrent transfers of large files. To do this, we have set
the machine up with the following parameters.
We compiled Dante with the following setting in include/config.h:
SOCKD_BUFSIZETCP = (1024*1000)
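(For reference, the settings in Dante's include/config.h are C preprocessor
defines, so the change is presumably along these lines; the exact stock
default we replaced is omitted here:)

#define SOCKD_BUFSIZETCP (1024 * 1000) /* enlarged from the stock default */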
/etc/sysctl.conf:
kern.ipc.maxsockbuf=4194304
net.inet.tcp.sendspace=2097152
net.inet.tcp.recvspace=2097152
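(These take effect at boot; for quicker iteration during testing, the same
values can also be set on a running system with sysctl(8):)

sysctl kern.ipc.maxsockbuf=4194304
sysctl net.inet.tcp.sendspace=2097152
sysctl net.inet.tcp.recvspace=2097152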
/boot/loader.conf:
kern.ipc.maxsockets="0" (also tried 25600, 51200, 102400, and 409600)
kern.ipc.nmbclusters="0" (also tried 102400 and 409600)
(Looking at the code, it appears that 0 means no maximum is enforced for
these two tunables.)
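After booting, the values the kernel actually chose can be verified with:

sysctl kern.ipc.nmbclusters kern.ipc.maxsockets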
If kern.ipc.nmbclusters is set to 25600, the system will hard freeze when
"vmstat -z" shows the number of clusters reaches 25600. If
kern.ipc.nmbclusters is set to 0 (or 102400), the system will hard freeze
when "vmstat -z" shows the number of clusters is around 66000. When it
freezes, the number of Kbytes allocated to network (as shown by
"netstat -m") is roughly 160,000 (160MB).
For a while, we thought there might be a limit of 65536 mbuf clusters, so
we tested building the kernel with MCLSHIFT=12, which makes each mbuf
cluster 4096 bytes. With this configuration, nmbclusters only reached about 33000
before the system froze. The number of Kbytes allocated to network (as
shown by "netstat -m") still maxed out at around 160,000.
Now, it seems that we are running into some other memory limitation that
occurs when our network allocation gets close to 160MB. We have tried
tuning parameters such as KVA_PAGES, vm.kmem_size, vm.kmem_size_max, etc.
However, we are unsure whether the changes we made there helped in any way.
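For reference, KVA_PAGES is a kernel configuration option and the kmem
sizes are loader tunables; a sketch of that sort of tuning follows (the
values shown are only examples, not the exact ones we used):

# kernel configuration file (i386): enlarge kernel virtual address space
# from the 1 GB default to 2 GB
options KVA_PAGES=512

# /boot/loader.conf: kernel memory map size, in bytes (example: 400 MB)
vm.kmem_size="419430400"
vm.kmem_size_max="419430400"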
This is all being done on Celeron 2.8GHz machines with 3+ GB of RAM running
FreeBSD 5.3. We are very much tied to this platform at the moment, and
upgrading is not a realistic option for us. We would like to tune the
systems so that they do not lock up. We can currently work around the
problem (by using smaller buffers and the like), but that comes at the
expense of network throughput, which is less than ideal.
Are there any other parameters that would help us allocate more memory to
kernel networking? What other options should we look into?
Thanks,
Ed Mandy