Need docs, info, pointers to source, whatever.
jason at dstc.edu.au
Mon Aug 19 09:00:00 CDT 2002
On 19 Aug 2002, Brian K. Jones wrote:
> "What is the purpose/significance of the files in /proc/sys/net/core/?"
all of the files? the main ones people try to toggle are:
-rw-r--r-- 1 root root 0 Aug 19 23:51 rmem_default
-rw-r--r-- 1 root root 0 Aug 19 23:51 rmem_max
-rw-r--r-- 1 root root 0 Aug 19 23:51 wmem_default
-rw-r--r-- 1 root root 0 Aug 19 23:51 wmem_max
from what i've had people explain to me, the receive-memory (rmem) and
send-memory (wmem) defaults and maximums set socket buffer sizes, in bytes,
for network-related operations: the *_default values are what a new socket
starts with, and the *_max values are the ceiling a process can request
via setsockopt() (i.e. if the default is 128K, each socket gets that much
buffer space by default; at 256K, twice as much, and so on).
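you can read the current values (in bytes) straight out of /proc, e.g.:

```shell
# show the current socket buffer defaults and maximums (values in bytes)
for f in rmem_default rmem_max wmem_default wmem_max; do
    printf '%s = %s\n' "$f" "$(cat /proc/sys/net/core/$f)"
done
```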
temporarily increasing these values on nfs servers can be beneficial: it
gives nfsd more buffer space, and on highly loaded servers nfsd can
sometimes be memory starved. the usual process is to modify the nfs init
script to bump up these values, start nfsd, and then drop the values back
down. if you leave them high (or too high) you'll find yourself running
out of memory for your system/applications, since each network process
may allocate that much memory (e.g. setting rmem_max to 1M and then
firing up 1000 apache processes might be a bad thing...)
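as a sketch of that bump-start-drop dance (the 256K figure and the init
script path are placeholders, not recommendations; needs root, obviously):

```shell
# save the current maximums, raise them while nfsd starts, then restore.
# 262144 (256K) is just an illustrative value.
old_rmem=$(cat /proc/sys/net/core/rmem_max)
old_wmem=$(cat /proc/sys/net/core/wmem_max)

echo 262144 > /proc/sys/net/core/rmem_max
echo 262144 > /proc/sys/net/core/wmem_max

/etc/rc.d/init.d/nfs start        # nfsd sizes its socket buffers now

echo "$old_rmem" > /proc/sys/net/core/rmem_max   # drop back down so every
echo "$old_wmem" > /proc/sys/net/core/wmem_max   # other process doesn't get it
```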
> kind) 'it increases the buffer sizes'. Of what? Which
> processes/services use these files to get their buffer sizes? Why are
any process that uses networking i guess. from nfsd to apache or whatever.
> there so many files to control buffer sizes? Buffer for what - there
because linux is infinitely configurable? that said, i usually don't
touch these settings unless someone indicates some concrete benefit or
debugging reason to do so..
> In the end, I'm trying to get better NFS/Autofs *CLIENT* performance out
> of 3 Dell 2650's with Broadcom 5701 NICs. I've upgraded the kernel on
> one of them to 2.4.19, and the others are running RH's 2.4.18-5, which
> is the updated kernel via up2date.
can you get reasonable performance between the client and server without
nfs? i.e. testing with some other network layer test - if you're getting
in the hundreds of megabits, then at least you know it's just nfs you need
to debug, not the network layer/cards themselves..
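for example, if you have netperf (or ttcp) around - the hostname here is
just a placeholder for your nfs server:

```shell
# on the server:  run the netperf listener
netserver

# on the client:  measure raw TCP throughput to the server
netperf -H nfsserver -t TCP_STREAM
```

if that reports wire-speed-ish numbers, the cards and switch are fine and
you can concentrate on nfs itself.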
> At this point, I know that the Broadcom's are not the greatest cards to
> be running under Linux, so I'll be switching one of them today to an
> Intel GE card, but I feel like I should be doing better than the numbers
> I'm getting from my tests. Running bonnie locally is about 10 times
> faster than running it using a mounted directory! :-/
> Any (relatively current - 2.4 + ) pointers to NFS *CLIENT* performance
> tuning under Linux would be appreciated. Please don't point me to the
> HOWTO - I can recite it by now.
uhm, please search the archives.. there is a good redhat performance tuning
site by one of the redhat guys which includes NFS-related tuning.
let us know how you go. i for one would love to know if linux will ever let
me (successfully) use > 1K packet sizes for nfs mounts.. it's never worked
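(for anyone searching the archives later: the knob in question is the
rsize/wsize mount option; the server name and mount point below are
made up for illustration:)

```shell
# request 8K read/write transfer sizes on the mount - the server has to
# agree, and whether sizes above 1K actually work is exactly the question
mount -t nfs -o rsize=8192,wsize=8192 nfsserver:/export /mnt/nfs
```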
More information about the Linux-PowerEdge