[NTLUG:Discuss] Apache 1.3.27 w/ RedHat 7.3 driving....re-visited
kbrannen at gte.net
Thu Dec 19 11:10:28 CST 2002
Douglas King wrote:
> I have begun having a problem with my Apache server. The problem is as
> follows:
>
> I am not able to run / execute ANY Perl or cgi scripts...at all. Right
> now, I cannot even ADD another site. This server IS RUNNING
> approximately 1,100 sites...MOST of them are virtual (second level
> domain hosting) http://www.xyz.abc.com type hosting....
>
> In the error log for Apache, I get the following error message:
>
> [Tue Nov 26 21:47:33 2002] [warn] (24)Too many open files: unable to open a
> file descriptor above 15, you may need to increase the number of
> descriptors
> [Tue Nov 26 21:47:34 2002] [notice] Apache/1.3.23 (Unix) (Red-Hat/Linux)
> mod_ssl/2.8.7 OpenSSL/0.9.6b DAV/1.0.3 PHP/4.1.2 mod_perl/1.26 configured
> -- resuming normal operations
> [Tue Nov 26 21:47:34 2002] [notice] suEXEC mechanism enabled (wrapper:
> /usr/sbin/suexec)
> [Tue Nov 26 21:47:34 2002] [notice] Accept mutex: sysvsem (Default:
> sysvsem)
> [Tue Nov 26 21:47:52 2002] [error] [client 66.14.30.174] (24)Too many open
> files: couldn't spawn child process:
> /var/www/cgi-bin/domainname/mycgi_list.cgi
>
> AND more recently:
>
> [Wed Dec 18 19:59:48 2002] [crit] [client 64.12.96.73] (24)Too many open
> files: /.htaccess pcfg_openfile: unable to check htaccess file, ensure
> it is readable
...
> Here is the info on the server:
...
> open files (-n) 1024
...
> Has anybody run across this problem? If so....what is my solution?
At the risk of guessing wrong, I'd say that you can't open enough files. :-)
It would seem that running that many websites on one machine does have its
hazards. You either need to run fewer sites, or increase the resources the
machine can use. I assume you wrote because you want to do the latter.
There are two basic kernel limits that control this: open files per process,
and open files for the whole machine; I suspect you're hitting the
machine-wide limit. As root, do:
cat /proc/sys/fs/file-max
That will show you a number, which is the current max (e.g. mine is 39321).
Add something to that (e.g. I might do 50000) and:
echo 50000 > /proc/sys/fs/file-max
and see if that makes things better. To be sure everything will run well,
you may want to restart the httpd service. Assuming that fixes it for you,
you can put the "echo ..." line in your boot script (/etc/rc.d/rc.local on Red
Hat; other distros call it boot.local or something similar).
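To put that all in one place, here's a minimal sketch of the sequence; it
assumes Red Hat's /etc/rc.d/rc.local and that 50000 turns out to be a big
enough number (adjust to taste):

# See what the kernel will allow, and how much of it is already in use
# (file-nr reports: handles allocated, handles free, max)
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr

# Raise the machine-wide limit for the running kernel
echo 50000 > /proc/sys/fs/file-max

# Make it stick across reboots
echo 'echo 50000 > /proc/sys/fs/file-max' >> /etc/rc.d/rc.local

# Restart Apache so all the children start fresh under the new limit
service httpd restart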
If that doesn't help you, then it's the per-process limit that's affecting you.
The only way I know to raise the compiled-in ceiling is to change
/usr/include/linux/fs.h, increase NR_FILE, then compile and install the new
kernel (it may also require a ulimit command at runtime; I don't remember).
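Before going to the trouble of a kernel rebuild, it may be worth seeing
whether a plain ulimit gets you there; your "open files (-n) 1024" above is
the per-process default, and root can usually raise it for a shell and
anything started from it. A sketch only; the idea of putting the line in
/etc/init.d/httpd is my assumption, so eyeball the script first:

# Check the current per-process limit for this shell
ulimit -n

# Raise it (root only), then restart Apache from the same shell
# so the httpd children inherit the higher limit
ulimit -n 4096
service httpd restart

# To make it permanent, the same "ulimit -n 4096" line could go near
# the top of /etc/init.d/httpd, before httpd gets started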
I doubt you have this problem, but check to see if you're running out of
inodes; "df -i" will show you.
HTH,
Kevin