Thursday, August 5, 2010

Tune max open files parameter in Linux

A busy web server handling thousands of connections may hit an error like “java.net.SocketException: Too many open files”. That is because the default limit on open files per process in Linux is 1024, and each connection consumes one file handle.

An open file may be a regular file, a directory, a block special file, a character special file, an executing text reference, a library, a stream, or a network file (Internet socket, NFS file or UNIX domain socket).
Linux has a global setting and a per-process setting to control the maximum number of file descriptors.
Global setting:
 The maximum number of file handles for the whole system. This value varies with memory size.
 

$sysctl fs.file-max
fs.file-max = 65535
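If the global limit ever needs raising, it can be changed at runtime with sysctl and made permanent in /etc/sysctl.conf. A sketch (the value 200000 is purely illustrative; both commands require root):

```shell
# Raise the system-wide file handle limit immediately (requires root)
sysctl -w fs.file-max=200000

# Make it survive reboots: append to /etc/sysctl.conf, then reload
echo "fs.file-max = 200000" >> /etc/sysctl.conf
sysctl -p
```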
Per-process setting:
This value applies per process (values in child processes don’t count towards the parent process).
The default value is 1024, which doesn’t seem to vary with memory size.


$ulimit -a | grep "open files"
open files                    (-n) 1024 
The value can be changed with the “ulimit -n” command, but it is only effective in the current shell session. To impose the limit on each new shell session, enable the PAM module pam_limits.so.
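The per-session behavior is easy to demonstrate by lowering the limit in a child shell (a sketch; an unprivileged user can lower the soft limit but not raise it past the hard limit):

```shell
# Lower the soft limit inside a child shell; the parent is unaffected
bash -c 'ulimit -n 512; ulimit -n'   # prints 512
ulimit -n                            # parent shell still shows its old value
```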

There are several ways to start a new shell session: login, sudo, and su.
Each needs pam_limits enabled in its PAM config file: /etc/pam.d/login, /etc/pam.d/sudo, or /etc/pam.d/su.
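For instance, enabling pam_limits for su sessions means the PAM config needs a session line like the one below (a sketch; on many distributions this line already exists, sometimes commented out):

```shell
# /etc/pam.d/su — enable per-session resource limits
session    required     pam_limits.so
```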


/etc/security/limits.conf is the configuration file for pam_limits.so to set values.
e.g. increase the max number of open files from 1024 to 4096 for the Apache web server, which is started as user apache:
apache       -       nofile       4096
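The third field of a limits.conf entry can also distinguish soft and hard limits instead of “-” (which sets both). A sketch for the same apache user (the hard value 8192 is just an illustration):

```shell
# /etc/security/limits.conf
# domain    type    item     value
apache      soft    nofile   4096
apache      hard    nofile   8192
```

The soft limit is what a new session starts with; the process may raise it itself up to the hard limit, but only root can go beyond the hard limit.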

pam_limits.so is a session PAM module, so the change becomes effective for new sessions; no reboot is required.
Count the number of open files for a process:
 ls -l /proc/PID/fd | wc -l
or use lsof to count open files, excluding memory-mapped files (mem):
sudo lsof -n -p PID | awk '$4 != "mem" {print}' | wc -l

lsof is slow, but it can count open files across all processes belonging to a user: “lsof -n -u username”.
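A faster alternative to lsof for a system-wide view is to walk /proc directly (a sketch; the fd directories of other users' processes are only readable as root):

```shell
# List the PIDs with the most open file descriptors, highest first
for p in /proc/[0-9]*; do
    n=$(ls "$p/fd" 2>/dev/null | wc -l)
    [ "$n" -gt 0 ] && echo "$n ${p#/proc/}"
done | sort -rn | head
```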
Count the number of open files for the whole system:
 The first value in the fs.file-nr output is the number of allocated file handles; the last is the system maximum (fs.file-max).
$sysctl fs.file-nr
fs.file-nr = 1530       0       65535
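The same counters can be read directly from /proc; a quick sketch that labels the three fields (allocated handles, allocated-but-unused handles, and the maximum):

```shell
# /proc/sys/fs/file-nr holds: allocated  unused  maximum
awk '{print "allocated:", $1, "unused:", $2, "max:", $3}' /proc/sys/fs/file-nr
```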
Test the ulimit.
Testing the open files limit directly in the shell with commands like tail -f is disappointing, because the limit is imposed per process, and each tail -f starts a new process.

The following Perl script can open 10 files in a single process.

#!/usr/bin/perl -w
foreach $i (1..10) {
    $FH = "FH${i}";
    open($FH, '>', "/tmp/Test${i}.log") || die "$!";
    print $FH "$i\n";
}
The open files limit has been set to 8 with “ulimit -n 8”:
$ ulimit -a | grep files
open files                      (-n) 8
The “Too many open files” error appears partway through creating the files:
$ ./testnfiles.pl
Too many open files at ./testnfiles.pl line 4

1 comment:

  1. Thanks, very good article, and quite helpful.

    But please bear in mind that if you want to set ulimits permanently for a process, enabling the PAM module for each new shell session is not sufficient.
    This is because on reboot the process doesn't get executed from a shell.
    To make a process's ulimit settings survive a reboot, you have to add the ulimit values to the process's startup script (/etc/init.d/$process_name)
    by adding "ulimit -n $desired_ulimit_value" at the beginning of the init script.
