How to set ulimit in linux


ulimit is a shell built-in on Linux used to view, set, or limit the resource usage of the current user -- for example, the number of open file descriptors each of the user's processes may hold. Viewing limits and lowering them requires no special privileges, but raising a hard limit requires root access.

    Syntax:

    To check the ulimit value use the following command:

    ulimit -a


    Working with ulimit commands:

1. To display the maximum number of processes available to the logged-in user:

    ulimit -u


2. To show the maximum size of files the user can create:

    ulimit -f


3. To show the maximum resident memory size for the current user:

    ulimit -m


4. To show the maximum virtual memory available to the shell:

    ulimit -v


What are soft limits and hard limits in Linux?

Soft limits are the values actually enforced on a user's or application's processes, while hard limits act as a ceiling on the soft limits: an unprivileged user may raise a soft limit only up to the corresponding hard limit. Hence,

soft limit <= hard limit
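This relationship is easy to demonstrate by lowering limits inside a subshell, so the login shell is unaffected (a sketch; it assumes your current hard limit for open files is at least 2048):

```shell
# Run in a subshell so the parent shell's limits stay untouched.
(
  ulimit -Hn 2048   # lower the hard limit for open file descriptors
  ulimit -Sn 2048   # a soft limit may be raised, but only up to the hard limit
  ulimit -Sn        # prints 2048
)
```

Trying `ulimit -Sn 4096` inside that subshell would fail for an unprivileged user, because the soft limit can no longer pass 2048.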

    Working with Hard and Soft limit values:

1. To display the hard limit for open files. Hard limits cap the maximum value a soft limit can take:

    ulimit -Hn


2. To display the soft limit for open files. Soft limits are the values actually enforced on running processes:

    ulimit -Sn


3. To change a soft limit value, pass the new value with the -S flag; for open files:

ulimit -Sn <value>

To raise the system-wide maximum number of open files (root only), use:

sysctl -w fs.file-max=<value>

Note: Replace <value> with the value you want to set, and remember a soft limit can not exceed the hard limit!
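As a concrete sketch, here is the soft open-files limit being lowered for the current shell (1024 is an assumption -- any value at or below your hard limit works):

```shell
ulimit -Sn 1024   # lower the soft limit for open file descriptors
ulimit -Sn        # prints 1024
```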

4. To display the system-wide maximum number of open file handles:

    cat /proc/sys/fs/file-max


    Administering Unix servers can be a challenge, especially when the systems you manage are heavily used and performance problems reduce availability. Fortunately, you can put limits on certain resources to help ensure that the most important processes on your servers can keep running and competing processes don't consume far more resources than is good for the overall system. The ulimit command can keep disaster at bay, but you need to anticipate where limits will make sense and where they will cause problems.

    It may not happen all that often, but a single user who starts too many processes can make a system unusable for everyone else. A fork bomb -- a denial of service attack in which a process continually replicates itself until available resources are depleted -- is a worst case of this. However, even friendly users can use more resources than is good for a system -- often without intending to. At the same time, legitimate processes can sometimes fail when they are run against limits that are designed for average users. In this case, you need to make sure that these processes get beefed up allocations of system resources that will allow them to run properly without making the same resources available for everyone.

To see the limits associated with your login, use the command ulimit -a. If you're using a regular user account, you will likely see something like this:

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 32767
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 50
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

    One thing you might notice right off the bat is that you can't create core dumps -- because your max core file size is 0. Yes, that means nothing, no data, no core dump. If a process that you are running aborts, no core file is going to be dropped into your home directory. As long as the core file size is set to zero, core dumps are not allowed. This makes sense for most users since they probably wouldn't do anything more with a core dump other than erase it, but if you need a core dump to debug problems you are running into with an application, you might want to set your core file size to unlimited -- and maybe you can.
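A minimal session that turns core dumps on for the current shell only (assuming the hard limit for core file size permits it, which it usually does):

```shell
ulimit -c            # typically prints 0: core dumps disabled
ulimit -c unlimited  # allow core files of any size in this shell
ulimit -c            # prints "unlimited"
```

The change lasts only for this shell and its children; put the command in a startup file or in limits.conf to make it stick.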

$ ulimit -c unlimited

If you are managing a server and want to turn on the ability to generate core dumps for all of your users -- perhaps they're developers and really need to be able to analyze these core dumps -- you have to switch user to root and edit your /etc/security/limits.conf (Linux) or make changes in your /etc/system (Solaris) file.

    If, on the other hand, you are managing a server and don't want any of your users able to generate core dumps regardless of how much they'd like to sink their teeth into one, you can set a limit of 0 in your limits.conf.

    Another limit that is often enforced is one that limits the number of processes that an individual can run. The ulimit option used for this is -u. You can look at your limit as we did above with the ulimit -a command or show just the "nproc" limit with the command ulimit -u.

$ ulimit -u
50

    Once again, your users can change their limits with another ulimit command -- ulimit -u 100 -- unless, of course, they can't. If you have limited them to 50 processes in the limits.conf or system file, they will get an error like this when they try to increase their limits:

$ ulimit -u 100
-bash: ulimit: max user processes: cannot modify limit: Operation not permitted
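Lowering a limit, on the other hand, is always allowed; running the change in a subshell keeps it from sticking to the login shell:

```shell
( ulimit -u 100; ulimit -u )   # prints 100; the parent shell is unchanged
```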

    Limits can also be set up by group so that you can, say, give developers the ability to run more processes than managers. Lines like these in your limits.conf file would do that:

@managers    hard    nproc    50
@developers  hard    nproc    200

    If you want to limit the number of open files, you just use a different setting.

@managers    hard    nofile   2048
@developers  hard    nofile   8192
sbob         hard    nofile   8192

Here we've given two groups and one individual increased open-files limits. These all set hard limits. If you set soft limits as well, the users will get warnings when they reach the lower limit.

@developers  soft    nofile   2048
@developers  hard    nofile   8192
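Whatever limits.conf ends up granting, you can verify what a running process actually received through /proc (Linux-specific; here the shell inspects itself):

```shell
# Soft and hard limits of the current shell, straight from the kernel.
grep "Max open files" /proc/self/limits
```

Substitute any process ID for `self` to audit someone else's process.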

    To see a list of the ulimit options, look at the man page (man ulimit). You will note that ulimit is a bash built-in -- at least on Linux -- and that the following options are available:

-a  All current limits are reported
-c  The maximum size of core files created
-d  The maximum size of a process's data segment
-e  The maximum scheduling priority ("nice")
-f  The maximum size of files written by the shell and its children
-i  The maximum number of pending signals
-l  The maximum size that may be locked into memory
-m  The maximum resident set size (has no effect on Linux)
-n  The maximum number of open file descriptors (most systems do not allow this value to be set)
-p  The pipe size in 512-byte blocks (this may not be set)
-q  The maximum number of bytes in POSIX message queues
-r  The maximum real-time scheduling priority
-s  The maximum stack size
-t  The maximum amount of cpu time in seconds
-u  The maximum number of processes available to a single user
-v  The maximum amount of virtual memory available to the shell

If your limits.conf file permits, you might see limits like these set up for particular applications that really need the extra capacity. In this example, the oracle user is being given the ability to run up to 16,384 processes and open 65,536 files. These lines would be set up in the oracle user's .bash_profile.

if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

    Setting limits can provide a defense against processes that go haywire and malicious processes that try to make your systems unusable. Just make sure that your limits work for you and not against you as you plan how your resources can best be allocated.


    Copyright © 2012 IDG Communications, Inc.