Whether by intention or by accident, a single user can eat up all available system resources, such as RAM or disk space. Depending on the nature of your Linux system, you may want to limit your users to what they actually need.

Let's start with something like a fork bomb:

:(){ :|:& };:

The line above can consume all resources almost instantly: it defines a function named ":" that calls itself recursively, piping into another copy of itself and forking an unlimited number of child processes. Root privileges are not even needed to bring the system down this way. Let's limit the number of processes a user can spawn:
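For readability, the same bomb can be sketched with a named function (the name "bomb" is my own; do not actually call it unless a process limit is in place):

```shell
# Equivalent of :(){ :|:& };: with a readable name -- defined but NOT called.
bomb() {
    bomb | bomb &    # call itself twice, in the background, forever
}
# bomb             # uncomment to detonate -- only with ulimit -u set first
```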

NOTE: All limits below apply to the current bash shell session only. To make a change permanent system-wide, set it in /etc/profile (or, more commonly, in /etc/security/limits.conf).
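As a sketch, a snippet like the following could be added to /etc/profile so that limits apply to every login shell; the threshold values here are illustrative only:

```shell
# Example additions to /etc/profile -- values are illustrative only
if [ "$(id -u)" -ne 0 ]; then    # skip root
    ulimit -u 100                # max number of user processes
    ulimit -f 102400             # max file size, in 1024-byte blocks (100 MB)
fi
```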

$ ulimit -u 10
$ :(){ :|:& };:
bash: fork: retry: Resource temporarily unavailable
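Because a ulimit affects only the shell that set it (and its children), you can experiment safely in a throwaway subshell; the parent shell keeps its original limits:

```shell
# Lower the process limit only inside a subshell
(
    ulimit -u 10    # at most 10 processes from here on
    ulimit -u       # prints: 10
)
ulimit -u           # back in the parent: original value, unchanged
```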

This takes care of the fork bomb problem. But what about disk space? The ulimit command can also prevent users from creating files bigger than a certain size:

$ ulimit -f 100
$ cat /dev/zero > file
File size limit exceeded (core dumped)
$ ls -lh file
-rw-rw-r--. 1 linux commands 100K Feb 21 18:27 file 
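In bash, the unit for ulimit -f is 1024-byte blocks, which is why a limit of 100 produces the 100K file above. A write that stays exactly at the limit succeeds; the first write past it gets the process killed (SIGXFSZ). A sketch, run in a subshell so the limit does not stick:

```shell
(
    ulimit -f 100                     # 100 blocks * 1024 bytes = 102400 bytes max
    dd if=/dev/zero of=/tmp/cap bs=1024 count=100 2>/dev/null   # exactly at the cap: OK
    stat -c %s /tmp/cap               # prints: 102400
    rm -f /tmp/cap
)
```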

Some extreme examples:

With ulimit it is also possible to limit the maximum amount of virtual memory available to a process:

$ ulimit -v 1000
$ ls
ls: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
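Again, confining the limit to a subshell keeps the damage local; once the subshell exits, the parent shell behaves normally:

```shell
# With ~1 MB of address space even ls cannot map libc
( ulimit -v 1000; ls ) 2>&1 | head -n 1
ls > /dev/null && echo "parent shell unaffected"
```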

Limit the number of open files (file descriptors) a user can have:

$ ulimit -n 0
$ echo ulimit > command
bash: command: Too many open files
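ulimit distinguishes soft limits (-S), which a user may change, from hard limits (-H), which act as a ceiling: a non-root user can lower either, but can only raise the soft limit back up to the hard one. When a value is set without -S or -H, bash sets both. A sketch:

```shell
ulimit -Sn          # current soft limit on open file descriptors
ulimit -Hn          # hard ceiling; the soft limit may be raised up to this
(
    ulimit -n 0     # no -S/-H flag: both soft and hard limits are lowered
    ulimit -n       # prints: 0
)
```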

To check all your current limits use the -a option:

$ ulimit -a
