Troubleshooting Manual

I get errors about ulimit in splunkd.log

NOTE - Splunk version 4.x reached its End of Life on October 1, 2013. Please see the migration information.

Are you seeing messages like these in splunkd.log while running Splunk on Linux, possibly accompanied by a Splunk crash?

03-03-2011 21:50:09.027 INFO  ulimit - Limit: virtual address space size: unlimited
03-03-2011 21:50:09.027 INFO  ulimit - Limit: data segment size: 1879048192 bytes [hard maximum: unlimited]
03-03-2011 21:50:09.027 INFO  ulimit - Limit: resident memory size: 2147482624 bytes [hard maximum: unlimited]
03-03-2011 21:50:09.027 INFO  ulimit - Limit: stack size: 33554432 bytes [hard maximum: 2147483646 bytes]
03-03-2011 21:50:09.027 INFO  ulimit - Limit: core file size: 1073741312 bytes [hard maximum: unlimited]
03-03-2011 21:50:09.027 INFO  ulimit - Limit: data file size: 2147483646 bytes
03-03-2011 21:50:09.027 ERROR ulimit - Splunk may not work due to low file size limit
03-03-2011 21:50:09.027 INFO  ulimit - Limit: open files: 1024
03-03-2011 21:50:09.027 INFO  ulimit - Limit: cpu time: unlimited
03-03-2011 21:50:09.029 INFO  loader - Splunkd starting (build 95063).

If so, you might need to adjust your server's ulimit settings. Ulimit controls the resources available to a Linux shell and to the processes that shell starts. A dedicated Splunk server needs higher limits than the defaults provide.

To check your limits, type:

ulimit -a

Or restart Splunk and look in splunkd.log for events mentioning ulimit:

index=_internal source=*splunkd.log ulimit
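
Note that ulimit -a reports the limits of your current shell, which are not necessarily the limits an already running splunkd is using. As a quick cross-check on Linux (a sketch, assuming pgrep is available and splunkd is running), you can read the limits of the process itself:

cat /proc/$(pgrep -o splunkd)/limits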

You probably want your new values to stay set even after you reboot. To make the changes persistent, edit the settings in /etc/security/limits.conf; example entries follow the list of critical values below.

The critical values are:

  • The file size (ulimit -f). The size of an uncompressed bucket file can be very large.
  • The data segment size (ulimit -d). With Splunk 4.2+, increase the value to at least 1 GB = 1073741824 bytes.
  • The number of open files (ulimit -n), sometimes called the number of file descriptors. Increase the value to at least 8192 (depending on your server capacity).
  • The max user processes (ulimit -u). Increase it to at least the same value as the open files limit, since it also limits the number of threads Splunk can create (including HTTP threads).
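
Putting these together, the entries for the account that runs Splunk in /etc/security/limits.conf might look like the following. This is a minimal sketch, not a recommendation: the user name splunk and the specific numbers are assumptions, and limits.conf expresses fsize and data in kilobytes, so 1048576 KB corresponds to the 1 GB data segment mentioned above.

# /etc/security/limits.conf -- illustrative entries only
splunk  soft  fsize   unlimited
splunk  hard  fsize   unlimited
splunk  soft  data    1048576
splunk  hard  data    1048576
splunk  soft  nofile  8192
splunk  hard  nofile  8192
splunk  soft  nproc   8192
splunk  hard  nproc   8192

These limits are applied by pam_limits at the next login, so restart Splunk from a fresh session after editing the file.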

Another value that you might need to modify on an older system (but not on most modern systems) is the system-wide limit on open file handles, fs.file-max, in /etc/sysctl.conf.
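
If you do need to raise it, a sketch of the change (the value 200000 is only an illustration; size it for your system) is:

# /etc/sysctl.conf
fs.file-max = 200000

Apply it without rebooting:

sysctl -p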

Why must you increase ulimit to run Splunk? Splunk can concurrently need a file descriptor for every forwarder socket and every deployment client socket. Each bucket can use 10 to 100 files, every search consumes up to 3, and then add every file being indexed and every connected user.
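
As a purely hypothetical illustration (the counts are assumptions, not sizing guidance): an indexer receiving from 200 forwarders, with 40 hot and warm buckets open at roughly 50 files each, and 10 concurrent searches using up to 3 descriptors each, already needs on the order of 200 + (40 × 50) + (10 × 3) = 2,230 file descriptors, well above the default of 1024 shown in the log above.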

This documentation applies to the following versions of Splunk: 4.2.3, 4.2.4, 4.2.5, 4.3, 4.3.1, 4.3.2, 4.3.3, 4.3.4, 4.3.5, 4.3.6, 4.3.7, 5.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5, 5.0.6, 5.0.7, 5.0.8, 5.0.9, 5.0.10, 6.0, 6.0.1, 6.0.2, 6.0.3, 6.0.4, 6.0.5, 6.0.6, 6.1, 6.1.1, 6.1.2, 6.1.3, 6.1.4, 6.2.0


Comments

Thanks, Matthewhaswell! We have an enhancement request to make this better (SPL-79534).

Jlaw splunk
April 25, 2014

Just a note - on CentOS, when the Splunk process is started with its /etc/init.d/splunk script, it doesn't check the /etc/security/limits.conf file since it's not an interactive terminal (due to the PAM configuration). So it is best to edit the file /etc/init.d/functions and add a line saying "ulimit -n 8196" at the beginning of it. This functions script is run by every init.d startup file.

Note that you should also change limits.conf in case you manually restart Splunk, as it will then pick up the values from your shell.

Perhaps splunk engineering would like to just hardcode it into their startup file to check if it's already below 8192 and then to set it to 8192?

Matthewhaswell
September 16, 2013
