Splunk® Enterprise

Troubleshooting Manual


I get errors about ulimit in splunkd.log

Are you seeing messages like these in splunkd.log while running Splunk software on *nix, possibly accompanied by a crash?

03-03-2011 21:50:09.027 INFO  ulimit - Limit: virtual address space size: unlimited
03-03-2011 21:50:09.027 INFO  ulimit - Limit: data segment size: 1879048192 bytes [hard maximum: unlimited]
03-03-2011 21:50:09.027 INFO  ulimit - Limit: resident memory size: 2147482624 bytes [hard maximum: unlimited]
03-03-2011 21:50:09.027 INFO  ulimit - Limit: stack size: 33554432 bytes [hard maximum: 2147483646 bytes]
03-03-2011 21:50:09.027 INFO  ulimit - Limit: core file size: 1073741312 bytes [hard maximum: unlimited]
03-03-2011 21:50:09.027 INFO  ulimit - Limit: data file size: 2147483646 bytes
03-03-2011 21:50:09.027 ERROR ulimit - Splunk may not work due to low file size limit
03-03-2011 21:50:09.027 INFO  ulimit - Limit: open files: 1024
03-03-2011 21:50:09.027 INFO  ulimit - Limit: cpu time: unlimited
03-03-2011 21:50:09.029 INFO  loader - Splunkd starting (build 95063).

If so, you might need to adjust your server's ulimit settings. Ulimit controls the resources available to a *nix shell and to the processes that the shell starts. A machine running Splunk software needs higher limits than most systems provide by default.

Check current limits

To check your limits, type:

ulimit -a
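
The exact output varies by shell and distribution. On a typical Linux system running bash, it looks something like this, trimmed here to the limits discussed in this topic (your values will differ):

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
open files                      (-n) 1024
stack size              (kbytes, -s) 8192
max user processes              (-u) 15243
virtual memory          (kbytes, -v) unlimited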

Or restart Splunk Enterprise and look in splunkd.log for events mentioning ulimit:

index=_internal source=*splunkd.log ulimit

Set new limits

Your Splunk administrator determines the correct level and sets each of these values.

You probably want your new values to persist across reboots. To set the values persistently, edit the settings in /etc/security/limits.conf. (A sample entry follows the list below.)

The most important values are:

  • The file size (ulimit -f). The size of an uncompressed bucket file can be very large.
  • The data segment size (ulimit -d). Increase the value to at least 1 GB = 1073741824 bytes.
  • The number of open files (ulimit -n), sometimes called the number of file descriptors. This should be at least 8192. Your machine might concurrently need file descriptors for every forwarder socket, deployment client socket, file to be indexed, and user connected. Each bucket can use 10 to 100 files, every search consumes up to four file descriptors, and KV store can use many file descriptors.
  • The max user processes (ulimit -u). This number must be large enough to accommodate all Splunk threads. The thread count grows with concurrent http connections, parallel pipelines, KV store, and most of all concurrent searches. If you must have a limit (other than unlimited), choose a value in the high thousands or tens of thousands.
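
For example, here is a minimal sketch of /etc/security/limits.conf entries that raise these four limits. It assumes splunkd runs as a user named splunk; that user name and the numeric values are illustrative, so adjust them to your deployment:

# Sketch only; tune the values for your deployment.
# Format: <domain> <type> <item> <value>
# fsize and data take values in KB; nofile and nproc are counts.
splunk    hard    fsize     unlimited
splunk    soft    fsize     unlimited
splunk    hard    data      unlimited
splunk    soft    data      unlimited
splunk    hard    nofile    65536
splunk    soft    nofile    65536
splunk    hard    nproc     20480
splunk    soft    nproc     20480

PAM applies these limits when a session starts, so log out and back in (or restart Splunk from a fresh session), then verify with ulimit -a or the splunkd.log events described above.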

Another value that you might need to modify on an older system (but not on most modern systems) is fs.file-max, the system-wide limit on the number of open file handles, set in /etc/sysctl.conf.
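
For example, you could add a line like the following to /etc/sysctl.conf and then apply it with sysctl -p. The value shown is illustrative; check your current setting with sysctl fs.file-max first:

fs.file-max = 500000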

See System requirements for use of Splunk Enterprise on-premises in the Installation Manual.


This documentation applies to the following versions of Splunk® Enterprise: 6.2.0, 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.2.5, 6.2.6, 6.2.7, 6.2.8, 6.2.9, 6.2.10, 6.2.11, 6.2.12, 6.2.13, 6.2.14, 6.2.15, 6.3.0, 6.3.1, 6.3.2, 6.3.3, 6.3.4, 6.3.5, 6.3.6, 6.3.7, 6.3.8, 6.3.9, 6.3.10, 6.3.11, 6.3.12, 6.3.13, 6.3.14, 6.4.0, 6.4.1, 6.4.2, 6.4.3, 6.4.4, 6.4.5, 6.4.6, 6.4.7, 6.4.8, 6.4.9, 6.4.10, 6.4.11


Comments

That worked for me on RHEL 7 also. But I noticed that the limits were only being set when Splunk started at boot time, not when I later ran /opt/splunk/bin/splunk restart.
As a colleague pointed out to me, RHEL 7 uses service-specific ulimits, so you need to use the correct RHEL 7 service startup commands. Instead of /opt/splunk/bin/splunk restart, run systemctl restart splunk -- otherwise these limits are not set.

Gn694
January 5, 2017

If you're having issues with ulimits on RHEL 7+, look at adding systemd config as below:
mkdir -p /etc/systemd/system/splunk.service.d/
vim /etc/systemd/system/splunk.service.d/limits.conf
[Service]
LimitNOFILE=10000
LimitNPROC=10000
LimitDATA=4G
LimitFSIZE=8G

systemd doesn't take the information from /etc/security/limits.conf during the Splunk boot-up script. The above resolved this issue for me.

Hmclaren splunk, Splunker
October 4, 2016
