
I get errors about ulimit in splunkd.log
Are you seeing messages like these in splunkd.log while running Splunk software on *nix, possibly accompanied by a Splunk software crash?
03-03-2011 21:50:09.027 INFO ulimit - Limit: virtual address space size: unlimited
03-03-2011 21:50:09.027 INFO ulimit - Limit: data segment size: 1879048192 bytes [hard maximum: unlimited]
03-03-2011 21:50:09.027 INFO ulimit - Limit: resident memory size: 2147482624 bytes [hard maximum: unlimited]
03-03-2011 21:50:09.027 INFO ulimit - Limit: stack size: 33554432 bytes [hard maximum: 2147483646 bytes]
03-03-2011 21:50:09.027 INFO ulimit - Limit: core file size: 1073741312 bytes [hard maximum: unlimited]
03-03-2011 21:50:09.027 INFO ulimit - Limit: data file size: 2147483646 bytes
03-03-2011 21:50:09.027 ERROR ulimit - Splunk may not work due to low file size limit
03-03-2011 21:50:09.027 INFO ulimit - Limit: open files: 1024
03-03-2011 21:50:09.027 INFO ulimit - Limit: cpu time: unlimited
03-03-2011 21:50:09.029 INFO loader - Splunkd starting (build 95063).
If so, you might need to adjust your server ulimit. Ulimit controls the resources available to a *nix shell and to the processes that the shell has started. A machine running Splunk software needs higher limits than most systems provide by default.
Check current limits
To check your limits, type:
ulimit -a
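On Linux, you can also inspect the limits that apply to the running splunkd process itself, which helps when splunkd was started from a different shell or by an init system. This is a hedged sketch; the pgrep lookup assumes the process is named splunkd, so adjust it for your environment:
cat /proc/$(pgrep -o splunkd)/limits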
Or restart Splunk Enterprise and look in splunkd.log for events mentioning ulimit:
index=_internal source=*splunkd.log ulimit
Set new limits
Your Splunk administrator determines the correct level and sets each of these values.
You probably want the new values to persist across reboots. To make them permanent, edit the settings in /etc/security/limits.conf.
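Before making the change permanent, you can confirm that higher limits resolve the problem by raising them for the current shell session only and restarting Splunk Enterprise from that shell, because child processes inherit the shell's limits. This is a hedged sketch; the values and the $SPLUNK_HOME path are illustrative:
ulimit -n 65536
ulimit -u 16384
$SPLUNK_HOME/bin/splunk restart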
The most important values are:
- The file size (ulimit -f). The size of an uncompressed bucket file can be very high.
- The data segment size (ulimit -d). Increase the value to at least 1 GB (1073741824 bytes).
- The number of open files (ulimit -n), sometimes called the number of file descriptors. This should be at least 8192. Your machine might concurrently need file descriptors for every forwarder socket, deployment client socket, file to be indexed, and connected user. Each bucket can use 10 to 100 files, each search consumes up to four file descriptors, and the KV store can use many file descriptors.
- The max user processes (ulimit -u). This number must be large enough to accommodate all Splunk threads. The thread count grows with concurrent HTTP connections, parallel pipelines, the KV store, and, most of all, concurrent searches. If you must set a limit (rather than unlimited), choose a value in the high thousands or tens of thousands.
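Putting these values together, here is a hedged example of /etc/security/limits.conf entries. It assumes Splunk Enterprise runs as a user named splunk; the numbers are illustrative starting points, not prescriptive values:
# /etc/security/limits.conf (illustrative values; the "splunk" user name is an assumption)
splunk soft nofile 65536
splunk hard nofile 65536
splunk soft nproc 20480
splunk hard nproc 20480
splunk soft fsize unlimited
splunk hard fsize unlimited
splunk soft data unlimited
splunk hard data unlimited
These entries take effect for new login sessions, so restart Splunk Enterprise from a fresh login after editing the file. If Splunk starts at boot through an init system such as systemd, you might also need to set the equivalent limits in the service configuration, because limits.conf applies only to PAM login sessions.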
Another value that you might need to modify on an older system (but not on most modern systems) is the system-wide limit on the number of open file handles, fs.file-max, in /etc/sysctl.conf.
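If you do need to raise it, a hedged sketch of the change (the number is illustrative) is to add this line to /etc/sysctl.conf:
fs.file-max = 500000
and then apply it without rebooting:
sysctl -p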
See the system requirements for use of Splunk Enterprise on-premises in the Installation Manual.