Splunk® Enterprise

Troubleshooting Manual

Splunk Enterprise version 7.1 is no longer supported as of October 31, 2020. See the Splunk Software Support Policy for details. For information about upgrading to a supported version, see How to upgrade Splunk Enterprise.
This documentation does not apply to the most recent version of Splunk® Enterprise. For documentation on the most recent version, go to the latest release.

I get errors about ulimit in splunkd.log

Are you seeing messages like these in splunkd.log while running Splunk software on *nix, possibly accompanied by a Splunk software crash?

03-03-2011 21:50:09.027 INFO  ulimit - Limit: virtual address space size: unlimited
03-03-2011 21:50:09.027 INFO  ulimit - Limit: data segment size: 1879048192 bytes [hard maximum: unlimited]
03-03-2011 21:50:09.027 INFO  ulimit - Limit: resident memory size: 2147482624 bytes [hard maximum: unlimited]
03-03-2011 21:50:09.027 INFO  ulimit - Limit: stack size: 33554432 bytes [hard maximum: 2147483646 bytes]
03-03-2011 21:50:09.027 INFO  ulimit - Limit: core file size: 1073741312 bytes [hard maximum: unlimited]
03-03-2011 21:50:09.027 INFO  ulimit - Limit: data file size: 2147483646 bytes
03-03-2011 21:50:09.027 ERROR ulimit - Splunk may not work due to low file size limit
03-03-2011 21:50:09.027 INFO  ulimit - Limit: open files: 1024
03-03-2011 21:50:09.027 INFO  ulimit - Limit: cpu time: unlimited
03-03-2011 21:50:09.029 INFO  loader - Splunkd starting (build 95063).

If so, you might need to adjust the ulimit settings on your server. Ulimit controls the resources available to a *nix shell and to the processes that the shell starts. A machine running Splunk software needs higher limits than most operating systems provide by default.

Check current limits

There are a few ways you can check your current ulimit settings.

  • On the command line, you can type ulimit -a
  • You can restart Splunk Enterprise and look in splunkd.log for events mentioning ulimit:

    index=_internal source=*splunkd.log ulimit

  • The monitoring console has a health check for ulimits. See Access and customize health check in Monitoring Splunk Enterprise.
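
You can also inspect the limits that apply to an already-running splunkd process by reading /proc on Linux. The following commands are a minimal sketch: they assume that the process name is splunkd and that the oldest matching process is the main daemon.

    # Print the effective limits of the running splunkd process (Linux only)
    SPLUNKD_PID=$(pgrep -o splunkd)
    cat /proc/$SPLUNKD_PID/limits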

Set new limits

Your Splunk administrator determines the correct value for each of these settings. To modify the values persistently, edit the limit settings in your operating system. How you do this depends on the version of *nix that you run:

  • For earlier versions of Linux that use the init system, edit the /etc/security/limits.conf file.
  • For later versions of Linux that use systemd, edit either /etc/systemd/system.conf or /etc/systemd/user.conf, or, if Splunk software has been configured to run as a systemd service, /etc/systemd/system/splunkd.service. This path might vary depending on your Linux distribution.

The most important values are:

  • The file size (ulimit -f). The size of an uncompressed bucket file can be very large.
  • The data segment size (ulimit -d). Increase the value to at least 1 GB = 1073741824 bytes.
  • The number of open files (ulimit -n), sometimes called the number of file descriptors. This should be at least 8192. Your machine might concurrently need file descriptors for every forwarder socket, deployment client socket, file to be indexed, and user connected. Each bucket can use 10 to 100 files, every search consumes up to four file descriptors, and KV store can use many file descriptors.
  • The max user processes (ulimit -u). This number must be large enough to accommodate all Splunk threads. The thread count grows with concurrent HTTP connections, parallel pipelines, KV store, and, most of all, concurrent searches. If you must set a limit (rather than unlimited), choose a value in the high thousands or tens of thousands.

Another value that you might need to modify on an older system (but not on most modern systems) is the system-wide limit on open file handles, fs.file-max, in /etc/sysctl.conf.
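
Before you change anything, you can print the current values of these limits from the shell that starts Splunk software. This is a minimal sketch that uses the bash ulimit builtin and the sysctl command; exact output and units vary by shell and distribution.

    # Current soft limits in this shell (add -H to show the hard limits instead)
    ulimit -f    # file size
    ulimit -d    # data segment size
    ulimit -n    # open files
    ulimit -u    # max user processes

    # System-wide limit on open file handles (relevant mostly on older systems)
    sysctl fs.file-max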

See System requirements for use of Splunk Enterprise on-premises in the Installation Manual.

Set limits using /etc/security/limits.conf

These instructions are for machines that use the init system.

  1. Become the root user or an administrative equivalent with su:
    sudo su -
    
  2. Open /etc/security/limits.conf with a text editor.
  3. Add at least the following values, or confirm that they exist. Each entry specifies a domain (* applies to all users), a limit type, an item, and a value:
    *    hard    nofile    64000
    *    hard    nproc     8192
    *    hard    fsize     -1
    
  4. Save the file and exit the text editor.
  5. Restart the machine to complete the changes.
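
After the restart, you can confirm that the new hard limits are in effect by opening a fresh shell as the user that runs Splunk software. A minimal sketch, assuming a bash shell:

    # Hard limits for open files, max user processes, and file size
    ulimit -Hn
    ulimit -Hu
    ulimit -Hf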

Set limits using the /etc/systemd configuration files

These instructions are for machines that use systemd. Editing the /etc/systemd/system.conf file sets system-wide default limits, while editing /etc/systemd/user.conf sets default limits for services that run under a specific user within systemd.

Splunk has not released an official systemd unit file for splunkd, but this Splunk answer details a Splunk community effort to create one, and you can use it as the basis for creating a systemd unit file on your machine.
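
If you do create your own unit file, the Limit* directives in its [Service] section are where these values go. The following fragment is a hedged sketch only, not an official unit file; the unit name and values shown are assumptions to adapt to your environment. After you add or change a unit file, reload systemd (for example, with systemctl daemon-reload) or reboot so that the new limits take effect.

    # Illustrative fragment of /etc/systemd/system/splunkd.service
    # (only the limit-related directives are shown)
    [Service]
    LimitFSIZE=infinity
    LimitNOFILE=64000
    LimitNPROC=8192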

  1. Become the root user or an administrative equivalent with su:
    sudo su -
    
  2. Open /etc/systemd/system.conf with a text editor.
  3. Add at least the following values to the file:
    [Manager]
    DefaultLimitFSIZE=-1
    DefaultLimitNOFILE=64000
    DefaultLimitNPROC=8192
    
  4. Save the file and exit the text editor.
  5. Restart the machine to complete the changes.
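
If splunkd has been configured to run as a systemd service, you can confirm the limits that systemd applies to the unit after the restart. This sketch assumes the unit is named splunkd.service:

    # Show the limits that systemd applies to the splunkd unit
    systemctl show splunkd -p LimitFSIZE -p LimitNOFILE -p LimitNPROC
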
Last modified on 17 October, 2019

This documentation applies to the following versions of Splunk® Enterprise: 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.0.5, 7.0.6, 7.0.7, 7.0.8, 7.0.9, 7.0.10, 7.0.11, 7.0.13, 7.1.0, 7.1.1, 7.1.2, 7.1.3, 7.1.4, 7.1.5, 7.1.6, 7.1.7, 7.1.8, 7.1.9, 7.1.10

