Splunk® Enterprise

Search Manual


Limit search process memory usage

Splunk software tracks search process memory and supports a search process memory threshold, which automatically terminates search job processes that exceed a configured amount of resident memory.

The search process memory threshold limits the maximum memory permitted for each search process and automatically terminates search processes that are outliers in memory size, which limits damage to other processes. This threshold uses process resource usage information that is recorded by platform instrumentation, and as a result, works only on *nix, Solaris, and Windows platforms. For more information, see About Splunk Enterprise platform instrumentation in the Troubleshooting Manual.

Search memory is checked periodically, so a rapid memory spike might exceed the configured limit before the next check detects it. The functionality is built into the DispatchDirectoryReaper, so stalls in the reaper components also cause stalls in how often search memory is checked.

By default, search process memory tracking is turned on for Splunk Cloud Platform, and turned off for Splunk Enterprise.

The search process memory threshold on Splunk Cloud Platform

Search process memory tracking is turned on by default for Splunk Cloud Platform. The following are the default settings for the search process memory threshold on Splunk Cloud Platform:

  • The search_process_memory_usage_percentage_threshold setting is set to 80 percent. This is the percentage of total memory that a search process is allowed to consume. Search processes that violate this threshold are terminated.
  • The search_process_memory_usage_threshold setting, which limits the absolute amount of memory that a search process can use, is set to 0. This means that search processes are allowed to grow unbounded in terms of absolute memory usage.

The default settings for the search process memory threshold can't be changed on Splunk Cloud Platform.
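Expressed as limits.conf settings, the Splunk Cloud Platform defaults correspond to something like the following sketch. This is for illustration only, because these settings can't be edited on Splunk Cloud Platform.

    [search]
    # Search process memory tracking is on by default on Splunk Cloud Platform.
    enable_memory_tracker = true
    # Terminate search processes that consume more than 80 percent of total memory.
    search_process_memory_usage_percentage_threshold = 80
    # 0 means no absolute limit: processes can grow unbounded in absolute terms.
    search_process_memory_usage_threshold = 0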

The search process memory threshold on Splunk Enterprise

You can configure the search process memory threshold on Splunk Enterprise by updating the limits.conf file. You might want to set the search process memory threshold if:

  • You want to be proactive and avoid a scenario where one runaway search causes one or several of your search peers to crash.
  • You have already encountered this scenario and do not want it to happen again.
  • The Search Activity: Instance view in the Distributed Management Console shows one or more searches that consume dangerous amounts of physical memory. You can see this information in the Top 10 memory-consuming searches panel.

Search process memory tracking is turned off by default on Splunk Enterprise. To turn on the search process memory threshold in the limits.conf file, follow these steps.

Prerequisites

  • Only users with file system access, such as system administrators, can edit configuration files.
  • Review the steps in How to edit a configuration file in the Splunk Enterprise Admin Manual.

Never change or copy the configuration files in the default directory. The files in the default directory must remain intact and in their original location. Make changes to the files in the local directory.

Steps

  1. Open or create a local limits.conf file in the desired path. For example, use the $SPLUNK_HOME/etc/apps/search/local path to apply this change only to the Search app.
  2. Under the [search] stanza, set the enable_memory_tracker setting to true.
  3. Review and adjust the memory limit. You can set the limit to an absolute amount or a percentage of the identified system maximum, using search_process_memory_usage_threshold or search_process_memory_usage_percentage_threshold, respectively. Searches are always tested against both values, and the lower value applies. See limits.conf.spec in the Admin Manual, and the example after these steps.
  4. To make the configuration changes take effect, restart Splunk Enterprise.
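For example, the following local limits.conf sketch turns on memory tracking and sets both thresholds. The values 4000 MB and 25 percent are illustrative assumptions, not recommendations; choose limits that match your hardware and workload.

    # $SPLUNK_HOME/etc/apps/search/local/limits.conf
    [search]
    # Turn on search process memory tracking.
    enable_memory_tracker = true
    # Terminate search processes whose resident memory exceeds 4000 MB.
    search_process_memory_usage_threshold = 4000
    # Terminate search processes that use more than 25 percent of total memory.
    # Both values are checked, and the lower effective limit applies.
    search_process_memory_usage_percentage_threshold = 25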

Where is threshold activity logged?

Threshold activity is logged in different places depending on whether the threshold causes a search process to be stopped on a search head or a search peer.

Search head logging

If the threshold causes a search process to be stopped on a search head, an error is inserted into the search.log and the search artifact file info.csv on the search head. The error message is also displayed in Splunk Web below the search bar and logged in the splunkd.log file in the DispatchReaper category.

The error states that the process was terminated and specifies the limit setting and value. The error message differs depending on whether the physical memory usage (in megabytes) or relative physical memory usage (in percent), or both, exceeded the threshold. The message looks something like this:

The search process with sid=<sid name> was forcefully terminated because both its physical memory usage ( <specified in MB> ) and its relative physical memory usage ( <specified in percent> ) have exceeded the 'search_process_memory_usage_threshold' ( <specified in MB> ) and 'search_process_memory_usage_percentage_threshold' (<specified in percent>) settings in limits.conf.

Search peer logging

If the threshold causes a search process to be stopped on a search peer, an error message is logged in the splunkd.log file in the StreamedSearch category and the splunkd.log file in the DispatchReaper category.

The error states that the process was terminated and specifies the limit setting and value. The error message differs depending on whether the physical memory usage (in megabytes) or relative physical memory usage (in percent), or both, exceeded the threshold. The message looks something like this:

Forcefully terminated search process with sid=<sid name> since both its physical memory usage ( <specified in MB> ) and the relative physical memory usage ( <specified in percent> ) has exceeded the physical memory thresholds specified in limits.conf / search_process_memory_usage_threshold ( <specified in MB> ) and limits.conf / search_process_memory_usage_percentage_threshold ( <specified in percent> ) respectively.
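To find these termination events in your own deployment, you can search the internal index for the logging categories named above. The following search is a sketch: it assumes default internal logging and that the logging category appears in the component field of splunkd.log events.

    index=_internal sourcetype=splunkd (component=DispatchReaper OR component=StreamedSearch) "terminated"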