Splunk® Enterprise

Search Manual


Limit search process memory usage

Splunk software can be configured to automatically terminate search job processes that exceed a configured threshold of resident memory in use.

You might be interested in using this feature if:

  • You want to be proactive and avoid a scenario where one runaway search causes one or more of your search peers to crash.
  • You have already encountered this scenario and do not want it to happen again.
  • The Search Activity: Instance view in the Distributed Management Console shows one or more searches that consume dangerous amounts of physical memory. You can see this information in the Top 10 memory-consuming searches panel.

If you have Splunk Cloud and want to adjust this threshold, you must file a Support ticket, because you do not have access to the limits.conf file.

What does this threshold do?

Enabling this threshold caps the memory permitted for each search process. A search process whose memory usage is an outlier is automatically terminated, which limits the damage it can do.

This threshold uses process resource usage information that is recorded by platform instrumentation, so this feature works only on Linux, Solaris, and Windows platforms.

Search memory is checked periodically, so a search with a rapid memory spike might exceed the configured limit before the next check detects it.

The functionality is wired into the DispatchDirectoryReaper, so stalls in the reaper components also delay how often the memory of searches is checked.
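The per-search memory figures that this check reads come from the resource usage data that platform instrumentation records. If you want to inspect those figures yourself, the following search is a minimal sketch; it assumes the default _introspection index and the splunk_resource_usage sourcetype that platform instrumentation produces:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| stats max(data.mem_used) AS peak_memory_mb BY data.search_props.sid
| sort - peak_memory_mb

This reports the peak resident memory recorded for each search, which is the same kind of measurement that the threshold is compared against.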

Enable a search process memory threshold

Search process memory tracking is disabled by default. To enable it:

1. See How to edit a configuration file in the Admin Manual.

2. Open the limits.conf file.

3. In the [search] stanza, change the setting for the enable_memory_tracker attribute to true.

4. Review and adjust the memory limit.

You can set the limit to an absolute amount or to a percentage of the identified system maximum, using the search_process_memory_usage_threshold or search_process_memory_usage_percentage_threshold setting, respectively. Searches are always tested against both values, and the lower value applies. See limits.conf.spec in the Admin Manual, and the example configuration after these steps.

5. To enable the configuration changes, restart Splunk Enterprise.
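For example, a [search] stanza that enables the tracker and sets both limits might look like the following sketch. The values are illustrative, not recommendations; check limits.conf.spec for the defaults and valid ranges in your version:

[search]
# Turn on search process memory tracking (off by default).
enable_memory_tracker = true
# Absolute limit on resident memory per search process, in MB.
search_process_memory_usage_threshold = 4000
# Relative limit, as a percentage of total system memory.
# The lower of the two resulting limits is the one that applies.
search_process_memory_usage_percentage_threshold = 25

After the restart, you can confirm the effective values with btool, for example: splunk btool limits list search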

Where is threshold activity logged?

If the threshold causes a search process to be stopped on a search head, an error is inserted into the search artifact file info.csv. If the search is run through Splunk Web, this error message also appears in Splunk Web. The error states that the process was terminated and specifies the limit setting and value.

If the threshold causes a search process to be stopped on a search peer, a WARN message is logged in the splunkd.log file in the StreamedSearch category.

In both cases, a WARN message is logged in the splunkd.log file in the DispatchReaper category.
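To review these terminations after the fact, you can search the internal logs for the WARN messages. This is a minimal sketch that assumes the default _internal index and the standard splunkd sourcetype:

index=_internal sourcetype=splunkd log_level=WARN (component=DispatchReaper OR component=StreamedSearch) "Forcefully terminated search process"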

The messages are similar to:

Forcefully terminated search process with sid=... since
its [relative physical or physical] memory usage (... [MB or %])
has exceeded the [relative physical or physical] memory
threshold specified in limits.conf / ...setting name... (...setting value...)


