Limit search process memory usage
Splunk software can be configured to automatically terminate search job processes that exceed a configured threshold of resident memory in use.
You might be interested in using this feature if:
- You want to be proactive and avoid a scenario where one runaway search causes one or several of your search peers to crash.
- You already have encountered this scenario and do not want it to happen again.
- The Search Activity: Instance view in the Distributed Management Console shows one or more searches that consume dangerous amounts of physical memory. You can see this information in the Top 10 memory-consuming searches panel.
If you have Splunk Cloud and want to adjust this threshold, you must file a Support ticket, because you do not have access to the limits.conf file.
What does this threshold do?
Enabling this threshold limits the maximum memory permitted for each search process. A search process whose memory use is an outlier is automatically terminated, limiting the damage it can cause.
This threshold uses process resource usage information that is recorded by platform instrumentation, so this feature works only on *nix (including Solaris) and Windows platforms.
- See Introspection endpoint descriptions in the REST API Reference Manual.
- See About the platform instrumentation framework in the Troubleshooting Manual.
Search memory is checked periodically, so a rapid spike might exceed the configured limit before it is detected.
The functionality is wired into the DispatchDirectoryReaper, so stalls in the reaper components also delay how often the memory of searches is checked.
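The behavior of a periodic check can be pictured with a minimal sketch like the following. This is purely illustrative: the names, interval, and threshold are assumptions, not Splunk internals.

```python
# Illustrative sketch of one reaper pass over running search processes.
# THRESHOLD_MB is a stand-in for the configured limit, not a real default.
THRESHOLD_MB = 4000

def reaper_pass(search_processes):
    """One pass: flag any search process over the memory threshold."""
    for proc in search_processes:
        if proc["resident_mb"] > THRESHOLD_MB:
            proc["terminated"] = True  # in Splunk, the process is killed

# Between passes, a fast-growing search can exceed the threshold unnoticed;
# it is only caught on the next pass.
procs = [{"sid": "a", "resident_mb": 1200, "terminated": False},
         {"sid": "b", "resident_mb": 5300, "terminated": False}]
reaper_pass(procs)
print([p["sid"] for p in procs if p["terminated"]])  # ['b']
```

The gap between passes is why a rapid spike can briefly exceed the limit, and why a stalled reaper delays enforcement.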
Enable a search process memory threshold
Search process memory tracking is disabled by default.
1. See How to edit a configuration file in the Admin Manual.
2. Open the limits.conf file.
3. In the [search] stanza, set the enable_memory_tracker attribute to true.
4. Review and adjust the memory limit.
- You can set the limit to an absolute amount or to a percentage of the identified system maximum, using search_process_memory_usage_threshold or search_process_memory_usage_percentage_threshold, respectively. Searches are always tested against both values, and the lower value applies. See limits.conf.spec in the Admin Manual.
5. To enable the configuration changes, restart Splunk Enterprise.
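For example, a configuration like the following enables tracking with both limits. The values shown are illustrative, not recommendations:

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf
[search]
enable_memory_tracker = true
# Absolute limit, in MB.
search_process_memory_usage_threshold = 4000
# Percentage of the identified system maximum.
search_process_memory_usage_percentage_threshold = 25
```

On a host with 8 GB of physical memory, 25% works out to 2000 MB, which is lower than the 4000 MB absolute limit, so the percentage limit is the one that applies.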
Where is threshold activity logged?
If the threshold causes a search process to be stopped on a search head, an error is inserted into the search artifact file
info.csv. If the search is run through Splunk Web, this error message also appears in Splunk Web. The error states that the process was terminated and specifies the limit setting and value.
If the threshold causes a search process to be stopped on a search peer, a WARN message is logged in the
splunkd.log file in the StreamedSearch category.
In both cases, a WARN message is logged in the
splunkd.log file in the DispatchReaper category.
The messages are similar to:
Forcefully terminated search process with sid=... since its [relative physical or physical] memory usage (... [MB or %]) has exceeded the [relative physical or physical] memory threshold specified in limits.conf / ...setting name... (...setting value...)
This documentation applies to the following versions of Splunk Cloud™: 6.6.3, 7.0.3, 7.0.2, 7.0.0