Limit search process memory usage
You can configure Splunk software to automatically terminate search job processes that exceed a configured threshold of resident memory in use.
You might use this feature if:
- You want to be proactive and avoid a scenario where one runaway search causes one or several of your search peers to crash.
- You already have encountered this scenario and do not want it to happen again.
- In the Distributed Management Console, the Search Activity: Instance view shows one or more searches that consume dangerously large amounts of physical memory. You can see this information in the Top 10 memory-consuming searches panel.
What does this threshold do?
Enabling this threshold limits the maximum memory permitted for each search process. A search process whose memory usage exceeds the threshold is automatically terminated, which limits the damage a runaway search can cause.
This threshold uses process resource usage information that is recorded by platform instrumentation, so the feature works only on the platforms that platform instrumentation supports: Linux, Solaris, and Windows.
- See Introspection endpoint descriptions in the REST API Reference Manual.
- See About the platform instrumentation framework in the Troubleshooting Manual.
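To see the per-process memory data that this feature relies on, you can search the introspection data directly. The following search is a minimal sketch for listing the searches with the highest peak memory use; the index, sourcetype, and field names used here (_introspection, splunk_resource_usage, component=PerProcess, data.mem_used, data.search_props.sid) are assumptions based on the platform instrumentation documentation referenced above:
index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process_type=search
| stats max(data.mem_used) AS peak_mem_used_mb BY data.search_props.sid
| sort - peak_mem_used_mb
| head 10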
Search process memory is checked periodically, so a rapid spike in memory usage might exceed the configured limit before the next check detects it.
The functionality is wired into the DispatchDirectoryReaper, so stalls in the reaper components also delay how often search memory is checked.
Enable a search process memory threshold
Search process memory tracking is disabled by default.
- Splunk Cloud Platform: To enable the search process memory threshold, request help from Splunk Support. If you have a support contract, file a new case using the Splunk Support Portal at Support and Services. Otherwise, contact Splunk Customer Support.
- Splunk Enterprise: To enable the search process memory threshold in the limits.conf file, follow these steps.
- Prerequisites
- Only users with file system access, such as system administrators, can edit configuration files.
- Review the steps in How to edit a configuration file in the Splunk Enterprise Admin Manual.
Never change or copy the configuration files in the default directory. The files in the default directory must remain intact and in their original location. Make changes to the files in the local directory.
- Steps
- Open or create a local limits.conf file in the desired path. For example, use the $SPLUNK_HOME/etc/apps/search/local path to apply this change only to the Search app.
- Under the [search] stanza, change the enable_memory_tracker setting to true.
- Review and adjust the memory limit. You can set the limit to an absolute amount or a percentage of the identified system maximum, using search_process_memory_usage_threshold or search_process_memory_usage_percentage_threshold, respectively. Searches are always tested against both values, and the lower value applies. See limits.conf.spec in the Admin Manual. An example stanza follows these steps.
- To enable the configuration changes, restart Splunk Enterprise.
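For illustration only, a local limits.conf file that enables the tracker and sets both thresholds might look like the following sketch. The numeric values are examples, not recommendations; choose thresholds that fit the physical memory available on your search heads and search peers:
[search]
# Turn on search process memory tracking.
enable_memory_tracker = true
# Absolute limit on physical memory use, in MB (example value).
search_process_memory_usage_threshold = 4000
# Limit as a percentage of total physical memory (example value).
search_process_memory_usage_percentage_threshold = 25
Placing the file in $SPLUNK_HOME/etc/apps/search/local scopes the change to the Search app; a file in $SPLUNK_HOME/etc/system/local applies it system-wide.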
Where is threshold activity logged?
Threshold activity is logged in different places depending on whether the threshold causes a search process to be stopped on a search head or a search peer.
Search head logging
If the threshold causes a search process to be stopped on a search head, an error is written to the search.log file and to the search artifact file info.csv on the search head. The error message is also displayed in Splunk Web below the search bar and is logged in the splunkd.log file in the DispatchReaper category.
The error states that the process was terminated and specifies the limit setting and value. The error message differs depending on whether the physical memory usage (in megabytes) or relative physical memory usage (in percent), or both, exceeded the threshold. The message looks something like this:
The search process with sid=<sid name> was forcefully terminated because both its physical memory usage ( <specified in MB> ) and its relative physical memory usage ( <specified in percent> ) have exceeded the 'search_process_memory_usage_threshold' ( <specified in MB> ) and 'search_process_memory_usage_percentage_threshold' (<specified in percent>) settings in limits.conf.
Search peer logging
If the threshold causes a search process to be stopped on a search peer, an error message is logged in the splunkd.log file in the StreamedSearch category and in the splunkd.log file in the DispatchReaper category.
The error states that the process was terminated and specifies the limit setting and value. The error message differs depending on whether the physical memory usage (in megabytes) or relative physical memory usage (in percent), or both, exceeded the threshold. The message looks something like this:
Forcefully terminated search process with sid=<sid name> since both its physical memory usage ( <specified in MB> ) and the relative physical memory usage ( <specified in percent> ) has exceeded the physical memory thresholds specified in limits.conf / search_process_memory_usage_threshold ( <specified in MB> ) and limits.conf/search_process_memory_usage_percentage_threshold ( <specified in percent> ) respectively.
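To review threshold terminations across your deployment, you can search the internal logs for the messages shown above. This is a minimal sketch; it assumes that splunkd.log events are available in the _internal index with sourcetype=splunkd and an extracted component field, which is typical for Splunk internal logging:
index=_internal sourcetype=splunkd (component=DispatchReaper OR component=StreamedSearch) search_process_memory_usage_threshold
| table _time, host, component, _raw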