Splunk® Enterprise

Troubleshooting Manual

Troubleshoot high memory usage

Problem

Your Splunk platform instance goes down because it runs out of memory.

Or, the Monitoring Console alerts you to excessive physical memory usage (either through a platform alert or a health check).

Causes

To diagnose the cause of the excessive memory usage, confirm whether Splunk software is responsible, examine how the memory usage changes over time, and identify which process class is involved.

First, determine whether Splunk software is responsible for the excessive memory usage:

  1. Navigate to the Monitoring Console Resource Usage: Machine dashboard. See About the Monitoring Console in Monitoring Splunk Enterprise.
  2. At the top of the historical data section, select a time range covering the 30 to 60 minutes leading up to the issue.
  3. Scroll down to the Physical Memory Usage panel. You can display the median, average, or maximum values.
  4. Verify that the Splunk software physical memory usage nears or exceeds the capacity of the machine. If the machine runs an operating system supported by platform instrumentation, this is easy to determine at a glance. See About the platform instrumentation framework.
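
To run the same check without the dashboard, you can query the resource-usage introspection data that feeds it. The following is a minimal sketch, assuming default introspection collection is enabled and that the Hostwide component reports data.mem (total physical memory, in MB) and data.mem_used (memory in use, in MB); verify the field names on your version.

    index=_introspection sourcetype=splunk_resource_usage component=Hostwide
    | timechart span=1m max(data.mem_used) AS machine_mem_used_mb, max(data.mem) AS machine_mem_total_mb

If machine_mem_used_mb approaches machine_mem_total_mb over the window you selected, the machine is memory constrained; the remaining steps attribute that usage to Splunk software and to a process class.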


Next, look at the Physical Memory Usage panel to assess the memory usage issue, and note the growth pattern, that is, the shape of the data over time. The growth pattern helps distinguish between a leak and sustained high usage as follows:

  • A memory leak grows steadily and does not go away until you restart splunkd. A leak is likely a Splunk software defect.
  • If the memory usage is not a leak but instead grows and then plateaus at a high level (that is, a level near capacity), your Splunk software workload might simply require that much memory.
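
To see the growth pattern for one process, chart its memory usage over a long time range so that the shape is obvious. This is a sketch, assuming the per-process introspection fields data.process and data.mem_used (in MB); note that on some platforms search processes also report a process name of splunkd, so narrow the filter with data.args if you need only the main server process.

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process=splunkd
    | timechart span=5m max(data.mem_used) AS splunkd_mem_mb

A line that climbs steadily across the whole range suggests a leak. A line that rises and then flattens near capacity suggests sustained high demand.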


Finally, identify which process class (search, main splunkd, or other) is involved as follows:

  1. Navigate to the Monitoring Console Resource Usage: Instance dashboard.
  2. Scroll down to the Physical memory usage by process class panel. Most Splunk software out-of-memory situations are search related, but not all.
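
The panel groups memory usage by process class. To approximate it as a search, the following sketch assumes that the introspection field data.process_type carries the class and that resource usage is sampled roughly every 10 seconds (the default); verify both against your version.

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess
    | bin _time span=10s
    | stats sum(data.mem_used) AS mem_mb BY _time, data.process_type
    | timechart span=5m max(mem_mb) BY data.process_type

The intermediate stats command totals memory across concurrent processes of the same class at each sample, and the timechart then plots the per-class peak over time.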

Solution

If you confirm that Splunk software is not using a large amount of memory, consult your sysadmin about pruning non-Splunk processes.

For cases that are related to Splunk software but not attributed to search processes (especially if the main splunkd process grows in memory usage over time), contact Splunk Support.
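
Before you file the case, generate a diagnostic file on the affected instance so that Support has the resource-usage data to work from. The splunk diag command ships with Splunk Enterprise; run it from the instance (shown here for *nix):

    $SPLUNK_HOME/bin/splunk diag

Attach the resulting diag archive to your Support case, along with the time range over which the memory growth occurred.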

If you have attributed the excessive memory usage to searches, in Splunk Web select Settings > Monitoring Console > Search > Activity > Search activity: Instance. Scroll down to the Top 20 Memory-Consuming Searches panel to identify and review the individual offending searches.
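
If you prefer to query the data directly, the following sketch approximates that panel from the per-process introspection data. It is an illustration only: the data.search_props.* field names follow the resource-usage introspection schema, so verify them on your version before relying on the output.

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
    | stats max(data.mem_used) AS peak_mem_mb, latest(data.search_props.user) AS user BY data.search_props.sid
    | sort - peak_mem_mb
    | head 20

The following is a list of solutions to the most common search memory usage problems: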

  • If a few of your searches are using a lot of memory, make sure they are as efficient as possible. Filter early in a search and choose search commands that use memory efficiently, as in the first sketch after this list. See Quick tips for optimization and Write better searches in the Search Manual.
  • Consider limiting the memory usage per search, as in the limits.conf sketch after this list. See Limit search process memory usage in the Search Manual.
  • Assess your hardware provisioning. See System requirements in the Installation Manual.
  • Note that certain Splunk apps have additional system requirements. For example, Enterprise Security requires a search head with significantly more memory than Splunk Enterprise requires by default. See Deployment planning in the Enterprise Security documentation.
  • If you have a single search using unreasonable amounts of memory, and you are not sure why, check Known Issues and file a Support ticket. The problem is especially likely to be caused by a defect if the search process displays a growth pattern indicating a leak.
  • Remember not to schedule all your reports on the hour. Offset scheduled reports to avoid reaching your concurrent search limit.
  • Enable the "Critical system physical memory usage" platform alert so that you are notified the next time this happens. See Enable and configure platform alerts in the Monitoring Splunk Enterprise Manual.
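
Two of the items above lend themselves to short sketches. For early filtering, compare the following pair of searches over a hypothetical web index: both return the same count of status=500 events, but the second moves the filter into the base search so that far less data flows through the search pipeline.

    index=web | stats count BY status | search status=500
    index=web status=500 | stats count

For the per-search memory cap, the following limits.conf stanza uses the settings documented in Limit search process memory usage in the Search Manual. The threshold values shown are illustrative, not recommendations; tune them to your hardware.

    # $SPLUNK_HOME/etc/system/local/limits.conf
    [search]
    # The thresholds below have no effect unless the memory tracker is enabled.
    enable_memory_tracker = true
    # Terminate a search process whose physical memory exceeds this many MB.
    search_process_memory_usage_threshold = 4000
    # Or this percentage of total machine memory.
    search_process_memory_usage_percentage_threshold = 25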