Splunk® Enterprise

Monitoring Splunk Enterprise


Splunk Enterprise version 6.x is no longer supported as of October 23, 2019. See the Splunk Software Support Policy for details. For information about upgrading to a supported version, see How to upgrade Splunk Enterprise.

Access and customize health check

The Monitoring Console comes with preconfigured health checks in addition to its preconfigured platform alerts. You can modify existing health checks or create new ones.

Use the health check

Once you have set up the Monitoring Console, find the health check at Monitoring Console > Health Check. Start the health check by clicking Start at the top right.

Each health check item is an ad hoc search. The searches run sequentially. When one finishes, the next one starts. After all searches have completed, the results are sorted by severity: Error, Warning, Info, Success, or N/A.

Click a severity level at the top of the results to see only results with that severity level. Click a row to see more information, including suggested actions.

For setup instructions, see Multi-instance deployment Monitoring Console setup steps.

Exclude a check

You can disable a specific check to prevent it from running when you click Start.

  1. Navigate to Monitoring Console > Settings > Health Check Items.
  2. Locate the check you wish to disable in the list.
  3. Click Disable.
  4. Reload or navigate back to Monitoring Console > Health Check. You do not need to restart Splunk Enterprise.
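Under the hood, disabling a check writes an override to the local checklist.conf. As a sketch, assuming a check whose stanza is named dmc_check_default_indexes (a hypothetical name; the real stanza names are defined in the app's default checklist.conf), the override would look roughly like this:

```ini
# $SPLUNK_HOME/etc/apps/splunk_monitoring_console/local/checklist.conf
# Hypothetical stanza name; use the name from the app's default checklist.conf.
[dmc_check_default_indexes]
disabled = 1
```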

Modify an existing check

You can modify an existing check. For example, say you want to modify the warning threshold for the Excessive physical memory usage check from 90% to 80%.

  1. Navigate to Monitoring Console > Settings > Health Check Items.
  2. In the Excessive physical memory usage row, click Edit.
  3. Edit the Search and Description fields.
  4. (Optional) Rename the health check item to reflect your modification.
  5. Click Save.

The modifications are saved to your filesystem in $SPLUNK_HOME/etc/apps/splunk_monitoring_console/local/checklist.conf.
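As a sketch, the saved override for the memory-usage example might look like the following stanza. The stanza name, the REST endpoint fields (mem, mem_used), and the exact search are assumptions for illustration, not copied from the shipped configuration; only the attributes you edited are written to the local file.

```ini
# $SPLUNK_HOME/etc/apps/splunk_monitoring_console/local/checklist.conf
# Hypothetical stanza and search, with the warning threshold lowered to 80%.
[dmc_check_memory_usage]
title = Excessive physical memory usage (80% warning threshold)
search = | rest splunk_server_group=dmc_group_* /services/server/status/resource-usage/hostwide \
  | eval metric = round(mem_used / mem * 100, 1) \
  | eval severity_level = if(metric > 80, 2, 0) \
  | rename splunk_server as instance \
  | fields instance metric severity_level
```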

Create a new check

To add a new health check item:

  1. Navigate to Monitoring Console > Settings > Health Check Items.
  2. Click New Health Check Item at the top right.
  3. Fill in the form fields. Be sure to have your search assign a severity level (for example, with an eval that sets a severity_level field). Without a severity level, the search returns results as N/A. See below for guidance on filling in the Search and Drilldown fields.
  4. Click Save.

The modifications are saved to your filesystem in $SPLUNK_HOME/etc/apps/splunk_monitoring_console/local/checklist.conf.

About searches

In single-instance mode, the search string generates the final result. In multi-instance mode, this search generates one row per instance in the result table.

The search results must be in the following format.

instance           metric                       severity_level
<instance name>    <metric number or string>    <level number>

Severity level names correspond to values as follows.

Severity level name    Severity level value
Error                  3
Warning                2
Info                   1
Success                0
N/A                    -1
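For example, a search body along these lines produces the required columns. The error-count thresholds and the use of splunkd.log errors as the metric are illustrative, not a shipped check:

```spl
index=_internal source=*splunkd.log* log_level=ERROR
| stats count AS metric BY host
| rename host AS instance
| eval severity_level = case(metric > 100, 3, metric > 0, 2, true(), 0)
| fields instance metric severity_level
```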

About drilldowns

You can optionally include a drilldown to another search or to a dashboard, for example a monitoring console dashboard, in your health check results.

To include a monitoring console dashboard drilldown:

  1. Choose an existing dashboard in the monitoring console that is relevant to the data you want to run a health check on. This dashboard should have a dropdown to choose an instance or machine.
  2. Use the dropdown to select an instance, then inspect the URL to see which parts specify that instance. Look for &form.splunk_server=$instance$ toward the end of the URL.
  3. Trim the URL to a URI that starts with /app/ and contains a $-delimited variable whose name is a column in your health check's search results. For example: /app/splunk_monitoring_console/distributed_search_instance?form.splunk_server=$search_head$

To include a search drilldown, find or create a search with a $-delimited variable in it. The variable must exist as a column name in the health check search results. For example, a drilldown of index=_internal $instance$ works as long as "instance" is a column name in the health check search.

Most likely, you want a drilldown search of the search you just ran. In that case, replace $rest_scope$ or $hist_scope$ with $instance$, where instance is a column name in the health check search. For example:

`dmc_set_index_internal` host=$instance$ earliest=-60m source=*splunkd.log* (component=AggregatorMiningProcessor OR component=LineBreakingProcessor OR component=DateParserVerbose) (log_level=WARN OR log_level=ERROR)

Proactively alert on health check conditions

Many health check items already have a platform alert counterpart. If you wish to turn another health check into an alert, you can do that, too.

The following health check items have platform alert counterparts.

Indexing status
  Platform alert: Abnormal State of Indexer Processor
  Condition: Tests the current status of the indexer processor on indexer instances.

Excessive physical memory usage
  Platform alert: Critical System Physical Memory Usage
  Condition: Assesses system-wide physical memory usage and raises a warning for servers where it exceeds 90%.

Expiring or expired licenses
  Platform alert: Expired and Soon To Expire Licenses
  Condition: Checks for licenses that are expired or will expire within 2 weeks.

Missing forwarders
  Platform alert: Missing forwarders
  Condition: Checks for forwarders that have not connected to indexers for more than 15 minutes in the recent past.

Near-critical disk usage
  Platform alert: Near Critical Disk Usage
  Condition: Checks whether usage has reached 80% on any disk partition that Splunk Enterprise reads from or writes to.

Saturation of event-processing queues
  Platform alert: Saturated Event-Processing Queues
  Condition: Checks whether one or more indexer queues reports a fill percentage, averaged over the last 15 minutes, of 90% or more.

To create a new alert from a health check when a counterpart does not already exist:

  1. Run the health check.
  2. Click the Open in search spyglass icon.
  3. Modify the search with a where clause.
  4. Save it as a new scheduled search with an alert action, for example, email the admin.
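As a sketch, if the health check search ends with the standard instance, metric, and severity_level columns, the where clause added in step 3 might keep only rows at warning severity or above before you attach the alert action:

```spl
... | fields instance metric severity_level
| where severity_level >= 2
```

The scheduled search then fires the alert action only when at least one instance reports a warning or error.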

Export health check results

You can export the results from a health check item to your local machine to share with others.

To export results from a health check item:

  1. Run the health check.
  2. Click the row with the results you want to export.
  3. In the results table on the right, click the Export icon.
  4. Choose the format of the results (XML, CSV, or JSON) and optionally a file name and number of results.
  5. Click Export.
Last modified on 20 April, 2017

This documentation applies to the following versions of Splunk® Enterprise: 6.5.0, 6.5.1, 6.5.2, 6.5.3, 6.5.4, 6.5.5, 6.5.6, 6.5.7, 6.5.8, 6.5.9, 6.5.10
