About proactive Splunk component monitoring
Proactive Splunk component monitoring lets you view the health of Splunk Enterprise features from the output of a REST API endpoint. Individual features report their health status through a tree structure that provides a continuous, real-time view of the health of your deployment, without affecting your search load.
You can access feature health information using the splunkd health report in Splunk Web. See View the splunkd health report.
You can also access feature health information programmatically from the server/health/splunkd endpoint. See Query the server/health/splunkd endpoint.
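For example, assuming the default management port of 8089 and an admin account on the local instance (both assumptions; adjust for your environment), a query against this endpoint might look like the following sketch:

    # Hypothetical query against the local management port (default 8089).
    curl -k -u admin:yourpassword \
        "https://localhost:8089/services/server/health/splunkd?output_mode=json"

The JSON response reports the overall health of splunkd along with the features in the status tree.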
For detailed information on the splunkd process, see Splunk Enterprise Processes in the Installation Manual.
Splunk Enterprise health report
The Splunk Enterprise health report records the health status of splunkd in a tree structure, where leaf nodes represent particular Splunk Enterprise features and intermediary nodes categorize the various features. Feature health status is color-coded in four states:
- Green: The feature is functioning properly.
- Yellow: The feature is experiencing a problem.
- Red: The feature has a severe issue and is negatively impacting the functionality of your deployment.
- Grey: Health report is disabled for the feature.
The health status tree structure
The splunkd health status tree has the following nodes:
| Health status tree node | Description |
|---|---|
| splunkd | The top-level node of the status tree shows the overall health status (color) of the splunkd process. The current status of splunkd indicates the state of the least healthy feature in the tree. The REST endpoint retrieves the instance health from the splunkd node. |
| Feature categories | Feature categories represent the second level in the health status tree. Feature categories are logical groupings of features. For example, "BatchReader" and "TailReader" are features that form a logical grouping with the name "File Monitor Input". Feature categories act as buckets for groups of features and reflect the health status of the least healthy feature in the group. For example, if the health status of the "Search Lag" feature is red, then the "Search Scheduler" category displays red. |
| Features | The next level in the status tree is feature nodes. Each node contains information on the health status of a particular feature. Each feature contains one or more indicators that determine the status of the feature. The overall health status of a feature is based on the least healthy color of any of its indicators. |
| Indicators | Indicators are the fundamental elements of the splunkd health report. These are the lowest levels of functionality that are tracked by each feature, and they change color as functionality changes. Indicator values are measured against red or yellow threshold values to determine the status of the feature. See What determines the status of a feature? |
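Using the examples from the table, a portion of the status tree can be pictured like this (a partial sketch for illustration; your instance shows the full set of categories and features):

    splunkd
    ├── File Monitor Input        (feature category)
    │   ├── BatchReader           (feature)
    │   └── TailReader            (feature)
    └── Search Scheduler          (feature category)
        └── Search Lag            (feature, with one or more indicators)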
For a list of supported Splunk Enterprise features, see Supported features.
What determines the status of a feature?
The current status of a feature in the status tree depends on the value of its associated indicators. Indicators have configurable thresholds for yellow and red. When an indicator's value meets threshold conditions, the feature's status changes.
For information on how to configure indicator thresholds, see Set feature indicator thresholds.
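As a rough illustration of how thresholds are expressed, health.conf uses per-feature stanzas with indicator threshold settings. The feature and indicator names below (feature:batchreader, data_out_rate) are assumptions for illustration only; see Set feature indicator thresholds for the settings and defaults that apply to your deployment:

    # Hypothetical health.conf sketch: adjust the yellow and red thresholds
    # for one indicator of an example feature.
    [feature:batchreader]
    indicator:data_out_rate:yellow = 5
    indicator:data_out_rate:red = 10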
For information on how to troubleshoot the root cause of feature status changes, see Investigate feature health status changes.
Default health status alerts
By default, each feature in the splunkd health report generates an alert when a status change occurs, for example from green to yellow, or from yellow to red. You can enable or disable alerts for any feature and set up alert notifications via email, mobile, VictorOps, or PagerDuty in health.conf or via the REST API. For more information, see Configure health report alerts.
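For instance, a minimal health.conf sketch that suppresses status-change alerts for a single feature might look like this. The stanza name (feature:batchreader) and the alert.disabled setting are assumptions for illustration; confirm the exact settings in Configure health report alerts:

    # Hypothetical health.conf sketch: turn off status-change alerts
    # for one example feature.
    [feature:batchreader]
    alert.disabled = 1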
splunkd health status viewpoint
The splunkd health report shows the health of your Splunk Enterprise deployment from the viewpoint of the local instance on which you view the report. For example, in an indexer cluster environment, the cluster manager and peer nodes each show a different set of features contributing to the overall health of splunkd.
Distributed health report
The distributed health report lets you monitor the overall health status of your distributed deployment from a single central instance, such as a search head, search head cluster captain, search peer, or cluster manager. The distributed health report differs from the single-instance health report, which shows feature health status from the viewpoint of the local instance only.
The instances on which the distributed health report is enabled collect data from connected instances across your deployment. When a feature indicator reaches a specified threshold, the feature health status changes, for example, from green to yellow, or yellow to red, in the same way as the single-instance health report.
The distributed health report is enabled by default on search heads, search head cluster captains, search peers, and cluster managers.
Disable the distributed health report
The distributed health report is enabled by default (disabled = 0) in health.conf. If you need to disable the distributed health report, follow these steps:
- Log in to the instance from which you monitor your deployment.
- Edit $SPLUNK_HOME/etc/system/local/health.conf.
- In the [distributed_health_reporter] stanza, set disabled = 1. For example:

      [distributed_health_reporter]
      disabled = 1
Set up distributed health report alert actions
You can set up alert actions, such as sending email or mobile alert notifications, directly on the distributed health report's central instance.
This lets you configure alert actions in one location, which simplifies configuration compared to the single-instance health report, where you must set up alert actions on each instance individually.
The distributed health report must be enabled to receive alerts from your deployment.
To set up distributed health report alert actions:
- Log in to the distributed health report's central instance. In most cases, this is the cluster manager or search head cluster captain.
- Edit $SPLUNK_HOME/etc/system/local/health.conf.
- Make sure the distributed health report is enabled, as shown:

      [distributed_health_reporter]
      disabled = 0

- Add the specific alert actions that you want to run when the distributed health report receives an alert. For example, to send email alert notifications, add the [alert_action:email] stanza:

      [alert_action:email]
      disabled = 0
      action.to = <recipient@example.com>
      action.cc = <recipient_2@example.com>
      action.bcc = <other_recipients@example.com>
For details on how to configure available alert actions, see Set up health report alert actions.
If you enable alerts on both the central instance and on individual instances, the distributed health report sends duplicate alerts. To avoid duplicate alerts, disable alerts on individual instances.
View distributed health report using REST API
You can view the health status of your distributed deployment using the Splunk REST API.
To view the overall health status of your distributed deployment, send an HTTP GET request to:
server/health/deployment
For endpoint details, see server/health/deployment in the REST API reference.
To view the health status of individual features reporting to the distributed health report (for example, indexers), send an HTTP GET request to:
server/health/deployment/details
For endpoint details, see server/health/deployment/details in the REST API reference.
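For example, assuming the default management port of 8089 and an admin account on the central instance (both assumptions; adjust for your environment), the two requests might look like this sketch:

    # Overall health of the distributed deployment
    curl -k -u admin:yourpassword \
        "https://localhost:8089/services/server/health/deployment?output_mode=json"

    # Per-feature details from instances reporting to the distributed health report
    curl -k -u admin:yourpassword \
        "https://localhost:8089/services/server/health/deployment/details?output_mode=json"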