Monitor system health
If you have the monitoring console configured, you can use platform alerts and the health check to monitor your system health.
If you do not have the monitoring console configured, this topic points you to a few tools to get started. If you have read and understood the content in the previous several topics, it might be time to think about setting up the monitoring console.
With the monitoring console
Run a health check
The monitoring console comes with preconfigured health checks in addition to its preconfigured platform alerts. You can modify existing health checks or create new ones.
See Access and customize health check in Monitoring Splunk Enterprise.
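The health check runs from the monitoring console UI. As a quick spot check from the search bar, you can also query the splunkd health report, which is a related but separate feature. The following search is a minimal sketch; run it on the instance whose health you want to inspect:
| rest /services/server/health/splunkd splunk_server=local
The result includes an overall health value (green, yellow, or red) for splunkd features on that instance.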
Understand platform alerts
Platform alerts are saved searches included in the monitoring console. Platform alerts notify Splunk Enterprise administrators of conditions that might compromise their Splunk environment. Notifications appear in the monitoring console user interface and can optionally trigger an alert action, such as an email.
To see which platform alerts are enabled:
- In the monitoring console, click Overview.
- Scroll down the dashboard until you see the Triggered Alerts panel.
- Click Enable or Disable to get to the Platform Alerts Setup page.
- Look for an enabled alert.
- Click Advanced edit to see what alert actions exist for that alert. Add your email address if you want to receive email alerts. If you do not set up an alert action like Send an email, you can view any triggered alerts in the monitoring console Overview dashboard.
See Platform alerts in Monitoring Splunk Enterprise for instructions for adding alert actions and for a list of available platform alerts.
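If you prefer the search bar to the Overview dashboard, you can also list alerts that have fired on an instance by querying the fired alerts REST endpoint. This is a minimal sketch; which alerts appear depends on which platform alerts you have enabled:
| rest /services/alerts/fired_alerts splunk_server=local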
Rebuild the forwarder asset table
If a forwarder is decommissioned, it remains on the forwarder dashboards until you rebuild the forwarder asset table. This step might not be immediately necessary, but if you find that your forwarder dashboards contain null results from several forwarders, you can rebuild the asset table.
- In the monitoring console, click Settings > Forwarder Monitoring Setup.
- Click Rebuild forwarder assets.
- Select a time range or leave the default of 24 hours.
- Click Start Rebuild.
Without the monitoring console
Ensure that internal logs are searchable
Make sure that your deployment is following the best practice recommendation of forwarding internal logs, in both $SPLUNK_HOME/var/log/splunk and $SPLUNK_HOME/var/log/introspection, to indexers from all other instance types. See Best practice: Forward search head data in the Distributed Search Manual. These other instance types include:
- search heads
- license masters
- indexer cluster manager nodes
- deployment servers
See What Splunk software logs about itself in Troubleshooting Splunk Enterprise for an overview of the Splunk Enterprise internal log files.
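On each non-indexing instance, forwarding is typically configured in outputs.conf. The following is a minimal sketch that assumes two indexers listening on port 9997; the group name and server addresses are placeholders that you replace with your own:
# outputs.conf on a search head or other non-indexing instance
# Turn off local indexing so data is only forwarded
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
# Forward the internal indexes along with other data
forwardedindex.filter.disable = true
indexAndForward = false

# Placeholder group name and indexer addresses
[tcpout:primary_indexers]
server = 10.10.10.1:9997, 10.10.10.2:9997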
Survey existing monitoring apps
Survey your deployment for apps that monitor system health, whether installed from Splunkbase or custom apps that a previous administrator developed.
- The Fire Brigade app gives you insight into the health of your indexers.
- The popular Splunk on Splunk app (SoS) reached its end of life with Splunk Enterprise 6.3.0 and most of its functionality was incorporated into the monitoring console. If your monitoring strategy relies on SoS, consider upgrading to the monitoring console.
Use default monitoring tools
Even without the monitoring console, Splunk Enterprise includes a few resources for checking your system health. You can view some status information about indexer clustering, search head clustering, KV store, and errors that Splunk software logs internally.
For information on indexer clustering dashboards, see View the manager node dashboard and the two following topics in Managing Indexers and Clusters of Indexers.
You can run status checks on portions of your deployment, such as search head clustering and the KV store, from the Splunk command line.
You can check the components of a search head cluster from the command line of any cluster member. See Use the CLI to view information about a search head cluster in the Distributed Search manual.
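For example, running the following command from the bin directory on a cluster member returns the captain and the status of each member. The credentials shown are placeholders:
./splunk show shcluster-status -auth admin:changeme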
Check KV store status:
- Log into a search head.
- In a terminal window, navigate to the bin directory in the Splunk installation directory.
- Type ./splunk show kvstore-status
See About the CLI in the Admin Manual for information about using the Splunk CLI.
Generate a report of general errors:
- Log into Splunk Web on an indexer cluster manager or search head.
- Click Apps > Search & Reporting.
- Click Reports > Splunk errors last 24 hours.
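If that report is not available on the instance, a search like the following gives a comparable view of internal errors from the last day. This is an approximation, not necessarily the exact search behind the saved report:
index=_internal log_level=ERROR earliest=-24h | stats count by component, sourcetype | sort - count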
Look for custom monitoring tools
In addition to a custom monitoring app, your previous admin might have created custom reports or alerts for system health. To find them:
- In Splunk Web on an indexer cluster manager node or search head, go to Settings > Searches, reports, and alerts.
- Review any alert actions and make sure that they meet your requirements.
- (Optional) Add your email address or a custom script.
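You can also list scheduled searches and their alert actions from the search bar. The following is a minimal sketch that queries the saved searches REST endpoint on the instance you run it from; adjust the fields as needed:
| rest /services/saved/searches splunk_server=local | search actions=* | table title, eai:acl.app, actions, cron_schedule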
Plan a monitoring strategy
Any production Splunk Enterprise deployment requires robust, proactive monitoring to minimize downtime and other problems.
Your Splunk Enterprise monitoring strategy needs to address at least the following points, all of which the monitoring console covers:
- CPU load, memory utilization, and disk usage
- On a *nix system, OS level settings such as THP and ulimits
- Indexing rate
- Skipped searches (an example search follows this list)
- Bad data onboarding practices
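For example, to spot skipped searches without the monitoring console, you can run a search like the following sketch against the scheduler logs in the _internal index:
index=_internal sourcetype=scheduler status=skipped | stats count by savedsearch_name, app | sort - count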
Consider setting up the monitoring console. Most likely, this involves provisioning a new machine to host it. See Multi-instance deployment Monitoring Console setup steps in Monitoring Splunk Enterprise.