What are platform alerts?
Platform alerts are saved searches included in the distributed management console (DMC). Platform alerts notify Splunk Enterprise administrators of conditions that might compromise their Splunk Enterprise environment. The included platform alerts get their data from REST endpoints.
Platform alerts are disabled by default.
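Because platform alerts are saved searches over REST endpoints, their underlying searches use the SPL rest command. As a hedged illustration only, a search of this general shape queries a real endpoint (server/info) for basic instance data; it is not the exact search that any included alert runs:

```
| rest splunk_server=* /services/server/info
| fields splunk_server, version, os_name, numberOfCores
```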
Enable platform alerts
- Configure your distributed management console. From the DMC, click Setup. See "Configure the distributed management console."
- Set up alerting notifications for alerts on your deployment.
1. From the DMC Overview page, click Alerts > Enable or Disable.
2. Click the Enabled check box next to the alert or alerts that you want to enable.
After an alert has triggered, you can view the alert and its results by going to Overview > Alerts > Managed triggered alerts.
See Configure platform alerts, next, for alert actions that you can configure, such as email notifications.
Configure platform alerts
From the DMC, navigate to Overview > Alerts > Enable or Disable. Find the alert that you want to configure and click Edit. You can view the default settings and change parameters such as:
- alert schedule
- suppression time
- alert actions (such as emails)
You can also view the complete list of default parameters for platform alerts in
$SPLUNK_HOME/etc/apps/splunk_management_console/default/savedsearches.conf. If you choose to edit configuration files directly, put your changes in the app's local directory instead of the default directory, because upgrades can overwrite files in default.
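For example, to adjust an alert's schedule, suppression time, and email action without touching the default file, you might create a local override like the following sketch. The stanza name and email address are illustrative; copy the exact stanza name from the default savedsearches.conf:

```
# $SPLUNK_HOME/etc/apps/splunk_management_console/local/savedsearches.conf

# Stanza name is illustrative; use the alert's exact name from the default file.
[DMC Alert - Saturated Event-Processing Queues]
# Run the alert search every 10 minutes.
cron_schedule = */10 * * * *
# Suppress repeat triggers for 4 hours.
alert.suppress = 1
alert.suppress.period = 4h
# Send an email notification when the alert triggers.
action.email = 1
action.email.to = admin@example.com
```

Settings in local take precedence over the same settings in default, so only the parameters that you override need to appear in the local file.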
Which alerts are included?
To start monitoring your deployment with platform alerts, you must enable the individual alerts that you want. See "Enable platform alerts."
| Alert name | Description | For more information |
| --- | --- | --- |
| Abnormal state of indexer processor | Fires when one or more of your indexers reports an abnormal state, either throttled or stopped. | For details on which indexer is in which abnormal state, and to begin investigating causes, see the Indexing Performance by Instance panel of the DMC Indexing Performance: Deployment dashboard. See "Indexing performance: deployment" for information about the dashboard, and "How indexing works." |
| Critical system physical memory usage | Fires when one or more instances exceed 90% memory usage. On most Linux distributions, this alert can trigger when the OS is using memory for buffers and filesystem caching. The OS releases this memory if other processes need it, so the alert does not always indicate a serious problem. | For details on instance memory usage, see the DMC Resource Usage: Deployment dashboard, and "Resource usage: deployment" in this manual. |
| Near-critical disk usage | Fires when you have used 80% of your disk capacity. | For more information about your disk usage, see the three DMC Resource Usage dashboards and the corresponding topics in this manual. |
| Saturated event-processing queues | Fires when one or more of your indexer queues reports a fill percentage, averaged over the last 15 minutes, of 90% or more. This alert can warn you of potential indexing latency. | For more details about your indexer queues, see the two DMC Indexing Performance dashboards and the corresponding topics in this manual. |
| Search peer not responding | Fires when any of your search peers (indexers) is unreachable. | For the status of all your instances, see the DMC Instances view. |
| Total license usage near daily quota | Fires when you have used 90% of your total daily license quota. | For more information about your license usage, click Licensing in the DMC. |
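The threshold logic behind an alert like near-critical disk usage can be approximated with a rest search over the partitions-space endpoint, whose entries report capacity and free space per mount point (in MB). The following is a sketch of that logic, not the shipped alert search:

```
| rest splunk_server=* /services/server/status/partitions-space
| eval usage_pct = round((capacity - free) / capacity * 100, 1)
| where usage_pct >= 80
| fields splunk_server, mount_point, usage_pct
```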
About search artifacts
In savedsearches.conf, the dispatch.ttl setting specifies that the searches from platform alerts keep their search artifacts for four hours. If an alert triggers, however, its search artifact is kept for seven days. This means that the link sent in an alert email to inspect the search results expires after seven days (by default).
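In configuration terms, the four-hour retention corresponds to a setting of the following form. The fragment illustrates the setting's format (the value is in seconds) rather than reproducing the shipped stanza verbatim:

```
# In a platform alert's stanza in savedsearches.conf:
# keep the search artifact for 14400 seconds (4 hours).
dispatch.ttl = 14400
```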
This documentation applies to the following versions of Splunk® Enterprise: 6.2.0, 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.2.5, 6.2.6, 6.2.7, 6.2.8, 6.2.9, 6.2.10, 6.2.11, 6.2.12, 6.2.13, 6.2.14, 6.2.15