What are platform alerts?
Platform alerts are saved searches included in the Distributed Management Console (DMC). Platform alerts notify Splunk Enterprise administrators of conditions that might compromise their Splunk environment. Notifications appear in the DMC user interface and can optionally start an alert action such as an email. The included platform alerts get their data from REST endpoints.
Platform alerts are disabled by default.
Enable platform alerts
1. From the DMC Overview page, click Triggered Alerts > Enable or Disable toward the bottom of the page.
2. Click the Enabled check box next to the alert or alerts that you want to enable.
After an alert has triggered, the DMC Overview page displays a notification. You can also view the alert and its results by going to Overview > Alerts > Managed triggered alerts.
You can optionally set an alert action, such as an email notification.
Configure platform alerts and set alert actions
From the DMC, navigate to Overview > Triggered Alerts > Enable or Disable. Find the alert that you want to configure and click Edit.
You can view the default settings and change parameters such as:
- alert schedule
- suppression time
- alert actions (such as sending an email or starting a custom script)
If you enable email notification, be sure that you have defined a valid mail host in Settings > Server settings > Email settings.
You can also view the complete list of default parameters for platform alerts in $SPLUNK_HOME/etc/apps/splunk_management_console/default/savedsearches.conf. If you choose to edit configuration files directly, put your changes in the corresponding local directory instead of the default directory, so that upgrades do not overwrite them.
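For example, to change an alert's schedule, suppression period, and email action without touching the shipped defaults, you might create an override stanza in the local directory. The stanza name and values below are illustrative only; the stanza name must exactly match the alert's name as it appears in default/savedsearches.conf, so check that file first:

```ini
# $SPLUNK_HOME/etc/apps/splunk_management_console/local/savedsearches.conf
# Hypothetical override -- verify the exact stanza name against
# default/savedsearches.conf before using.
[DMC Alert - Critical System Physical Memory Usage]
# Run the alert search every 15 minutes.
cron_schedule = */15 * * * *
# Suppress repeat notifications for 4 hours after the alert fires.
alert.suppress = 1
alert.suppress.period = 4h
# Send an email when the alert triggers.
action.email = 1
action.email.to = splunk-admins@example.com
```

After editing the file, restart Splunk Enterprise (or reload the app configuration) for the changes to take effect.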
Which alerts are included?
To start monitoring your deployment with platform alerts, you must enable the individual alerts that you want. See "Enable platform alerts" above.
| Alert name | Description | For more information |
| --- | --- | --- |
| Abnormal state of indexer processor | Fires when one or more of your indexers reports an abnormal state, either throttled or stopped. | For details on which indexer is in which abnormal state, and to begin investigating causes, see the Indexing Performance by Instance panel of the DMC Indexing Performance: Deployment dashboard. See "Indexing performance: deployment" for information about the dashboard, and "How indexing works." |
| Critical system physical memory usage | Fires when one or more instances exceeds 90% memory usage (by any process, Splunk software or otherwise). On most Linux distributions, this alert can trigger when the OS is engaged in buffer and file system caching activity. The OS releases this memory if other processes need it, so the alert does not always indicate a serious problem. | For details on instance memory usage, navigate to the DMC Resource Usage: Deployment dashboard, and see "Resource usage: deployment" in this manual. |
| Expired and Soon To Expire Licenses | Fires when you have licenses that have expired or will expire within two weeks. | For information about your licenses and license usage, click Licensing in the DMC. |
| Missing forwarders | Fires when one or more forwarders are missing, that is, they have stopped connecting to your deployment. | See the forwarders dashboards in the DMC. |
| Near-critical disk usage | Fires when you have used 80% of your disk capacity. | For more information about your disk usage, navigate to the three DMC Resource Usage dashboards and read the corresponding topics in this manual. |
| Saturated event-processing queues | Fires when one or more of your indexer queues reports a fill percentage, averaged over the last 15 minutes, of 90% or more. This alert can warn you of potential indexing latency. | For more details about your indexer queues, navigate to the two DMC Indexing Performance dashboards and read the corresponding topics in this manual. |
| Search peer not responding | Fires when any of your search peers (indexers) is unreachable. | For the status of all your instances, see the DMC Instances view. |
| Total license usage near daily quota | Fires when you have used 90% of your total daily license quota. | For more information about your license usage, click Licensing in the DMC. |
About search artifacts
In savedsearches.conf, the dispatch.ttl setting dictates that searches from platform alerts keep their search artifacts for four hours. If an alert triggers, however, its search artifact is retained for seven days. This means that the link sent in an email notification, which lets you inspect the search results of a triggered alert, expires after seven days by default.
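As a sketch, the relevant setting in an alert's stanza might look like the following. The stanza name and the exact value are assumptions for illustration; verify both against the shipped default/savedsearches.conf:

```ini
# Illustrative fragment from a platform alert stanza (hypothetical name).
[DMC Alert - Saturated Event-Processing Queues]
# Keep search artifacts for four hours (14400 seconds) when the
# alert does not trigger; triggered alerts are retained longer.
dispatch.ttl = 14400
```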
This documentation applies to the following versions of Splunk® Enterprise: 6.4.0, 6.4.1, 6.4.2, 6.4.3, 6.4.4, 6.4.5, 6.4.6, 6.4.7, 6.4.8, 6.4.9, 6.4.10, 6.4.11