Use throttling to suppress alert triggering for a specific time period. Alerts can trigger frequently because of similar search results or scheduling.
Throttling an alert is different from configuring alert trigger conditions. Trigger conditions evaluate an alert's initial search results to check for specified field counts, event timing, or other patterns. To review alert triggering information, see Configuring alert trigger conditions.
When creating or editing an alert, you can enable and configure alert throttling, also known as suppression.
| Alert type | Triggering option | How to configure throttling |
| --- | --- | --- |
| Scheduled | Once | Indicate a suppression period using the time value field and dropdown increments. Time values must be greater than zero. |
| Real-time | Rolling time window | Indicate a suppression period using the time value field and dropdown increments. Time values must be greater than zero. |
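If you manage alerts in configuration files rather than in Splunk Web, the same throttling options map to the `alert.suppress*` settings in `savedsearches.conf`. The following stanza is a minimal sketch; the stanza name, search string, and schedule are examples only:

```ini
# savedsearches.conf (sketch) - a scheduled alert throttled for 4 hours
[Hourly error count]
search = index=main sourcetype=syslog ERROR | stats count
cron_schedule = 0 * * * *
# Trigger when the search returns any results
counttype = number of events
relation = greater than
quantity = 0
# Enable throttling and suppress subsequent triggers for 4 hours
alert.suppress = 1
alert.suppress.period = 4h
```

After an initial trigger, the alert does not trigger again until the suppression period elapses, even if later scheduled runs return matching results.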
If throttling is set for an existing alert action, editing the alert details or the throttle configuration causes the current throttling state to be discarded. This includes any changes to the fields you throttle on, the SPL in the correlation search, the cron schedule, and so on. The change removes the throttle file, which records how long to ignore events, so throttling does not resume until the next alert triggers based on the new parameters.
- An admin uses a real-time alert with per-result triggering to monitor system events, including errors. System events occur twenty or more times per minute. To limit notifications, the admin throttles the alert on field values with a one-hour suppression period.
- A real-time alert with per-result triggering monitors disk errors. Some events in the alert's search results have the same `host` values but can cause multiple alert triggers in a short amount of time. An admin throttles the alert so that, after an initial alert triggers, subsequent triggering is suppressed for ten minutes.
- A scheduled alert searches for sales events on an hourly basis. The alert triggers whenever the number of results rises by 100 and is configured to send an email notification to the sales team. The sales team wants to limit email notifications. An admin throttles the alert so that, after an initial alert triggers and sends an email notification, subsequent triggering is suppressed for three hours.
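The disk-error scenario above relies on field-based throttling: suppression applies per distinct field value rather than to the alert as a whole. In `savedsearches.conf`, this corresponds to the `alert.suppress.fields` setting. This is a sketch; the stanza name and search are hypothetical:

```ini
# savedsearches.conf (sketch) - real-time alert throttled per host
[Disk error monitor]
search = index=os sourcetype=syslog "disk error"
dispatch.earliest_time = rt-30s
dispatch.latest_time = rt
# Suppress repeat triggers for the same host for 10 minutes;
# a new host value still triggers immediately
alert.suppress = 1
alert.suppress.fields = host
alert.suppress.period = 10m
```

With this configuration, a burst of disk errors from one host produces a single trigger every ten minutes, while errors from a different host are not suppressed.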
If you have a set of alerts that run over similar datasets, they can each alert on the same data at the same time. This means that their collective notification frequency can still be high, even when you have throttling rules set up for all of them. You can create suppression groups for these alerts so that when one of them alerts, the entire group of alerts is suppressed.
See Define alert suppression groups to throttle sets of similar alerts.
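In configuration terms, alerts join a suppression group by sharing a group name via the `alert.suppress.group_name` setting in `savedsearches.conf`. The stanza names and searches below are hypothetical; this is a sketch of the grouping idea only:

```ini
# savedsearches.conf (sketch) - two alerts over similar data
# share one suppression group; when either triggers, both are throttled
[CPU spike alert]
search = index=os sourcetype=perf cpu_pct>90
cron_schedule = */5 * * * *
alert.suppress = 1
alert.suppress.period = 1h
alert.suppress.group_name = infra_alerts

[Memory spike alert]
search = index=os sourcetype=perf mem_pct>90
cron_schedule = */5 * * * *
alert.suppress = 1
alert.suppress.period = 1h
alert.suppress.group_name = infra_alerts
```

Because both stanzas name the same group, a trigger from either alert starts the suppression period for the whole group, keeping collective notification frequency down.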
Throttle scheduled and real-time searches
Throttling for alerting works similarly to throttling for scheduled and real-time searches.
If you have scheduled searches that run frequently and you do not want to be notified each time results generate, set the throttling controls to suppress the alert for a longer time period.
For real-time searches, if you configure an alert so that it triggers once when a specific triggering condition is met, you do not need to configure throttling. If the alert triggers for each result, you might need to configure throttling to suppress additional alerts.
When you configure throttling for a real-time search, start with a throttling period that matches the length of the base search time range. Expand the throttling period if necessary. This prevents multiple notifications for a given event.
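As a starting point, the suppression period can simply mirror the real-time window length. The example below is a sketch with a hypothetical stanza name and search:

```ini
# savedsearches.conf (sketch) - 30-second rolling window,
# with a matching 30-second suppression period
[Login failure monitor]
search = index=security action=failure
dispatch.earliest_time = rt-30s
dispatch.latest_time = rt
alert.suppress = 1
# Matches the base search window; expand if you still see
# multiple notifications for the same underlying event
alert.suppress.period = 30s
```

Matching the window length prevents a single event from triggering repeatedly while it remains inside the rolling window.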
Configure alert trigger conditions
Define alert suppression groups to throttle sets of similar alerts
This documentation applies to the following versions of Splunk® Enterprise: 8.1.0, 8.1.1, 8.1.2, 8.1.3, 8.1.4, 8.1.5, 8.1.6, 8.1.7, 8.1.8, 8.1.9, 8.1.10, 8.1.11, 8.1.12, 8.1.13, 8.2.0, 8.2.1, 8.2.2, 8.2.3, 8.2.4, 8.2.5, 8.2.6, 8.2.7, 8.2.8, 8.2.9, 8.2.10, 9.0.0, 9.0.1, 9.0.2, 9.0.3, 9.0.4