Splunk® Enterprise

User Manual


Create an alert

Splunk alerts are based on saved searches that run either on a regular interval (if the saved search is a standard scheduled search) or in real time (if the saved search is a real-time search). When they are triggered, different actions can take place, such as the sending of an email with the results of the triggering search to a predefined list of people.

Splunk enables you to design three broad categories of alerts:

Per-result alert
A real-time alert that is triggered every time the base search returns a result. The base search is a real-time search that runs over all time. Use this alert type if you need to know the moment a matching result comes in. Useful if you need to design an alert for machine consumption (such as a workflow-oriented application). You can throttle these alerts to ensure that they aren't triggered too frequently.
Examples:
  • Trigger an alert for every failed login attempt, but alert at most once an hour for any given username.
  • Trigger an alert when a "file system full" error occurs on any host, but only send notifications for any given host once per 30 minutes.

Scheduled alert
An alert based on a historical search that runs on a regular schedule. This alert type triggers whenever a scheduled run of the search returns results that meet a particular condition that you have configured in the alert definition. Best for cases where immediate reaction to an alert is not a priority. You can use throttling to reduce the frequency of redundant alerts.
Examples:
  • Trigger an alert whenever the number of items sold in the previous day is less than 500.
  • Trigger an alert when the number of 404 errors in any 1 hour interval exceeds 100.

Rolling-window alert
A real-time alert that monitors events within a rolling time window. The base search is a real-time search. Use this alert type to monitor events in real time within a rolling time window of a width that you define, such as a minute, 10 minutes, or an hour. The alert is triggered when its conditions are met by events as they pass through this window in real time. You can throttle these alerts to ensure that they aren't triggered too frequently.
Example:
  • Trigger an alert whenever there are three consecutive failed logins for a user between now and 10 minutes ago, but don't alert for any given user more than once an hour.

For more information about these alert types, see the sections below.

You can also create scheduled searches that fire off an action (such as an email with the results of the scheduled search) each time they are run, whether or not results are returned. For example, you can use this method to set up a "failed logins" report that is sent out each day by email and which provides information on the failed logins over the previous day. For more information, see "Set up alert actions" in this manual.

Note: When Splunk is used out-of-the-box, only users with the Admin role can run and save real-time searches, schedule searches, or create alerts. In addition, you cannot create saved searches unless your role permissions enable you to do so. For more information on managing roles, see "Add and edit roles" in the Admin Manual.

For a series of alert examples showing how you might design alerts for specific situations using both scheduled and real-time searches, see "Alert use cases."

Get started

If you run a search and like the results it returns, you can base an alert on it: click the Create button that appears above the search timeline.


Select Alert... to open the Create alert dialog on the Schedule step. Give the alert a Name and then select the alert Schedule. Use Schedule to determine the type of alert you want to configure. Your choice depends upon what you want to do with your alert.


You can choose from three Schedule options. Select the one that best describes the kind of alert you'd like to create; each option is covered in one of the sections below.

Define alerts that trigger in real-time whenever a result matches

If you want an alert that is triggered whenever a matching result comes in, select a Schedule of Trigger in real-time whenever a result matches from the Schedule step of the Create alert dialog. Then click Next to go to the Actions step.

This "per-result alert" is the most common alert type. It runs in real-time over an "all-time" timespan. It is designed to always alert whenever the search returns a result.

Keep in mind that "events" are not exactly the same as "results": an event is a type of result. If the underlying search for the alert is designed to return individual events, that's fine--the alert will trigger each time a matching event is returned. But you can also design searches that return other kinds of results.

For example, say you're tracking logins to an employees-only section of the store and you've been having problems with people trying to guess employee logins. You'd like to set up an alert that is triggered whenever the search finds a user that has made more than two login attempts. This search would return not events, but the usernames of people who fit the search criteria. Each time the search returns a result (a username), Splunk triggers the alert, which sets off an action, such as the sending of an email with the username to a set of recipients.
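
Such a base search might look like the following sketch, where the search terms (failed login) and the username field are assumptions about how your login events are structured:

failed login | stats count by username | search count > 2

Run in real time over all time, this search returns a result for each username that accumulates more than two failed login attempts, and the per-result alert triggers on each of those results.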

Enable actions for a per-result alert

On the Actions step for a per-result alert, you can enable one or more alert actions. These actions are set off whenever the alert is triggered.

There are three kinds of alert actions that you can enable through the Create alert dialog. For Enable actions you can select any combination of:

  • Send email - Select to have Splunk send an email to a list of recipients that you define. You can opt to have this email contain the results of the triggering search job (the result that triggered the alert, in other words).
  • Run a script - Select to have Splunk run a shell script that can perform some other action, such as the sending of an SNMP trap notification or the calling of an API. You determine which script is run.
  • Show triggered alerts in Alert manager - Have triggered alerts display in the Alert Manager with a severity level that you define. The severity level is non-functional and is for informational purposes only. (Note: In Manager > Searches and Reports, to have trigger records for an alert display in the Alert Manager, you enable the Tracking alert action.)

You can enable any combination of these alert actions for an individual alert.


Note: You can also arrange to have a triggered alert post its results to an RSS feed. To enable this option, go to Manager > Searches and Reports and click the name of the saved search that the alert is based upon. Then, in the Alert actions section, click Enable for Add to RSS.

Important: Before enabling actions, read "More on alert actions," in this manual. This topic discusses the various alert actions at length and provides important information about their setup. It also discusses options that are only available via the Searches and reports page in Manager, such as the ability to send reports with alert emails in PDF format, RSS feed notification, and summary indexing enablement.

Set up throttling for a per-result alert

On the Actions step for a per-result alert, you can define its throttling rules. You use throttling to reduce the frequency at which an alert is triggered. For example, if your alert is being triggered by very similar events approximately 10 times per minute, you can set up throttling rules that cut that frequency down to a much more manageable rate. Throttling rules are especially important for per-result alerts, because they are based on real-time searches and are triggered each time they find a matching result.

Splunk's alert throttling rules enable you to throttle results that share the same field value for a given number of seconds, minutes, or hours. For example, say you have a search that returns results with username=cmonster and username=kfrog every 2-3 minutes or so. You don't want to get these alerts every few minutes; you'd rather not see alerts for any one username value more than once per hour. So here's what you do when you define an alert for this search: You select Throttling, enter username under Suppress for results with the same field value, and select 60 minutes for the throttling interval.


Now, say that after this alert is saved and enabled, the alerting real-time search matches a result where username=cmonster. The alert is triggered by this result and an alert email is sent to you. But for the subsequent hour, all following matching results with username=cmonster are ignored. After 60 minutes are up, the next matching result with username=cmonster triggers the alert again and another alert email goes out to you--and then it won't be triggered a third time for that particular username value until another hour has passed. The Throttling setting ensures you're not swamped by alert emails from results with username=cmonster (or any other username value, for that matter).

You can use the Throttling setting to suppress on more than one field. For example, you might set up a per-result alert that throttles events that share the same clientip and host values.
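
If you manage alerts directly in savedsearches.conf, this kind of suppression corresponds to the alert.suppress settings. Here's a minimal sketch (the stanza name is hypothetical; verify the keys against the savedsearches.conf reference for your version):

[Errors by client and host]
alert.suppress = 1
alert.suppress.fields = clientip,host
alert.suppress.period = 30m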

For example, you could have a real-time search with a 60 second window that alerts every time an event with a "disk error" message appears. If ten events with the "disk error" message come in within that window, ten disk error alerts will be triggered--ten alerts within one minute. And if the alert is set up so that an email goes out to a set of recipients each time it's triggered (see "Enable actions for a per-result alert," above), those recipients probably won't see the stack of alert emails as being terribly helpful.

You can set the Throttling controls so that when one alert of this type is triggered, all successive alerts of the same type are suppressed for the next 10 minutes. When those 10 minutes are up, the alert can be triggered again and more emails can be sent out as a result--but once it is triggered, another 10 minutes have to pass before it can be triggered a third time.

In general, when you set up throttling for a real-time search it's best to start with a throttling period that matches the length of the base search's window, and then expand the throttling period from there, if necessary. This prevents you from getting duplicate notifications for a given event.

Note: Throttling settings are not usually required for scheduled searches, because only one alert is sent out per run of a scheduled search. But if the search is scheduled to run on a very frequent basis (every five minutes, for example), you can set the throttling controls to suppress the alert so that a much larger span of time--say, 60 minutes--has to pass before Splunk can send out another alert of the same type.

Share per-result alerts with others

On the Sharing step for a per-result alert, you can determine how the alert is shared. Sharing rules are the same for all alert types: you can opt to keep the search private, or you can share the alert as read-only to all users of the app you're currently using. For the latter choice, "read-only" means that other users of your current app can see and use the alert, but they can't update its definition via Manager > Searches and reports.


If you have edit permissions for the alert, you can find additional permission settings in Manager > Searches and reports. For more information about managing permissions for Splunk knowledge objects (such as alert-enabled searches) read "Curate Splunk knowledge with Manager" in the Knowledge Manager Manual, specifically the section titled "Share and promote knowledge objects."

Define alerts that are based on scheduled, historical searches

Scheduled alerts are the second most common alert type. If you want to set up an alert that evaluates the results of a historical search that runs over a set range of time on a regular schedule, you would select a Schedule of Run on a schedule once every... on the Schedule step of the Create alert dialog. Then you'd select an interval for the schedule and define the alert triggering conditions.

This schedule can be of any duration that you define. For example, say you handle operations for an online store, and you'd like the store administrators to receive an alert via email if the total number of items sold through the store in the previous day drops below a threshold of 500 items. They don't need to know the moment this happens; they just want to be alerted on a day-by-day basis.


To meet these requirements, you'd create an alert based on a search that runs on a regular schedule. You would schedule the search to run every day at midnight, and configure the alert to be triggered whenever the result returned by a scheduled run of the search--the sum of items sold over the past day--is a number below 500. And you'd set up an email alert action for the search. This ensures that when the alert is triggered, Splunk sends out an email to every email address in the alert action definition, which in this case would be the set of store administrators. Now your administrators will be informed when any midnight run of the search finds that fewer than 500 items were purchased in the previous day.

Schedule the alert

You can schedule a search for a scheduled alert on the Schedule step of the Create Alert dialog. After you select Run on a schedule once every... for the Schedule field, select the schedule interval from the list that appears. By default the schedule will be set to Hour but you can change this to Day, Week, Month, or select Cron schedule to set up a more complicated interval that you define using standard cron notation.


Note: Splunk only uses 5 parameters for cron notation, not 6. The parameters (* * * * *) correspond to minute hour day month day-of-week. The 6th parameter for year, common in other forms of cron notation, is not used.

Here are some cron examples:

*/5 * * * *      : Every 5 minutes
*/30 * * * *     : Every 30 minutes
0 */12 * * *     : Every 12 hours, on the hour
*/20 * * * 1-5   : Every 20 minutes, Monday through Friday
0 9 1-7 * 1      : First Monday of each month, at 9am

If you choose Cron schedule, you also need to enter a Search time range over which you want to run the search. What you enter here overrides the time range you set when you first designed and ran the search. It is recommended that the execution schedule match the search time range so that no overlaps or gaps occur. For example, if you run a search every 20 minutes, its time range should also be 20 minutes (-20m).

Alert scheduling: Best practices

  • Coordinate the alert's search schedule with the search time range. This prevents situations where event data is accidentally evaluated twice by the search (because the search time range exceeds the search schedule, resulting in overlapping event data sets), or not evaluated at all (because the search time range is shorter than the search schedule).
  • Schedule your alerting searches with at least 60 seconds of delay. This practice is especially important in distributed search Splunk implementations where event data may not reach the indexer precisely at the moment when it is generated. A delay ensures that you are counting all of your events, not just the ones that were quickest to get indexed.

The following example sets up a search that runs every hour at the half hour, and which collects an hour's worth of event data, beginning an hour and a half before the search is actually run. This means that when the scheduled search kicks off at 3:30pm, it is collecting the event data that Splunk indexed from 2:00pm to 3:00pm. In other words, this configuration builds 30 minutes of delay into the search schedule. However both the search time range and the search schedule span 1 hour, so there will be no event data overlaps or gaps.

  • Select Run on a schedule once every... for Schedule
  • Select Cron schedule from the interval list.
  • In the cron notation field, enter 30 * * * * to have your search run every hour on the half hour.
  • Set the time range of the search using relative time modifier syntax. Enter an Earliest time value of -90m and a Latest time value of -30m. This means that each time the search runs it covers a period that begins 90 minutes before the search launch time, and ends 30 minutes before the search launch time.
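
For reference, here's a sketch of how this schedule and time range might look in savedsearches.conf, where dispatch.earliest_time and dispatch.latest_time correspond to the Earliest time and Latest time fields (the stanza name is hypothetical):

[Hourly count, delayed 30 minutes]
cron_schedule = 30 * * * *
dispatch.earliest_time = -90m
dispatch.latest_time = -30m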

For more information about the relative time modifier syntax for search time range definition, see "Syntax for relative time modifiers" in this manual.

Alert scheduling: Manage the priority of concurrently scheduled searches

Depending on how you have your Splunk implementation set up, you may only be able to run one scheduled search at a time. Under this restriction, when you schedule multiple searches to run at approximately the same time, Splunk's search scheduler works to ensure that all of your scheduled searches get run consecutively for the period of time over which they are supposed to gather data. However, there are cases where you may need to have certain searches run ahead of others in order to ensure that current data is obtained, or to ensure that gaps in data collection do not occur (depending on your needs).

You can configure the priority of scheduled searches through edits to savedsearches.conf. For more information about this feature, see "Configure the priority of scheduled searches" in the Knowledge Manager manual.
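
As a sketch of what such an edit can involve: savedsearches.conf includes a realtime_schedule setting that determines whether the scheduler may skip runs of a search to stay current (1) or must eventually execute every scheduled run, even when it falls behind (0). Consult the topic referenced above for the authoritative details; treat the following as illustrative only:

# In the search's savedsearches.conf stanza:
# 0 = never skip scheduled runs (prioritize complete data collection)
# 1 = skip runs if necessary to stay current (the default)
realtime_schedule = 0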

Set up triggering conditions for a scheduled alert

An alert based on a scheduled search is triggered when the results returned by a scheduled run of the base search meet specific conditions such as passing a numerical threshold.

These triggering conditions break scheduled alerts into two subcategories: basic conditional scheduled alerts and advanced conditional scheduled alerts. You set these triggering conditions when you set values for the Trigger if field on the Schedule step of the Create Alert dialog.

A basic conditional alert is triggered when the number of events in the results of a scheduled search meets a simple alerting condition (for example, a basic conditional alert can be triggered when the number of events returned by its search is greater than, less than, or equal to a number that you provide).

Note: If you define or edit your alert at Manager > Searches and Reports, the Condition field enables you to additionally set up basic conditional alerts that monitor numbers of hosts or sources amongst the events reviewed by the search.

An advanced conditional alert uses a secondary "conditional" search to evaluate the results of the scheduled or real-time search. With this setup, the alert is triggered when the secondary search returns any results.

Triggering conditions: Define a basic conditional alert

On the Schedule step of the Create Alert dialog, follow this procedure to define a basic conditional alert that notifies you when a simple alerting condition is met by the number of events returned by the search. Schedule must be set to Run on a schedule once every... in order for you to see the fields described in this procedure.


1. Set Trigger if to Number of results. This is the basic condition that triggers the alert.

Note: If you are defining or updating your alert at Manager > Searches and reports the Condition field additionally enables you to select Number of hosts and Number of sources.

2. Choose a comparison operation from the list below the Trigger if field. You can select Is greater than, Is less than, Equals to, or Is not equal to depending on how you want the alerting condition to work.

Note: If you are defining or updating your alert at Manager > Searches and reports you can additionally select rises by and drops by for comparison operations. These operations compare the results of the current run of the search against the results returned by the last run of the search. (Rises by and drops by are not available for alerts based on real-time searches.)

3. In the field adjacent to the comparison operation list, enter an integer to complete the basic conditional alert. This integer represents a number of events.

The alert is triggered if the results for a scheduled run of the search meet the set condition. For example, you can set up an alert for a scheduled search that sends out a notification if the number of results returned by the search is less than a threshold of 10 results.

Note: Basic conditional alerts work differently for scheduled alerts (which use a historical, scheduled search) and rolling-window alerts (which use a real-time search with a rolling time window of a specified width).

When you define a rolling-window alert as a basic conditional alert, the alert is triggered when the set condition occurs within the rolling time window of the search. For example, you could have a rolling-window alert that triggers the moment that the rolling 60 second window for the search has 5 or more results all at the same time.

Just to be clear, this means that if the real-time search returns one result and then four more results five minutes later, the alert is not triggered. But if all five results are returned within a single 60-second span of time, the alert is triggered.

Triggering conditions: Define an advanced conditional alert

In an advanced conditional alert, you define a secondary conditional search that Splunk applies to the results of the scheduled search. If the secondary search returns any results, Splunk triggers the alert. This means that you need to use a secondary search that returns zero results when the alerting conditions are unmet.

By basing your alert conditions on the result of a secondary conditional search, you can define specific conditions for triggering alerts and reduce the incidence of false positive alerts.


Follow this procedure to define an advanced conditional alert:

1. In the Trigger if list, select Custom condition is met. The Custom condition field appears.

2. Enter your conditional search in the Custom condition field.

3. Complete your alert definition by defining alert actions, throttling rules, and sharing (see below).

Every time the base search runs on its schedule, the conditional search runs against the output of that search. Splunk triggers the alert if the conditional search returns one or more results.

Note: If you have set up a Send email action for an alert that uses a conditional search, and you've arranged for results to be included in the email, be aware that Splunk will include the results of the original base search. It does not send the results of the secondary conditional search.

Triggering conditions: Advanced conditional alert example

Let's say you're setting up an alert for the following scheduled search, which is scheduled to run every 10 minutes:

failed password | stats count by user

This search returns the number of incorrect password entries associated with each user name.

What you want to do is arrange to have Splunk trigger the alert when the scheduled search finds more than 10 password failures for any given user. When the alert is triggered, Splunk sends an email containing the results of the triggering search to interested parties.

Now, it seems like you could simply append | search count > 10 to the original scheduled search:

failed password | stats count by user | search count > 10

Unfortunately, if you create a basic conditional alert based on this search--triggered whenever the number of results returned is greater than 0--you won't get quite the behavior you desire. The results emailed to stakeholders would be the output of this filtered search rather than the results of the original search. When the alert is triggered, you want the recipients to have the complete listing that matches each user name to the precise number of failed password attempts that it is associated with.

What you want to do is set Trigger if to Custom condition is met and then place search count > 10 in the Custom condition field (while removing it from the base search). This conditional search runs against the results of the original scheduled search (failed password | stats count by user). With this, the alert is triggered only when the custom condition is met--when one or more user names are associated with more than 10 failed password entries. But when it is triggered, the results of the original search--the list of user names and their failed password counts--are sent to stakeholders via email.
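
To summarize the final configuration:

Base search (scheduled):  failed password | stats count by user
Custom condition search:  search count > 10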

Note: Advanced conditional alerts work slightly differently when you are designing them for rolling-window alerts, which run in real time rather than on a schedule. In the case of the above example, you could design a rolling-window alert with the same base search and get similar results with the custom condition search as long as the rolling window was set to be 10 minutes wide. As soon as the real-time search returns more than 10 failed password entries for the same user within that 10-minute span of time, the alert is triggered.

For more examples of scheduled alerts, see "Alert examples," in this manual.

Enable actions for an alert based on a scheduled search

On the Actions step for a scheduled alert, you can enable one or more alert actions. These actions are set off whenever the alert is triggered.

There are three kinds of alert actions that you can enable through the Create alert dialog. For Enable actions you can select any combination of:

  • Send email - Send an email to a list of recipients that you define. You can opt to have this email contain the results of the triggering search job.
  • Run a script - Run a shell script that can perform some other action, such as the sending of an SNMP trap notification or the calling of an API. You determine which script is run.
  • Show triggered alerts in Alert manager - Have triggered alerts display in the Alert Manager with a severity level that you define. The severity level is non-functional and is for informational purposes only. (Note: In Manager > Searches and Reports, to have trigger records for an alert display in the Alert Manager, you enable the Tracking alert action.)

You can enable any combination of these alert actions for an individual alert.


Note: You can also arrange to have Splunk post the result of the triggered alert to an RSS feed. To enable this option, go to Manager > Searches and Reports and click the name of the search that the alert is based upon. Then, in the Alert actions section, click Enable for Add to RSS.

Important: Before enabling actions, read "More on alert actions," in this manual. This topic discusses the various alert actions at length and provides important information about their setup. It also discusses options that are only available via the Searches and reports page in Manager, such as the ability to send reports with alert emails in PDF format, RSS feed notification, and summary indexing enablement.

Determine how often actions are executed when scheduled alerts are triggered

When you are setting up an alert based on a scheduled, historical search, you use the last two settings on the Actions step--Execute actions on and Throttling--to determine how often Splunk executes actions after an alert is triggered.

Execute actions on enables you to say that once an alert is triggered, the alert actions are executed for All results returned by the triggering search, or Each result returned by the triggering search. In other words, you can set it up so that actions are triggered only once per search, or so that actions are triggered multiple times per search: once for each result returned.

After you choose that setting, you can choose whether or not those actions should be throttled in some manner.

If you select All results, you can say that later alert actions should be suppressed for a specific number of seconds, minutes, or hours.


For example, say you have an alert based on a scheduled search that runs every half hour.

  • It has All results selected and throttling set to suppress actions for two hours, and on the Actions step it has Send email and Show triggered alerts in Alert manager set as alert actions.
  • It runs on its schedule, and the alerting conditions are met, so it is triggered.
  • Alert actions are executed once for all results returned by the alert, per the All results setting. Splunk sends alert emails out to the listed recipients and shows the triggered alert on the Alert manager page.
  • Then the alert runs again on its schedule, a half-hour later, and it's triggered again. But because the alert's throttling controls are set to suppress alert actions for two hours, nothing happens. No more alert actions can be executed until two hours have passed--which means the next three runs of the scheduled search upon which the alert is based are suppressed.

If the alert is triggered again after the two hours are up, the alert actions are executed again, same as they were last time. And then they are suppressed again for another two hours.

If you select Each result, the throttling rules are different, because when the alert is triggered, multiple actions can be executed, one for each result returned by the search. You can throttle action execution for results that share a particular field value.


Here's an example of a scheduled alert that performs actions for every result once it's triggered:

Say you're a system administrator who is responsible for a number of network servers, and when these servers experience excessive amounts of errors over a 24-hour period, you'd like to run a system check on them to get more information. Servers are identified in your logs with the servername field. Here's what you do:

1. Start by designing a script that performs a system check of a network server when it's given a value for the servername field, and which subsequently returns the results of that check back to you.

2. Then design a search that returns the servername values of machines that have experienced 5 or more errors over the past 24 hours, one value per result.

3. Next, open the Create Alert dialog for this search. In the Schedule step, use cron notation to schedule the search so it runs once each day at midnight. Because its defined time range is the past 24 hours, this means it'll return results for the previous day.

4. Set the search up so that it's triggered whenever more than 0 results are returned.

5. On the Actions step, enable the Run a script action and assign your system check script to it.

6. Finally, set up the alert so that when it's triggered, it executes the "run script" action for each result received.

This is a case where you'd likely keep throttling off, since your search is set up to return error counts by servername, and only for those servers that experience 5 or more errors in the 24 hour time range of the search. You won't have to worry about getting multiple results with the same servername value, and there probably isn't much value in suppressing actions when the search is run on a daily basis.
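
As a sketch of steps 2 and 3, the base search might look like this (the error search term is an assumption; the servername field comes from the example):

error | stats count by servername | search count >= 5

You would give this search a time range of the past 24 hours (an Earliest time of -24h) and the cron notation 0 0 * * * so that it runs every day at midnight.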

Share scheduled alerts with others

On the Sharing step for an alert based on a scheduled search, you can determine how the alert is shared. Sharing rules are the same for all alert types: you can opt to keep the search private, or you can share the alert as read-only to all users of the app you're currently using. For the latter choice, "read-only" means that other users of your current app can see and use the alert, but they can't update its definition via Manager > Searches and reports.


If you have edit permissions for the alert, you can find additional permission settings in Manager > Searches and reports. For more information about managing permissions for Splunk knowledge objects (such as alert-enabled searches) read "Curate Splunk knowledge with Manager" in the Knowledge Manager Manual, specifically the section titled "Share and promote knowledge objects."

Define alerts that monitor matching results in real-time within a rolling window

The third alert type enables you to set up alerts that monitor and evaluate events in real time within a rolling window. The moment that alert conditions are met by the events that are returned within this window, the alert is triggered.

The rolling-window alert type is in some ways a hybrid of the first two alert types. Like the per-result alert type, it is based on a real-time search. However, it isn't triggered each time a matching result is returned by the search. Instead, it evaluates all of the events within the rolling window in real time, and is triggered the moment that specific conditions are met by the events passing through that window, just like a scheduled alert is triggered when specific conditions are met by a scheduled run of its search.

When you define a real-time rolling window search, you first set the length of the real-time window, and then you define the triggering conditions. Then you enable the alert actions and define action execution and throttling rules.


For example, you could set up an alert that is triggered whenever there are three failed logins for the same username value over the last 10 minutes (using a real-time search with a 10 minute window). You can also arrange to throttle the alert so that it is not triggered for the same username value more than once an hour.

Set the width of the rolling window

When you define a rolling-window alert, the first thing you do is set the width of the real-time window. Real-time search windows can be set to any number of minutes, hours, or days. In the Schedule step, select Monitor in real-time over a rolling window of... for the Schedule field. Then, in the fields that appear below the Schedule field, define the width of the real-time search window by entering a specific number of minutes, hours, or days.

The alert will monitor events as they pass through this window in real time. For example, you might have an alert that is triggered whenever any particular user fails to log in 4 or more times in a 10 minute span of time. After the alert is set up, various login failure events will pass through this window, but the alert is only triggered when 4 login failures for the same user exist within the span of the 10 minute window at the same time.

If a user experiences three login failures in quick succession, then waits 11 minutes, and then has another login failure, the alert won't be triggered, because the first three events will have passed out of the window by the time the fourth one occurs.

Set up triggering conditions for a rolling-window alert

Rolling-window alerts are triggered when the results within their rolling window meet specific conditions such as passing a numerical threshold.

These triggering conditions break rolling-window alerts into two subcategories: basic conditional rolling-window alerts and advanced conditional rolling-window alerts. You define these triggering conditions when you set values for the Trigger if field on the Schedule step of the Create Alert dialog.

The definition of these triggering conditions is handled in exactly the same manner for rolling-window alerts as for scheduled alerts, except that in this case the alert is triggered whenever results within the rolling window meet the specified triggering conditions.

For example, in the case of a basic conditional alert setup, where the triggering condition involves the search result count being greater than, less than, equal to, or unequal to a specific number, this condition must exist within the rolling real-time window for the alert to be triggered. If the alert is triggered when the number of results becomes greater than 100, then the alert won't be triggered until 101 results exist within the rolling window at the same time.

Advanced conditional searches also work in much the same way for rolling-window alerts as they do for scheduled alerts. The only difference is that in this case the secondary, conditional search runs in real time as well. It continuously evaluates the results returned in the time range window of the original real time search. The alert is triggered at the moment when a single result is returned by the conditional search.
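
For instance, the login-failure example from "Set the width of the rolling window," above, could be expressed this way (a sketch; the search terms and the username field are assumptions):

Base search (real-time, 10 minute window):  failed login | stats count by username
Custom condition search:                    search count >= 4

The conditional search continuously evaluates the running per-user counts within the window, and the alert is triggered the moment any username reaches 4 failures.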

Note: How do you deal with a situation where an alert would continue to be triggered with each new result received? To take the basic conditional alert example, what if there's a rush of matching results and the "greater than 100" condition is met by all of them? It could potentially lead to a corresponding rush of alert emails--something that wouldn't be appreciated by their recipients. That's why you use throttling to keep alerts from being triggered too frequently. See the section on configuring throttling for rolling-window alerts, below.

For a full explanation of triggering condition setup and the differences between basic and advanced conditional alerts, see the section "Set up triggering conditions for a scheduled alert," above, and the three "Triggering conditions" topic sections that follow it.

Enable actions for a rolling-window alert

On the Actions step for a rolling-window alert, you can enable one or more alert actions. These actions are set off whenever the alert is triggered.

There are three kinds of alert actions that you can enable through the Create alert dialog. For Enable actions you can select any combination of:

  • Send email - Send an email to a list of recipients that you define. You can opt to have this email contain the results of the triggering search job.
  • Run a script - Run a shell script that can perform some other action, such as the sending of an SNMP trap notification or the calling of an API. You determine which script is run.
  • Show triggered alerts in Alert manager - Have triggered alerts display in the Alert Manager with a severity level that you define. The severity level is non-functional and is for informational purposes only. (Note: In Manager > Searches and Reports, to have trigger records for an alert display in the Alert Manager, you enable the Tracking alert action.)

You can enable any combination of these alert actions for an individual alert.



Note: You can also arrange to have Splunk post the result of the triggered alert to an RSS feed. To enable this option, go to Manager > Searches and Reports and click the name of the search that the alert is based upon. Then, in the Alert actions section, click Enable for Add to RSS.

Important: Before enabling actions, read "More on alert actions," in this manual. This topic discusses the various alert actions at length and provides important information about their setup. It also discusses options that are only available via the Searches and reports page in Manager, such as the ability to send reports with alert emails in PDF format, RSS feed notification, and summary indexing enablement.

Determine how often actions are executed when rolling-window alerts are triggered

When you are setting up an alert based on a real-time search with a rolling window, you use the last two settings on the Actions step--Execute actions on and Throttling--to determine how often Splunk executes actions after an alert is triggered.

This functionality works for rolling-window alerts in exactly the same way that it does for scheduled alerts, except that in this case you're dealing with alerts that are being triggered in real time.

You can use Execute actions on to say that once the results in the rolling window meet the conditions required to trigger the alert, the alert actions are carried out once for All results triggering the alert or Each result. You might choose the latter if your search is triggered by a small number of results, or if you are using a script to feed information about each individual result into a machine process.

Then you can choose whether or not these actions should be throttled, and if so, how.

If you select All results, you can say that later alert actions should be throttled for a specific number of seconds, minutes, or hours.

If you select Each result, the throttling rules are different, because when the alert is triggered, multiple actions can be executed, one for each result returned by the search. You can throttle action execution for results that share a particular field value.

For example, say you have a rolling-window alert with a 10-minute window that is set to alert whenever any user has more than 10 password failures within that timeframe. The search essentially performs a running count of failed password events per user, and then uses a conditional search to look through those results for users with more than 10 password failures.

  • On the Actions step it has Send email and Show triggered alerts in Alert manager selected.
  • It's set to execute actions on Each result. In this case there should be a single result: a username with a corresponding failed password event count.
  • For Throttling it's set to suppress for results that have the same value of username for an hour. This means that even if a user keeps making failed password attempts every few seconds you won't see more alerts triggered for that same person for another hour.


So you start the alert, and eventually user mpoppins makes more than 10 failed password attempts within a 10-minute span. This triggers the alert, which sends out an email with the username and the event count to the list of recipients. The alert is also recorded in the Alert manager. Even though mpoppins keeps on making failed password attempts, the throttling setting ensures that the alert won't be triggered again by matching events featuring mpoppins for an hour.

For more examples of alerts that use the All results and Each result settings in conjunction with various throttling settings, see the corresponding discussion for scheduled alerts, above.

Share rolling-window alerts with others

On the Sharing step for a rolling-window alert, you can determine how the alert is shared. Sharing rules are the same for all alert types: you can opt to keep the search private, or you can share the alert as read-only to all users of the app you're currently using. For the latter choice, "read-only" means that other users of your current app can see and use the alert, but they can't update its definition via Manager > Searches and reports.


If you have edit permissions for the alert, you can find additional permission settings in Manager > Searches and reports. For more information about managing permissions for Splunk knowledge objects (such as alert-enabled searches) read "Curate Splunk knowledge with Manager" in the Knowledge Manager Manual, specifically the section titled "Share and promote knowledge objects."

Use Manager to update and expand alert functionality

Alerts are essentially saved searches that have extra settings configured for them. If you want to add or change alert settings for a preexisting saved search, go to Manager > Searches and Reports and locate the search you'd like to update (if you're updating an existing alert, look for a search with the same name as the alert). Click the search name to open the search detail page. This page contains all of the settings that you would otherwise see in the Create Alert dialog, plus a few additional settings that are available only in Manager. You may need to select Schedule this search to expose the scheduling and alert setup controls if the search hasn't already been defined as an alert.

When you are in Manager, keep in mind that you can only edit existing searches that you have both read and write permissions for. Searches can also be associated with specific apps, which means that you have to be using that app in order to see and edit the search. For more information about sharing and promoting saved searches (as well as other Splunk knowledge objects), see "Curate Splunk knowledge with Manager" in the Knowledge Manager manual.

Define the alert retention time

You can determine how long Splunk keeps a record of your triggered alerts. You can manage alert expiration for preexisting alerts in Manager > Searches and Reports. On the detail page for an alerting search, use the Expiration field to define the amount of time that an alert's triggered alert records (and their associated search artifacts) are retained by Splunk.

You can choose a preset expiration point for the alert records associated with this search, such as after 24 hours, or you can define a custom expiration time.


Note: If you set an expiration time for the alert records, be sure to also set the alert up so that Splunk keeps records of the triggered alerts on the Alert Manager page. To do this in the Create Alert dialog, select Show triggered alerts in Alert Manager under Enable actions on the Actions step. To set this up in Manager > Searches and Reports, go to the detail page for the alerting search and enable the Tracking alert action.
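
If you work with savedsearches.conf directly, the keys involved here are alert.expires and alert.track. A sketch (verify against the savedsearches.conf reference for your version):

# Keep triggered alert records and their artifacts for 7 days,
# and track triggered alerts in the Alert manager.
alert.expires = 7d
alert.track = 1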

To review and manage your triggered alerts, go to the Alert manager by clicking the Alerts link in the upper right-hand corner of the Splunk interface. For more information about using it, see the "Review triggered alerts" topic in this manual.

Specify fields to show in alerts through search language

When Splunk provides the results of the alerting search job (in an alert email, for example), it includes all the fields in those results. To have certain fields included in or excluded from the results, use the fields command in the base search for the alert.

  • To exclude a field from the search results, pipe your search to fields - $FIELDNAME.
  • To keep only specific fields in the search results, pipe your search to fields + $FIELDNAME.

You can specify multiple fields, separated by commas. Note that a single fields command takes either + or -, not both; to combine inclusion and exclusion, pipe to fields more than once. For example, your Search field may be:

yoursearch | fields - $FIELD1,$FIELD2 | fields + $FIELD3,$FIELD4

The alert you receive will exclude $FIELD1 and $FIELD2 and include only $FIELD3 and $FIELD4.
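
For instance, with hypothetical web access log field names, the following base search alerts on 404 errors while keeping the useragent and referer fields out of the emailed results:

sourcetype=access_* status=404 | fields - useragent,referer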
