Splunk® Enterprise

Alerting Manual


Define scheduled alerts

Note: This topic explains how to define scheduled alerts, one of three types of alerts that Splunk provides. For an overview of the alert types, and more information about getting started with alert creation, go to "About alerts" in this manual.

Scheduled alerts are the second most common alert type. If you want to set up an alert that evaluates the results of a historical search that runs over a set range of time on a regular schedule, select Scheduled as the Alert Type in the Save As Alert dialog box. You then select a Time range and Schedule, and define the alert's Trigger condition.

This schedule can be of any duration that you define. For example, say you handle operations for an online store, and you'd like the store administrators to receive an email alert if the total number of items sold through the store in the previous day drops below a threshold of 500 items. They don't need to know the moment this happens; they just want to be alerted on a day-by-day basis.


To meet these requirements, you'd create an alert based on a search that runs on a regular schedule. You would schedule the search to run every day at midnight, and configure the alert to trigger whenever a scheduled run of the search returns a result--the sum of items sold over the past day--that is below 500. You'd also set up an email alert action for the alert. This ensures that when the alert triggers, Splunk sends an email to every address in the alert action definition, which in this case would be the set of store administrators. Now your administrators are informed whenever a midnight run of the search finds that fewer than 500 items were purchased in the previous day.
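If you prefer to work with configuration files, an alert like this can also be expressed as a stanza in savedsearches.conf. The following is only a sketch: the stanza name, base search, field names, and email address are placeholders, and you should verify the attribute names against savedsearches.conf.spec for your version.

[Daily store sales threshold]
# Placeholder base search: sums the items sold during the search's time range
search = sourcetype=store_sales | stats sum(quantity) AS items_sold
# Run the search on a schedule: every day at midnight, over the previous day
enableSched = 1
cron_schedule = 0 0 * * *
dispatch.earliest_time = -1d@d
dispatch.latest_time = @d
# Trigger when this secondary conditional search returns results
alert_condition = search items_sold < 500
# Email the store administrators, including the search results, when the alert triggers
action.email = 1
action.email.to = storeadmins@example.com
action.email.sendresults = 1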

Schedule the alert

You can schedule a search for a scheduled alert in the Save As Alert dialog. After you click the Scheduled button, the Time range, Schedule, and Trigger condition controls appear. The Time range button allows you to select the time period. By default, it is Hour, but you can change it to the following:

  • Day
  • Week
  • Month
  • Cron schedule - this option allows you to set up a more complicated interval using standard cron notation.


Note: Splunk cron notation uses five fields, not six. The fields (* * * * *) correspond to minute, hour, day-of-month, month, and day-of-week. Splunk does not use a sixth field for year, which is common in other forms of cron notation.

Here are some "Cron schedule" examples:

*/5 * * * *       : Every 5 minutes
*/30 * * * *      : Every 30 minutes
0 */12 * * *      : Every 12 hours, on the hour
*/20  * * * 1-5   : Every 20 minutes, Monday through Friday
0 9 1-7 * 1       : First Monday of each month, at 9am.

If you choose Cron schedule, you must also enter a Search time range over which you want to run the search. What you enter here overrides the time range you set when you first designed and ran the search. It is recommended that the execution schedule match the search time range so that no overlaps or gaps occur. For example, if you run a search every 20 minutes, set the search's time range to the previous 20 minutes (-20m).
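As an illustration, the relevant attributes in savedsearches.conf for such a search might look like the following sketch. The attribute names and time modifiers should be verified against savedsearches.conf.spec for your version.

# Run every 20 minutes, over the previous 20 minutes, so that consecutive runs
# neither overlap nor leave gaps
cron_schedule = */20 * * * *
dispatch.earliest_time = -20m
dispatch.latest_time = now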

Alert scheduling best practices

  • Coordinate the alert's search schedule with the search time range. This prevents situations where event data is accidentally evaluated twice by the search (because the search time range exceeds the search schedule, resulting in overlapping event data sets), or not evaluated at all (because the search time range is shorter than the search schedule).
  • Schedule your alerting searches with at least 60 seconds of delay. This practice is especially important in distributed search Splunk implementations where event data might not reach the indexer precisely at the moment when it is generated. A delay ensures that you are counting all of your events, not just the ones that were quickest to get indexed.

The following example sets up a search that runs every hour at the half hour, and which collects an hour's worth of event data, beginning an hour and a half before the search is actually run. This means that when the scheduled search kicks off at 3:30pm, it is collecting the event data that Splunk indexed from 2:00pm to 3:00pm. In other words, this configuration builds 30 minutes of delay into the search schedule. However, both the search time range and the search schedule span 1 hour, so there are no event data overlaps or gaps.

In the Save As Alert panel:

1. Click Scheduled.

2. When the Time Range drop-down appears, select Run on Cron Schedule from the list.

3. Set the time range of the search using relative time modifier syntax. In the Earliest field, enter a value of -90m.

4. In the Latest field, enter a value of -30m.

This sets the time that the search covers to a period that begins 90 minutes before the search launch time, and ends 30 minutes before the search launch time.

5. In the Cron Expression field, enter 30 * * * * to have your search run every hour on the half hour.

For more information about the relative time modifier syntax for search time range definition, see "Specify time modifiers in your search" in the Search Manual.
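For reference, the equivalent scheduling attributes in savedsearches.conf for this example would look something like the following sketch, assuming a saved search stanza already exists.

enableSched = 1
# Run every hour on the half hour
cron_schedule = 30 * * * *
# Cover a one-hour window that ends 30 minutes before each scheduled run
dispatch.earliest_time = -90m
dispatch.latest_time = -30m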

Manage the priority of concurrently scheduled searches

Depending on how your Splunk implementation is set up, you might only be able to run one scheduled search at a time. Under this restriction, when you schedule multiple searches to run at approximately the same time, the search scheduler runs them consecutively while ensuring that each search still covers the period of time over which it is supposed to gather data. However, there are cases where you might need certain searches to run ahead of others, either to ensure that they obtain current data or to ensure that gaps in data collection do not occur.

You can configure the priority of scheduled searches through edits to savedsearches.conf. For more information about this feature, see "Configure the priority of scheduled reports" in the Reporting Manual.
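For example, one savedsearches.conf setting that relates to this tradeoff is realtime_schedule. The following is a sketch only, assuming the attribute is available in your version; it is not a substitute for the Reporting Manual topic referenced above.

# realtime_schedule = 1 (default): the scheduler favors current data and may
# skip runs of this search if the scheduler falls behind
# realtime_schedule = 0: continuous scheduling; runs are never skipped, which
# helps prevent gaps in data collection
realtime_schedule = 0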

Set up triggering conditions for a scheduled alert

An alert based on a scheduled search triggers when the results returned by a scheduled run of the base search meet specific conditions, such as passing a numerical threshold.

These triggering conditions break scheduled alerts into two subcategories: basic conditional scheduled alerts and advanced conditional scheduled alerts. You set these triggering conditions when you set values for the Trigger condition field in the Save As Alert dialog.

A basic conditional alert triggers when the number of events in the results of a scheduled search meets a simple alerting condition (for example, a basic conditional alert can be triggered when the number of events returned by its search is greater than, less than, or equal to a number that you provide).

Note: If you define or edit your alert at Settings > Searches and Reports, the Condition drop-down enables you to additionally set up basic conditional alerts that monitor numbers of hosts or sources amongst the events reviewed by the search.

An advanced conditional alert uses a secondary "conditional" search to evaluate the results of the scheduled or real-time search. With this setup, the alert triggers when the secondary search returns any results.

Define a basic conditional alert

On the Save As Alert dialog, follow this procedure to define a basic conditional alert that notifies you when a simple alerting condition is met by the number of events returned by the search. Alert Type must be set to Scheduled in order for you to see the fields described in this procedure.


1. Set Trigger Condition to Number of results. This is the basic condition that triggers the alert.

Note: If you are defining or updating your alert at Settings > Searches and reports, the Condition drop-down additionally enables you to select Number of hosts and Number of sources.

2. Choose a comparison operation from the Trigger if number of results is drop-down. You can select Is greater than, Is less than, Equals to, or Is not equal to depending on how you want the alerting condition to work.

Note: If you are defining or updating your alert at Settings > Searches and reports, you can additionally select rises by and drops by for comparison operations. These operations compare the results of the current run of the search against the results returned by the previous run. Rises by and drops by are not available for alerts based on real-time searches.

3. In the field adjacent to the comparison operation list, enter an integer to complete the basic conditional alert. This integer represents a number of events.

The alert triggers if the results of a scheduled run of the search meet the condition you set. For example, you can set up an alert for a scheduled search that sends out a notification if the number of results returned by the search is less than a threshold of 10 results.
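In savedsearches.conf terms, a basic condition like this corresponds roughly to the counttype, relation, and quantity attributes. The following sketch assumes that the "Number of results" condition in the UI maps to the "number of events" counttype; verify the values against savedsearches.conf.spec for your version.

# Trigger when the search returns fewer than 10 results
counttype = number of events
relation = less than
quantity = 10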

Note: Basic conditional alerts work differently for scheduled searches (which use a historical, scheduled search) and rolling-window alerts (which use a real-time search with a rolling time window of a specified width).

When you define a rolling-window alert as a basic conditional alert, the alert triggers when the set condition occurs within the rolling time window of the search. For example, you could have a rolling-window alert that triggers the moment that the rolling 60 second window for the search has 5 or more results all at the same time.

What this means is that if the real-time search returns one result and then four more results five minutes later, the alert does not trigger. But if all five results are returned within a single 60-second span of time, the alert does trigger.

Define an advanced conditional alert

In an advanced conditional alert, you define a secondary conditional search that Splunk applies to the results of the scheduled search. If the secondary search returns any results, Splunk triggers the alert. This means that you need to use a secondary search that returns zero results when the alerting conditions are not met.

By basing your alert conditions on the result of a secondary conditional search, you can define specific conditions for triggering alerts and reduce the incidence of false positive alerts.


Follow this procedure to define an advanced conditional alert:

1. In the Trigger if list, select Custom condition is met. The Custom condition field appears.

2. Enter your conditional search in the Custom condition field.

3. Complete your alert definition by defining alert actions, throttling rules, and sharing (see below).

Every time the base search runs on its schedule, the conditional search runs against the output of that search. Splunk triggers the alert if the conditional search returns one or more results.

Note: If you have set up a Send email action for an alert that uses a conditional search, and you've arranged for results to be included in the email, be aware that Splunk will include the results of the original base search. It does not send the results of the secondary conditional search.

Advanced conditional alert example

Let's say you're setting up an alert for the following search, which is scheduled to run every 10 minutes:

failed password | stats count by user

This search returns the number of incorrect password entries associated with each user name.

What you want to do is arrange to have Splunk trigger the alert when the scheduled search finds more than 10 password failures for any given user. When the alert triggers, Splunk sends an email containing the results of the triggering search to interested parties.

Now, it seems like you could simply append | search count > 10 to the original scheduled search:

failed password | stats count by user | search count > 10

Unfortunately, if you create a basic conditional alert based on this search, where the alert triggers when the number of results returned by the base search is greater than 10, you won't get the behavior you want. The trigger condition counts the results returned by the search--one result per user name--rather than the number of failed password entries, so the alert would not fire just because a single user passes the threshold. In addition, when the alert triggers and the results are emailed to stakeholders, you want the recipients to have a listing that matches each user name to the precise number of failed password attempts associated with it.

What you want to do is set Condition to If custom condition is met and then place search count > 10 in the Custom condition search field (while removing it from the base search). This conditional search runs against the results of the original scheduled search (failed password | stats count by user). With this setup, the alert triggers only when the custom condition is met--when one or more user names are associated with more than 10 failed password entries. But when it triggers, the results of the original search--the list of user names and their failed password counts--are sent to stakeholders via email.
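For reference, the equivalent trigger definition in savedsearches.conf uses the alert_condition attribute, which holds the secondary conditional search. This is a sketch only; the stanza name and schedule are placeholders.

[Failed password alert]
search = failed password | stats count by user
enableSched = 1
# Placeholder schedule: every 10 minutes, over the previous 10 minutes
cron_schedule = */10 * * * *
dispatch.earliest_time = -10m
dispatch.latest_time = now
# The alert triggers when this secondary search returns one or more results
alert_condition = search count > 10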

Note: Advanced conditional alerts work slightly differently when you are designing them for rolling-window alerts, which run in real-time rather than on a schedule. In the case of the above example, you could design a rolling-window alert with the same base search and get similar results with the custom condition search as long as the rolling window was set to be 10 minutes wide. As soon as the real-time search returns 10 failed password entries for the same user within that 10-minute span of time, the alert is triggered.

For more examples of scheduled alerts, see "Alert examples," in this manual.

Enable actions for an alert based on a scheduled search

On the Actions step for a scheduled alert, you can enable one or more alert actions. These actions are set off whenever the alert is triggered.

There are three kinds of alert actions that you can enable through the Save As Alert dialog. For Enable actions you can select:

  • Send email - Send an email to a list of recipients that you define. You can opt to have this email contain the results of the triggering search job.
  • Run a script - Run a shell script that can perform some other action, such as the sending of an SNMP trap notification or the calling of an API. You determine which script is run.
  • Show triggered alerts in Alert manager - Have triggered alerts display in the Alert Manager with a severity level that you define. The severity level is non-functional and is for informational purposes only. (Note: In Manager > Searches and Reports, to have trigger records for an alert display in the Alert Manager, you enable the Tracking alert action.)

You can enable any combination of these alert actions for an individual alert.
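If you manage alerts in savedsearches.conf, these actions correspond to attributes like the following. This is a sketch with placeholder values, assuming the script already exists in $SPLUNK_HOME/bin/scripts; verify the attribute names against savedsearches.conf.spec and alert_actions.conf.spec.

# Send email, with the triggering search results included
action.email = 1
action.email.to = oncall@example.com
action.email.sendresults = 1
# Run a shell script from $SPLUNK_HOME/bin/scripts
action.script = 1
action.script.filename = notify_snmp.sh
# Show the triggered alert in the Alert manager, with a severity level
alert.track = 1
alert.severity = 4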


Note: You can also arrange to have Splunk post the result of the triggered alert to an RSS feed. To enable this option, go to Manager > Searches and Reports and click the name of the search that the alert is based upon. Then, in the Alert actions section, click Enable for Add to RSS.

Important: Before enabling actions, read "Set up alert actions," in this manual. This topic discusses the various alert actions at length and provides important information about their setup. It also discusses options that are only available via the Searches and reports page in Manager, such as the ability to send reports with alert emails in PDF format, RSS feed notification, and summary indexing enablement.

Determine how often actions execute when scheduled alerts trigger

When you set up an alert based on a scheduled, historical search, you use the When triggered, execute actions and Throttle? controls to determine how often Splunk executes actions after an alert triggers.

When triggered, execute actions lets you specify whether, once the alert triggers, the alert actions execute Once for the triggering search, or For each result returned by the triggering search. In other words, you can set things up so that actions are executed only once per search, or multiple times per search: once for each result returned.

After you choose that setting, you can choose whether or not those actions should be throttled in some manner.

If you select Once, you can specify that subsequent alert actions should be suppressed for a specific number of seconds, minutes, or hours.


For example, say you have an alert based on a scheduled search that runs every half hour.

  • It has Once selected and throttling set to suppress actions for two hours. It also has Send email and List in Triggered Alerts set as alert actions.
  • It runs on its schedule, and the alerting conditions are met, so it triggers.
  • Alert actions execute once for all results returned by the search, per the Once setting. Splunk sends alert emails out to the listed recipients and shows the triggered alert on the Alert Manager page.
  • Then the alert runs again on its schedule, a half-hour later, and it triggers again. But because the alert's throttling controls are set to suppress alert actions for two hours, nothing happens. No more alert actions can be executed until two hours (or three more runs of the scheduled search upon which the alert is based) have passed.

If the alert triggers again after the two hours are up, the alert actions execute again, just as they did before, and are then suppressed for another two hours.
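In savedsearches.conf, the example above corresponds roughly to digest mode with suppression enabled. The following is a sketch; check the attribute names and value formats against savedsearches.conf.spec.

# Execute actions once per triggering search (digest mode)
alert.digest_mode = 1
# After the alert triggers, suppress further alert actions for two hours
alert.suppress = 1
alert.suppress.period = 2h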

If you select For each result, the throttling rules are different, because when the alert triggers, multiple actions can be executed, one for each result returned by the search. You can throttle action execution for results that share a particular field value.


Here's an example of a scheduled alert that performs actions for every result once it's triggered:

Say you're a system administrator who is responsible for a number of network servers, and when these servers experience excessive numbers of errors over a 24-hour period, you'd like to run a system check on them to get more information. Your logs identify servers with the servername field. Here's what you do:

1. Start by designing a script that performs a system check of a network server when it's given a value for the servername field, and which subsequently returns the results of that check back to you.

2. Then design a search that returns the servername values of machines that have experienced 5 or more errors over the past 24 hours, one value per result.

3. Next, open the Save As Alert dialog for this search.

4. Set the Alert Type to "Scheduled" by clicking the Scheduled button.

5. In the Time Range drop-down that appears, select Run on Cron Schedule.

6. Use cron notation to schedule the search so it runs once each day at midnight. In the Cron schedule field, enter 0 0 * * *.

Because its defined time range is the past 24 hours, the alert returns results for the previous day.

7. Set the alert up so that it triggers whenever more than 0 results are returned. In the Trigger condition drop-down, select Number of results.

8. Next, in the Trigger if number of results is drop-down, select Greater than, and in the adjacent field, enter 0.

9. Click Next to go to the alert actions page of the Save As Alert dialog.

10. On the alert actions page, enable the Run a script action, and in the field that appears, enter the path to your system check script.

Your script should already exist in $SPLUNK_HOME/bin/scripts. If it is not there, place it there first.

11. Finally, set up the alert so that when it triggers, it executes the "run script" action for each result received.

This is a case where you'd likely keep throttling off, since your search is set up to return error counts by servername, and only for those servers that experience 5 or more errors in the 24-hour time range of the search. You won't have to worry about getting multiple results with the same servername value, and there probably isn't much value in suppressing actions when you run the search on a daily basis.
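Putting the whole server-check example together, the saved search might look something like the following savedsearches.conf stanza. The search string, script name, and field name are placeholders for illustration; verify the attributes against savedsearches.conf.spec for your version.

[Daily server error check]
# Placeholder search: one result per server with 5 or more errors in the past day
search = index=main error | stats count BY servername | where count >= 5
enableSched = 1
cron_schedule = 0 0 * * *
dispatch.earliest_time = -24h
dispatch.latest_time = now
# Trigger when the search returns more than 0 results
counttype = number of events
relation = greater than
quantity = 0
# Execute the script action once per result (per-result alerting)
alert.digest_mode = 0
action.script = 1
action.script.filename = server_check.sh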

Share scheduled alerts with others

On the alert actions page for an alert based on a scheduled search, you can determine how to share the alert if you have a role that gives you Write access to the knowledge objects in your app (such as the Power or Admin roles).

Sharing rules are the same for all alert types: you can opt to keep the alert private, or you can share it as read-only with all users of the app you're currently using. For the latter choice, "read-only" means that other users of your current app can see and use the alert, but they can't update its definition via Settings > Searches and reports.


If you have edit permissions for the alert, you can find additional permission settings in Settings > Searches and reports. For more information about managing permissions for Splunk knowledge objects (such as alert-enabled searches) read "Manage knowledge object permissions" in the Knowledge Manager Manual.
