Create an alert
- Define a basic conditional alert
- Define an advanced conditional alert
- Advanced conditional alert example
- Coordinate the search schedule with the search time range
- Manage the priority of concurrently scheduled searches
- Send email
- Create an RSS feed
- Run a script
- Track triggered alerts
Splunk alerts are based on saved searches that run either on a regular interval (if the saved search is a standard scheduled search) or in real time (if the saved search is a real-time search). The alert is triggered when the results of the scheduled or real-time search meet a particular condition that you add to the alert definition.
For example, you could base an alert on a scheduled search that runs every ten minutes. When it runs, it looks back at the syslog events that have been received from a particular host over the past 10 minutes. The alert is only triggered when the search returns a syslog event from this host that has a "disk full" error message. When the alert is triggered, an email goes out to a set of system administrators to inform them that the error has come up.
However, if your system administrators need even timelier alerts, you can base the disk-full alert on a real-time search that runs continuously in the background, looking at incoming syslog events from the same host. The moment the real-time search returns an event with the "disk full" error message, the alert is triggered, and Splunk sends an alert to the same set of system administrators.
Note: When Splunk is used out-of-the-box, only users with the Admin role can run and save real-time searches, schedule searches, or create alerts. In addition, you cannot create searches unless your role permissions enable you to do so. For more information on managing roles, see "Add and edit roles" in the Admin Manual.
Start by defining and saving a search
If you run a search and like the results it returns, you can create an alert based on it by clicking the Create alert link that appears above the search timeline after the search starts running. This opens the Create Alert window, which is broken up into three steps: Save Search, Set Up Alert, and Define Actions.
The first step, Save Search, enables you to define and save the search. If you need more information about this step, see "Save searches and share search results" in this manual.
This topic focuses on the last two steps of the Create Alert window, which are specific to alert creation:
- Set Up Alert: This is where you define the condition that must be met for the alert to be triggered, the schedule of the search upon which the alert is based (if the search is not a real-time search), and the alert throttling period, expiration time, and severity level.
- Define Actions: This is where you determine what happens when the alert conditions are met by the results of a scheduled run of the search and an alert is triggered. Alert actions include sending alert emails to a designated list of recipients, posting alert information to an RSS feed, and running a user-designated script that performs another action. The Define actions step is also where you determine whether this alert is tracked by the Alert manager when it is triggered.
Note: If you need to add or update alert settings for a preexisting saved search, go to Manager > Searches and Reports and locate the saved search. Click the search name to open the search detail page. This page contains all of the settings that you would otherwise see in the Create Alert window. You may need to select Schedule this search to expose the scheduling and alert setup controls.
When you are in Manager, keep in mind that you can only edit existing searches that you have both read and write permissions for. Searches can also be associated with specific apps, which means that you have to be using that app in order to see and edit the search. For more information about sharing and promoting saved searches (as well as other Splunk knowledge objects), see "Curate Splunk knowledge with Manager" in the Knowledge Manager manual.
For a series of alert examples showing how you might design alerts for specific situations using both scheduled and real-time searches, see "Alert use cases".
Set up the alerting condition
Alerts are triggered when a triggering condition is met. You can set up basic and advanced conditional alerts. You do this in Set Up Alert, the second step of the Create Alert dialog box.
A basic conditional alert is triggered when the number of events, sources, or hosts in the results of a scheduled or real-time search meets a simple alerting condition (for example, a basic conditional alert can be triggered when the number of events returned by its search is greater than, less than, or equal to a number that you provide).
An advanced conditional alert uses a secondary "conditional" search to evaluate the results of the scheduled or real-time search. With this setup, the alert is triggered when the secondary search returns any results.
You can also set up a scheduled search to alert every time it runs by setting Condition to always. This setting is typically used for summary indexing or to deliver a PDF report on some recurring interval.
You can find these controls in the Set Up Alert step of the Create Alert window. You can also find them in Manager > Searches and Reports, when you add a new saved search or click through to the detail page for an existing saved search.
Define a basic conditional alert
On Set Up Alert, the second step of the Create Alert dialog box, follow this procedure to define a basic conditional alert that notifies you when a simple alerting condition is met by the number of events, hosts, or sources returned by the search.
1. In the Condition section, determine what basic condition triggers the alert. Choose either If the number of events, If the number of hosts, or If the number of sources from the Condition list, depending on what you want to track. Choosing one of these three values causes the comparison operation field to appear underneath the list.
Note: Alternatively, you can select a Condition of always to have Splunk trigger the alert each time the search is run. This can be handy if the search runs on an infrequent basis and you just want to be notified of the results, no matter what they are.
2. Choose a comparison operation from the list that appears after you select one of the If the number of... conditions. Select is greater than, is less than, is equal to, drops by, or rises by, depending on how you want the alerting condition to work. (The rises by and drops by operations are not available for real-time searches.)
3. In the field adjacent to the comparison operation list, enter an integer to complete the basic conditional alert. This integer represents a number of events.
Note: Basic conditional alerts work differently for real-time searches and standard searches that are set up to run on a schedule:
- For a scheduled search, the alert is triggered if the results of a scheduled run of the search meet or exceed the set condition. For example, you can set up an alert for a scheduled search that sends out a notification if the number of events returned by the search rises by a threshold of 10 or more events since the last time the search was run.
- For a real-time search, the alert is triggered if the set condition occurs within the time range window of the search. For example, you could have a real-time search alert that triggers the moment that five or more events are returned by the search within its sliding 60-second window. (Just to be clear, this means that if the real-time search returns one event and then four more events five minutes later, the alert is not triggered. But if all five events are returned within a single 60-second span of time, the alert is triggered.)
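A basic conditional alert can also be expressed directly in a savedsearches.conf stanza. Here is a minimal sketch; the stanza name and search string are hypothetical, and the attribute names come from the savedsearches.conf specification, so verify them against the spec file for your version:

```ini
# Run every 10 minutes; trigger when the search returns more than 10 events
[disk_errors]
search = sourcetype=syslog "disk full"
enableSched = 1
cron_schedule = */10 * * * *
counttype = number of events
relation = greater than
quantity = 10
```

The counttype, relation, and quantity attributes correspond to the Condition list, the comparison operation list, and the integer field in the Set Up Alert step.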
Define an advanced conditional alert
Advanced conditional alerting enables you to set up an alerting condition that is based on the result of a conditional search that is applied to the results of the scheduled search. When the secondary search returns any results, the alert is triggered (so you should develop a secondary search that usually returns zero results unless the alerting conditions have been met).
By basing your alert conditions on the result of a secondary conditional search, you can define specific conditions for triggering alerts and reduce the incidence of false positive alerts.
Follow this procedure to define an advanced conditional alert:
1. In the Condition list, click if custom condition is met. The Custom condition search field appears.
2. Enter your conditional search in the Custom condition search field.
When the conditional search returns one or more events, the alert is triggered. If you have arranged for email to go out, note that the results of the original scheduled search (not the results of the conditional search) are sent to stakeholders.
Note: Advanced conditional alerts work differently for real-time searches and standard searches that are set up to run on a schedule:
- For a scheduled search, the alert is triggered when the conditional search evaluates the results of a run of the original scheduled search and returns one or more results.
- For a real-time search, the secondary, conditional search runs in real time as well. It continuously evaluates the results returned in the time range window of the original real time search. When a single event is returned by the conditional search, the alert is triggered.
Advanced conditional alert example
Let's say you're setting up an alert for the following search, which is scheduled to run every 10 minutes:
failed password | stats count by user
This search returns the number of incorrect password entries associated with each user name.
What you want to do is arrange to have Splunk trigger the alert when the scheduled search finds more than 10 password failures for any given user. When the alert is triggered, an email containing the results of the triggering search gets sent to interested parties.
Now, it seems like you could simply append | search count > 10 to the original scheduled search:
failed password | stats count by user | search count > 10
Unfortunately, if you create a basic conditional alert based on this search, where an alert is triggered if the number of events returned is greater than 0, you won't get the behavior you desire. This is because this new search only returns user names that are associated with more than 10 failed password entries--the actual count values are left out. When the alert is triggered and the results are emailed to stakeholders, you want the recipients to have a listing that matches each user name to the precise number of failed password attempts that it is associated with.
What you want to do is set Condition to If custom condition is met and then place search count > 10 in the Custom condition search field (while removing it from the base search). This conditional search runs against the results of the original scheduled search (failed password | stats count by user). With this, the alert is triggered only when the custom condition is met--when there are one or more user names associated with more than 10 failed password entries. But when it is triggered, the results of the original search--the list of user names and their failed password counts--are sent to stakeholders via email.
Note: If this search used the same string but ran in real time, it would work the same way as long as its time range window is set to be 10 minutes wide. As soon as the real-time search returns more than 10 failed password entries for the same user within that 10-minute span of time, the alert is triggered.
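The finished alert from this example could be sketched as a savedsearches.conf stanza like the following. The stanza name is hypothetical, and the attribute names come from the savedsearches.conf specification, so verify them against the spec file for your version:

```ini
# Run every 10 minutes; trigger when the conditional search returns results
[failed_password_watch]
search = failed password | stats count by user
enableSched = 1
cron_schedule = */10 * * * *
alert_condition = search count > 10
```

The alert_condition attribute holds the custom condition search; when it is set, the basic condition attributes are not used.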
For more examples of scheduled and real-time alerts, see "Alert examples," in this manual.
Schedule the search
Alerts that are based on standard searches (searches that collect data from the past rather than in real time, in other words) require that those searches be scheduled to run on a regular interval, such as every 10 minutes, every two hours, or every night at midnight. Use the Schedule list in Set Up Alert, the second step of the Create Alert dialog box, to define this interval. The Schedule list does not display for real-time searches.
Then, pick an alert schedule. You can pick a preset schedule like every 12 hours, every 30 minutes, every day at 6pm, and so on, or you can pick custom, which enables you to set up an interval using standard cron notation.
Note: Splunk only uses 5 parameters for cron notation, not 6. The parameters (* * * * *) correspond to minute hour day month day-of-week. The 6th parameter for year, common in other forms of cron notation, is not used.
Here are some cron examples:
- */5 * * * * : Every 5 minutes
- */30 * * * * : Every 30 minutes
- 0 */12 * * * : Every 12 hours, on the hour
- */20 * * * 1-5 : Every 20 minutes, Monday through Friday
- 0 9 1-7 * 1 : First Monday of each month, at 9am.
Coordinate the search schedule with the search time range
When you set up an interval for a scheduled search, it's good to keep the time range that you've defined for the search in mind. This can be especially true for distributed search setups where event data may not reach the indexer exactly when it is generated. In this case, it can be a good idea to schedule your searches with at least 60 seconds of delay.
This example sets up a search that runs every hour at the half hour, but collects an hour's worth of event data, beginning an hour and a half before the search is actually run. This means that when the scheduled search kicks off at 3:30pm, it is collecting the event data that Splunk indexed from 2:00pm to 3:00pm.
- Set an Earliest time value of -90m and a Latest time value of -30m.
- Set an Alert schedule custom cron notation of 30 * * * * to have your search run every hour on the half hour.
For more information about the relative time modifier syntax for search time range definition, see "Syntax for relative time modifiers" in this manual.
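The example above can also be sketched as a savedsearches.conf stanza. The stanza name and search string are hypothetical; the attribute names come from the savedsearches.conf specification, so verify them against your version:

```ini
# Runs at half past every hour, covering the window from -90m to -30m
[hourly_delayed_report]
search = sourcetype=syslog
dispatch.earliest_time = -90m
dispatch.latest_time = -30m
enableSched = 1
cron_schedule = 30 * * * *
```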
Manage the priority of concurrently scheduled searches
Depending on how you have your Splunk implementation set up, you may only be able to run one scheduled search at a time. Under this restriction, when you schedule multiple searches to run at approximately the same time, Splunk's search scheduler works to ensure that all of your scheduled searches get run consecutively for the period of time over which they are supposed to gather data. However, there are cases where you may need to have certain searches run ahead of others in order to ensure that current data is obtained, or to ensure that gaps in data collection do not occur (depending on your needs).
You can configure the priority of scheduled searches through edits to savedsearches.conf. For more information about this feature, see "Configure the priority of scheduled searches" in the Knowledge Manager manual.
Throttle frequent alerts
The Throttling controls in Set Up Alert, the second step of the Create Alert dialog box, help you determine how soon you would like to be notified again after receiving an alert. They are especially useful for real-time searches where triggering conditions might be met over and over again in a very short amount of time. You can use the Throttling settings to ensure that multiple occurrences of the alert are suppressed for a given number of minutes, hours, or days.
For example, you could have a real-time search with a 60-second window that alerts every time an event with a "disk error" message appears. If ten events with the "disk error" message come in within that window, ten disk error alerts will be triggered--ten alerts within one minute. And if the alert is set up so that an email goes out to a set of recipients each time it's triggered (see "Specify alert actions," below), those recipients probably won't see the stack of alert emails as being terribly helpful.
You can set the Throttling controls so that when one alert of this type is triggered, all successive alerts of the same type are suppressed for the next 10 minutes. When those 10 minutes are up, the alert can be triggered again and more emails can be sent out as a result--but once it is triggered, another 10 minutes have to pass before it can be triggered a third time.
In general, when you set up throttling for a real-time search it's best to start with a throttling period that matches the length of the base search's window, and then expand the throttling period from there, if necessary. This prevents you from getting duplicate notifications for a given event.
Note: Throttling settings are not usually required for scheduled searches, because only one alert is sent out per run of a scheduled search. But if the search is scheduled to run on a very frequent basis (every five minutes, for example), you can set the throttling controls to suppress the alert so that a much larger span of time--say, 60 minutes--has to pass before Splunk can send out another alert of the same type.
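As a sketch, the 10-minute throttling behavior described above can also be expressed in a savedsearches.conf stanza. The stanza name is hypothetical; the attribute names come from the savedsearches.conf specification, so verify them against your version:

```ini
# Suppress repeat alerts of this type for 10 minutes after one is triggered
[disk_error_realtime]
alert.suppress = 1
alert.suppress.period = 10m
```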
Define the alert retention time
You can determine how long Splunk keeps a record of your triggered alerts. When you are creating a new alert, use the Expiration list in Set Up Alert, the second step of the Create Alert dialog box, to define the amount of time that its triggered alert records (and their associated search artifacts) will be retained by Splunk. You can manage alert expiration for preexisting alerts in Manager > Searches and Reports. Just find the search that the alert is associated with and go to its detail page.
You can choose a preset expiration point for the alert records associated with this search, such as after 5 hours, or you can define a custom expiration point.
Note: If you set an expiration time for the alert records, be sure to also select the Tracking checkbox on Define Actions, the third step of the Create Alert dialog box. Splunk will not keep records of the triggered alerts in the Alert Manager unless Tracking is selected.
To review and manage your triggered alerts, go to the Alert manager by clicking the Alerts link in the upper right-hand corner of the Splunk interface. For more information about using it, see the "Review triggered alerts" topic in this manual.
Give the alert a severity level
Each alert can be labeled with a severity level that helps people know how important a triggered alert is in relation to other alerts. For example, an alert that lets you know that a server is approaching disk capacity could be given a High label, while an alert triggered by a "disk full" error could have a Critical label. You can define the alert severity label for a new alert in Set Up Alert, the second step of the Create Alert dialog box.
Severity labels are informational in purpose and have no additional functionality. You can use them to quickly pick out important alerts from the alert listing on the Alerts page, which you can get to by clicking the Alerts link in the upper right-hand corner of the Splunk interface.
Specify alert actions
When the results of a scheduled search meet the alerting conditions for that search, an alert is triggered, causing an alert action to take place. You define alert actions for a new alert in Define Actions, the third step of the Create Alert dialog box.
There are three kinds of alert actions, plus a tracking option:
- Notification by email - Splunk sends an email to a list of recipients that you define, and it can optionally contain the results of the triggering search job.
- Addition to an RSS feed - Splunk posts an alert notification to an RSS feed for the alert.
- Run of a shell script - Splunk runs a shell script that can perform some other action, such as the sending of an SNMP trap notification or the calling of an API. You determine which script is run.
- Tracking - Select this option to have triggered alerts display in the Alert manager.
You can enable any combination of these alert action types for a single alert.
Send email
If you want Splunk to contact stakeholders when the alert is triggered, select Enable next to Send email.
For the Subject field, supply a subject header for the email. By default, it is set to Splunk Alert: $name$. When it sends the email, Splunk replaces $name$ with the saved search name.
Splunk provides additional variables that you can use in the Subject field. Among other things, they can insert:
- The search that triggered the alert.
- The severity level of the alert.
- The number of results returned by the search.
- A Splunk Web URL where users can view the results.
- The absolute path to the results file.
- The search ID of the job that triggered the alert.
You can find a full list of available variables in the savedsearches.conf specification file in the Admin Manual.
For the Addresses field, enter a comma-separated list of email addresses to which the alert should be sent.
Note: For your email notifications to work correctly, you first need to have your email alert settings configured in Manager. See the subsection "Configure email alert settings in Manager," below.
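The email action settings above map onto savedsearches.conf attributes. Here is a minimal sketch; the stanza name and addresses are hypothetical, and the attribute names come from the savedsearches.conf specification, so verify them against your version:

```ini
[errors_last15]
action.email = 1
action.email.to = admin@example.com, oncall@example.com
action.email.subject = Splunk Alert: $name$
action.email.sendresults = 1
```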
Send results in alert emails
You can arrange to have email alert notifications contain the results of the searches that trigger them. This works best when the search returns a truncated list (such as a list that returns the top 20 results) or a table. To do this, select Include search results and then use the list to identify the format that the results should be delivered in. You can have the results sent inline (as part of the body of the alert email, in other words), or you can have them delivered as a .csv or .pdf attachment with the alert email.
The method of inclusion is controlled via alert_actions.conf (at a global level) or savedsearches.conf (at an individual search level); for more information, see "Set up alerts in savedsearches.conf" in the Admin Manual.
Important: You cannot configure alerts to send results as PDF attachments until you install the PDF Printer app on a central Linux host. If you aren't running this app, the as PDF option for Include search results will be unavailable. To get PDF printing working, contact a system administrator. For more information see "Configure PDF printing for Splunk Web" in the Installation manual.
You can also arrange to have PDF printouts of dashboards delivered by email on a set schedule. For more information, see "Schedule delivery of dashboard PDF printouts via email" in this manual.
The following is an example of what an email alert looks like:
Configure email alert settings in Manager
Email alerting will not work if the email alert settings in Manager are not configured, or are configured incorrectly. You can define these settings at Manager > System settings > Email alert settings. For details on how to fill out this screen, see "How alerting works" in the Admin manual.
If you don't see System settings or Email alert settings in Manager, you do not have permission to edit the settings. In this case, contact your Splunk Admin.
You can also use configuration files to set up email alert settings. You can configure them for your entire Splunk implementation in alert_actions.conf, and you can configure them at the individual search level in savedsearches.conf. For more information about .conf file management of saved searches and alert settings, see "Set up alerts in savedsearches.conf" in the Admin Manual.
In Splunk Web, navigate to Manager > System settings > Email alert settings. Here you can define the Mail server settings (the mail host, username, password, and so on) and the Email format (link hostname, email subject & format, include inline results, and so on).
Note: If you are using PDF Server, the link hostname field must be the search head hostname for the instance sending requests to a PDF Report Server. Set this option only if the hostname that is autodetected by default is not correct for your environment.
Create an RSS feed
If you want Splunk to post this alert to an RSS feed when it is triggered, select Enable next to Add to RSS.
When the alert conditions are met for a scheduled search that has RSS feed notification enabled, Splunk sends a notification out to its RSS feed. The feed is located at http://[splunkhost]:[port]/rss/[saved_search_name]. So if you're running a search titled "errors_last15" on a Splunk instance that is located on localhost and uses port 8000, the correct link for the RSS feed would be http://localhost:8000/rss/errors_last15.
You can also access the RSS feed for a scheduled search through Manager > Searches and Reports. If a scheduled search has been set up to provide an RSS feed for alerting searches, when you look it up on the Searches and reports page, you will see an RSS symbol in the RSS feed column:
You can click on this symbol to go to the RSS feed.
Note: The RSS feed for a scheduled search will not display any items until the search has run on its schedule and the alerting conditions that have been defined for it have been met. If you set the search up to alert each time it's run (by setting its Condition to always), you'll see items in the RSS feed after the first time the search runs on its schedule.
Warning: The RSS feed is exposed to any user with access to the webserver that displays it. Unauthorized users can't follow the RSS link back to the Splunk application to view the results of a particular search, but they can see the summarization displayed in the RSS feed, which includes the name of the search that was run and the number of results returned by the search.
Here's an example of the XML that generates the feed:
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Alert: errors last15</title>
    <link>http://localhost:8000/app/search/@go?sid=scheduler_Z2d1cHRh</link>
    <description>Saved Searches Feed for saved search errors last15</description>
    <item>
      <title>errors last15</title>
      <link>http://localhost:8000/app/search/@go?sid=scheduler_Z2d1cHRh</link>
      <description>Alert trigger: errors last15, results.count=123</description>
      <pubDate>Mon, 01 Feb 2010 12:55:09 -0800</pubDate>
    </item>
  </channel>
</rss>
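Because the feed is plain RSS 2.0, any standard XML tooling can consume it. As an illustration, this Python sketch (standard library only) pulls the triggered search's name and result count out of a feed shaped like the example above; the parsing helper and sample string are ours, not part of Splunk:

```python
import xml.etree.ElementTree as ET

# Sample feed shaped like the XML example above (declaration omitted)
SAMPLE_FEED = """<rss version="2.0">
  <channel>
    <title>Alert: errors last15</title>
    <item>
      <title>errors last15</title>
      <description>Alert trigger: errors last15, results.count=123</description>
    </item>
  </channel>
</rss>"""

def latest_alert(feed_xml):
    """Return (search_name, result_count) for the first <item> in the feed."""
    root = ET.fromstring(feed_xml)
    item = root.find("./channel/item")
    desc = item.findtext("description")
    # The description embeds the count as "results.count=<N>"
    count = int(desc.rsplit("results.count=", 1)[1].split()[0])
    return item.findtext("title"), count
```

In practice you would fetch the feed from http://[splunkhost]:[port]/rss/[saved_search_name] before parsing it.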
Run a script
If you want Splunk to run an alert script when the alert is triggered, select Enable for Run a script and enter the file name of the script that you want Splunk to execute.
For example, you may want an alert to run a script that generates an SNMP trap notification and sends it to another system such as a Network Systems Management console when its alerting conditions are met. Meanwhile, you could have a different alert that, when triggered, runs a script that calls an API, which in turn sends the triggering event to another system.
Note: For security reasons, all alert scripts must be placed in the $SPLUNK_HOME/bin/scripts directory. This is where Splunk will look for any script triggered by an alert.
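As a sketch of what such a script might look like, the following shell script logs the alert details and shows where a follow-on action (such as an SNMP trap) would go. The script, its log path, and the trap target are hypothetical; the positional arguments reflect the documented 4.x alert-script interface (for example, $1 is the number of events, $4 the saved search name, $5 the trigger reason, and $8 the path to the gzipped raw results file), so verify them against the docs for your version:

```shell
#!/bin/sh
# Hypothetical alert script: save under $SPLUNK_HOME/bin/scripts.
# Defaults let the script run standalone for testing.
EVENT_COUNT="${1:-0}"
SEARCH_NAME="${4:-unknown}"
REASON="${5:-unspecified}"

# Log the alert details (path is illustrative)
LOGFILE="/tmp/splunk_alert_example.log"
echo "alert '$SEARCH_NAME' fired ($EVENT_COUNT events): $REASON" >> "$LOGFILE"

# Follow-on action sketch (commented out; requires net-snmp's snmptrap):
# snmptrap -v 2c -c public nms.example.com '' SNMPv2-MIB::coldStart
```

Remember that the script runs with the permissions of the Splunk server process, so keep it minimal and avoid trusting its arguments blindly.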
Check out this excellent topic on troubleshooting alert scripts on the Splunk Community Wiki.
For more details on configuring alerts, including instructions for configuring alerts using savedsearches.conf, see the Admin Manual topic on alerts.
Track triggered alerts
If you want to have the Alert Manager keep records of the triggered alerts related to a particular alert configuration, select the Tracking checkbox. The Alert Manager will keep records of triggered alerts for the duration specified in the Expiration field on the Set Up Alert step of the Create Alert dialog box.
For more information about the Alert Manager and how it is used, see the "Review triggered alerts" topic in this manual.
Setting up tracking when upgrading to 4.2
When you upgrade your Splunk instance to 4.2, be aware that by default existing alerts do NOT show up in the Alert manager. To quickly update your existing alerts so that they show up in the Alert manager, edit the relevant copy of savedsearches.conf and add alert.track = true to the stanzas of each saved search that you have set up as an alert and want to see tracked in the Alert Manager. Review "About configuration files" in the Admin Manual for details about configuration files.
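A stanza updated this way might look like the following sketch (the stanza name and search are hypothetical; only the alert.track line is the addition described above):

```ini
[my_existing_alert]
search = sourcetype=syslog "disk full"
enableSched = 1
alert.track = true
```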
Specify fields to show in alerts through search language
When Splunk sends out alert emails with the results of the alerting search job, Splunk includes all the fields in those results. If you want to have certain fields included in or excluded from the results, you need to use the fields command in your saved search string to specify the field inclusions or exclusions.
- To eliminate a field from the search results, pipe your search to fields - $FIELDNAME.
- To add a field to the search results, pipe your search to fields + $FIELDNAME.
You can chain multiple fields commands to include and exclude fields in one search string. For example, your Search field may be:
yoursearch | fields - $FIELD1,$FIELD2 | fields + $FIELD3,$FIELD4
The alert you receive will exclude $FIELD1 and $FIELD2, but include $FIELD3 and $FIELD4.
Enable summary indexing
Summary indexing is an action that you can configure for any alert. You use summary indexing when you need to perform analysis and reporting on large amounts of data over long timespans, which typically can be quite time consuming and a drain on performance if several users are running similar searches on a regular basis.
With summary indexing, you base an alert on a search that computes sufficient statistics (a summary) for events covering a slice of time. The search is set up so that each time it runs on its schedule, the search results are saved into a summary index that you designate. You can then run searches against this smaller (and thus faster) summary index instead of working with the much larger dataset from which the summary index receives its events.
To set up summary indexing for an alert, go to Manager > Searches and Reports, and either add a new saved search or open up the detail page for an existing search or alert. (You cannot set up summary indexing through the Create Alert window.) To enable the summary index to gather data on a regular interval, set its Alert condition to always and then select Enable under Summary indexing at the bottom of the view.
Note: There's more to summary indexing--you should take care how you construct the search that populates the summary index and in most cases special reporting commands should be used. Do not attempt to set up a summary index until you have read and understood "Use summary indexing for increased reporting efficiency" in the Knowledge Manager manual.