Alerts in the Analytics Workspace
Use alerts to monitor and respond to specific behavior in your data. Analytics Workspace alerts are based on a specific chart. Alerts run a scheduled or continuous search of chart data and trigger when search results meet specific conditions.
To create alerts in the workspace, you need specific permissions. See Requirements for the Analytics Workspace for details.
To learn more about alerting in the Splunk platform, see Getting started with alerts in the Alerting Manual.
Parts of an alert
Alerts in the Analytics Workspace consist of alert settings, trigger conditions, and trigger actions.
Alert settings
Configure what you want to monitor in alert settings. Alert settings include:
- Alert title
- Alert description
- Permissions. Whether the alert is private or shared in the workspace.
- Alert Type. Scheduled alerts periodically search for trigger conditions. Streaming alerts continuously search for trigger conditions. Streaming alerts can also reduce search processing load by enabling similar alerts to share the same search process.
- How often you want to check alert conditions. For example, "Evaluate every 10 minutes".
Trigger conditions
Set trigger conditions to manage when an alert triggers. Trigger conditions consist of an aggregation to measure, a threshold value, and a time period to evaluate.
For example, set trigger conditions to "Alert when Avg (over 10-second intervals) cpu.usage is greater than 10k in the last 20 minutes". The alert triggers when the 10-second aggregate average for cpu.usage exceeds 10,000 at any point in the last 20 minutes.
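The workspace builds the underlying search for you, but it can help to picture roughly what such a condition evaluates. The following is a minimal SPL sketch only, not the exact search the workspace generates; the index name my_metrics_index and the explicit 20-minute window are assumptions.

```
| mstats avg(cpu.usage) AS avg_cpu_usage WHERE index=my_metrics_index earliest=-20m latest=now span=10s
| where avg_cpu_usage > 10000
```

The first line computes the average of cpu.usage in 10-second intervals over the last 20 minutes, and the second line keeps only the intervals where that average exceeds 10,000, which corresponds to the trigger condition above.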
An alert does not have to trigger every time conditions are met. Throttle an alert to control how soon the next alert can trigger after an initial alert.
Trigger actions
Configure trigger actions to manage alert responses. By default, you can view detailed information for triggered alerts on the Triggered Alerts page in Splunk. To access the Triggered Alerts page, select Activity > Triggered Alerts from the top-level navigation bar.
Specify a severity level to assign a level of importance to an alert. Severity levels can help you sort or filter alerts on the Triggered Alerts page. Available severity levels include Info, Low, Medium, High, and Critical.
For detailed information about the various actions that can be set up for triggered alerts, see Set up alert actions in the Alerting Manual.
The Alerting Manual also has instructions for configuring mail server settings so Splunk can send email alerts. See Email notification action.
Create an alert
Create an alert in the Analytics Workspace to monitor your data for certain conditions.
1. In the main panel, select the chart you want to use for the alert.
2. Click the ellipsis icon.
3. Click Save as Alert.
4. If your chart contains more than one time series, select the time series you want to use for the alert from the Source list.
5. Fill in the Settings and Trigger Conditions for your alert.
6. (Optional) Under Trigger Actions, click the + Add Actions drop-down list, and select additional actions for when the alert triggers. Triggered alerts are added to the Triggered Alerts page in the Splunk platform by default.
7. Click the Severity drop-down list, and select a severity level for the alert.
8. Click Save.
Manage alerts
View alerts that were previously created in the Analytics Workspace to monitor and respond to alert activity. Alerts show the same time range and hairline as other charts. Add an alert to the workspace through the Data panel. For more information, see Types of data in the Analytics Workspace.
Alert chart actions
Click the ellipsis icon in the top-right corner of an alert chart to view a list of alert chart actions.
Action | Description |
---|---|
Edit Alert | Modify alert conditions. |
Open in Search | Show the SPL that drives the alert in the Search & Reporting App. |
Clone this Panel | Open the alert query in a metrics chart for further analysis. |
Search Related Events | View a list of related log events. |
Alert details
Select an alert in the Analytics Workspace to view its details. Alert details show in the Analysis panel. These details include the settings, threshold, and severity level configured for the alert. A scheduled alert displays the scheduled alert badge next to the alert title. A streaming alert displays the streaming alert badge next to the alert title.
Show triggered instances to see when alert conditions are met.
1. In the main panel, select the alert to show triggered instances.
2. In the Analysis panel under Settings, select Show triggered instances.
Triggered instances appear as annotations on the chart.
Triggered instance annotations appear at the end of the evaluation window in which the alert triggers, not at the time the alert threshold is crossed.
Use alert badges to gauge the alert severity level. To help you monitor alert activity, badge colors are based on the most recent severity level of a triggered alert.
Severity level | Badge color |
---|---|
No trigger | Gray |
Info | Blue |
Low | Green |
Medium | Yellow |
High | Orange |
Critical | Red |
Example
The following alert shows CPU overutilization for the aws.ec2.CPUUtilization metric. This alert is based on the aggregate average values for the aws.ec2.CPUUtilization metric. The blue alert badge indicates a severity level of Info. The horizontal blue line shows the alert threshold (1.0m). The annotations show triggered instances for the alert.
Follow up on alerts
Follow up on a triggered alert to perform additional analysis of the underlying data. To investigate a situation highlighted in an alert, open the alert query in a metrics chart.
Analyze a triggered alert in a metrics chart
To perform additional analysis of alert conditions, clone the alert in the Analytics Workspace.
1. In the Data panel, search or browse for the alert that you want to investigate.
2. Click the alert name to open the alert in the Analytics Workspace.
3. To view a list of alert chart actions, click the ellipsis icon in the top-right corner of the alert chart.
4. Click Clone this Panel.
The alert query opens in a new metrics chart in the Analytics Workspace. You can perform additional analytic functions, such as filtering, modifying the time range, and splitting the chart by a dimension, to follow up on the conditions that triggered the alert.
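For example, splitting a cloned aws.ec2.CPUUtilization chart by a host dimension changes the aggregation so that each host gets its own series. The following is a hedged SPL sketch of that kind of query, assuming an index named my_metrics_index and a dimension named host:

```
| mstats avg(aws.ec2.CPUUtilization) AS avg_cpu WHERE index=my_metrics_index span=10s BY host
```

One series per host can make it easier to tell whether a single instance caused the spike that triggered the alert.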
Streaming metrics alert features not available in the Analytics Workspace
There are a few features for streaming metric alerts that are available only to users who can make direct edits to metric_alerts.conf, where streaming metric alert configurations are stored, or who can use the alerts/metric_alerts REST API endpoint.
Additional alert feature | Description | Setting |
---|---|---|
Set multiple group-by dimensions | You can identify a list of group-by dimensions for an alert. This results in a separate aggregation value for each combination of group-by dimensions, instead of just one aggregation value. The Splunk software evaluates the alert against each of these aggregation values. | groupby |
Define complex eval expressions for alert conditions | You can set alert conditions that include multiple Boolean operators, eval functions, and metric aggregations. They can also reference dimensions specified in the groupby setting. | condition |
Adjust lifespan of triggered streaming metric alert records | By default, records of triggered streaming metric alerts live for 24 hours. You can adjust this time on a per-alert basis for streaming metric alerts. | trigger.expires |
Adjust maximum number of triggered alert records for a given streaming metric alert | By default, only 20 triggered alert records of a given streaming metric alert can exist at any given time. You can raise or lower this limit on a per-alert basis according to your needs. | trigger.max_tracked |
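As a rough illustration of how these settings fit together, a metric_alerts.conf stanza might look like the following sketch. The stanza name, dimension names, threshold, and value formats are assumptions, and a real configuration requires additional settings (such as the metric and index to search) that are omitted here; see the metric_alerts.conf specification for exact syntax.

```
# Hypothetical stanza; names, values, and expression syntax are illustrative only.
[CPU usage by host and region]
# Evaluate a separate aggregation for each host/region combination.
groupby = host, region
# Eval-style condition combining a metric aggregation with a Boolean comparison.
condition = avg(cpu.usage) > 90
# Keep triggered alert records for 48 hours instead of the default 24.
trigger.expires = 48h
# Allow up to 50 triggered alert records for this alert at any given time.
trigger.max_tracked = 50
```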
For more information, see the metric_alerts.conf topic in the Admin Manual.
The streaming metric alert settings are also documented in the context of the alerts/metric_alerts endpoint in the REST API Reference Manual.
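For example, assuming default management settings and placeholder credentials, a request like the following would list the streaming metric alert configurations on a search head; consult the REST API Reference Manual for the supported operations and parameters.

```
# Placeholder host, port, and credentials; adjust for your deployment.
curl -k -u admin:changeme https://localhost:8089/services/alerts/metric_alerts
```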