
Alert on Splunk RUM data 🔗

Splunk RUM uses the Splunk Infrastructure Monitoring platform to create detectors and alerts. Configure detectors to alert on your Splunk RUM metrics so that you can monitor your application and act on issues in a timely way.

How alerts work in Splunk RUM 🔗

In Splunk RUM for Browser, alerts are triggered on aggregate metrics for the entire application. If you want to create an alert for a page-level metric, first create a custom event for the metric, then create an alert for the custom event. To learn more, see Create custom events. If you are new to alerts and detectors, see Introduction to alerts and detectors in Splunk Observability Cloud.
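
For context, the following sketch shows what the page-level instrumentation behind a custom event might look like. It is a minimal example, not the documented procedure: it assumes the Splunk Browser RUM agent is already initialized on the page and that spans carrying a workflow.name attribute surface as custom events. Confirm the exact pattern in Create custom events for your agent version.

```typescript
import { trace } from '@opentelemetry/api';

// Minimal sketch: emit a page-level custom event as an OpenTelemetry span.
// The span name, the 'workflow.name' convention, and the attribute names are
// assumptions for illustration; verify them against Create custom events.
function recordCheckoutRender(durationMs: number): void {
  // If the Splunk Browser RUM agent registered a global tracer provider,
  // this returns a tracer that reports to Splunk RUM; otherwise it is a no-op.
  const tracer = trace.getTracer('checkout-page');
  const span = tracer.startSpan('checkout.page.render', {
    attributes: {
      'workflow.name': 'checkout.page.render',
      'render.duration_ms': durationMs,
    },
  });
  span.end();
}

// Example usage: report a measured render time for the checkout page.
recordCheckoutRender(1830);
```

You can then point a detector at the metric generated from this custom event instead of at an application-wide aggregate.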

Example 🔗

Web vitals have a standard range that denotes good performance. For example, a Largest Contentful Paint (LCP) value of more than 2.5 seconds can lead to a poor user experience in your application. With Splunk RUM, you can create an alert that notifies you when your aggregated LCP is more than 2.5 seconds, sends a Slack notification to your team, and links to a runbook with steps to remedy the slow LCP.

This image shows the detector configuration modal in Splunk RUM. To create the detector, select Activate.
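
As a sketch of what the detector logic behind this example might look like, the snippet below stores an illustrative SignalFlow program in a string. The metric name rum.webvitals.lcp.time and the millisecond unit are assumptions; look up the exact LCP metric name and unit in the Splunk RUM metrics reference.

```typescript
// Illustrative only. The metric name and unit are assumptions; check the
// Splunk RUM metrics reference for the exact LCP metric in your organization.
const lcpDetectorProgram = `
LCP = data('rum.webvitals.lcp.time').mean()
detect(when(LCP > 2500)).publish('LCP above 2.5 seconds')
`;
```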

Integrations 🔗

You can use the following methods and integrations to receive alerts from Splunk RUM:

  • Email notifications

  • Jira

  • PagerDuty

  • ServiceNow

  • Slack

  • VictorOps

  • xMatters

You can also add a link to your alert message, such as a link to a runbook or other troubleshooting information in your organization.

Data retention 🔗

Alerts are triggered based on Infrastructure Monitoring metrics, which are stored for 13 months. For more information, see Data retention in Splunk Observability Cloud.

Types of metrics you can alert on 🔗

You can create alerts on the following kinds of metrics. For a comprehensive list of Splunk RUM metrics, see Splunk RUM metrics reference. To learn more about web vitals, see https://web.dev/vitals/ in the Google developer documentation.

Web vitals

  • LCP (Largest Contentful Paint)

  • FID (First Input Delay)

  • CLS (Cumulative Layout Shift)

Custom events

Page metrics

  • Page views and route changes

  • JavaScript errors

  • Long task length

  • Long task count

Endpoint metrics

  • Endpoint requests

  • Endpoint latency

  • TTFB (Time to First Byte)

Alert trigger conditions 🔗

RUM alert conditions are designed to reduce noise and provide clear, actionable insights into your data. You can configure the sensitivity of the alert to suit your needs.

Configuration example 🔗

The following image shows an example configuration for the fictitious Buttercup Industries that sends an alert if 50% of the data points in a five-minute window exceed 200 ms.

This image shows the detector configuration modal in Splunk RUM. The trigger threshold is 200 ms, and the sensitivity is set to 50% of a five-minute window.

Increase the sensitivity of your alerts 🔗

If you want an alert that is more sensitive to smaller changes, reduce the percentage. For example, if you set the sensitivity to 10%, you receive an alert when only 10% of the data points in the given time window cross the threshold you set.
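
Under the hood, Infrastructure Monitoring detectors express this kind of condition in SignalFlow, where the window and percentage roughly correspond to the lasting and at_least arguments of when(). The following sketch uses the Buttercup Industries numbers from above; the metric name rum.workflow.time is a placeholder assumption.

```typescript
// Sketches of SignalFlow program text, stored as strings for illustration.
// The metric name is a placeholder; the thresholds mirror the example above.

// Default example: alert when at least 50% of the data points in a
// five-minute window exceed 200 ms.
const defaultSensitivity = `
latency = data('rum.workflow.time').mean()
detect(when(latency > 200, lasting='5m', at_least=0.5)).publish('Latency above 200 ms')
`;

// More sensitive variant: only 10% of the window needs to cross the threshold.
const higherSensitivity = `
latency = data('rum.workflow.time').mean()
detect(when(latency > 200, lasting='5m', at_least=0.1)).publish('Latency above 200 ms')
`;
```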

Create a detector 🔗

You can create a detector from either the RUM overview page or Tag Spotlight.

Follow these steps to create a detector in RUM:

  1. In Splunk RUM, select the metric you want to alert on to open Tag Spotlight.

  2. Select Create New Detector.

  3. Configure your detector:

    • Name your detector

    • Select the metric you want to alert on and the type of data

    • Set the static threshold for your alert

    • Select the scope of your alert

    • Select the severity of the alert

  4. Share your alert with others by integrating with the tool your team uses to communicate and adding a link to your runbook.

  5. Select Activate.
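
If you prefer to manage detectors as code rather than through the UI, you can create an equivalent detector with the Splunk Observability Cloud detector API. The sketch below is an outline under several assumptions: the realm, access token, metric name, Slack credential ID, and runbook URL are placeholders, and the payload fields should be verified against the detector API reference before use.

```typescript
// Sketch: create a RUM detector through the Observability Cloud API.
// REALM, SFX_ACCESS_TOKEN, the metric name, the Slack credentialId, and the
// runbook URL are placeholders; verify the payload schema against the
// detector API reference for your API version.
const REALM = 'us1';
const ACCESS_TOKEN = process.env.SFX_ACCESS_TOKEN ?? '';

const detector = {
  name: 'RUM: LCP above 2.5 seconds',
  programText: [
    "LCP = data('rum.webvitals.lcp.time').mean()",
    "detect(when(LCP > 2500, lasting='5m', at_least=0.5)).publish('lcp_high')",
  ].join('\n'),
  rules: [
    {
      detectLabel: 'lcp_high',
      severity: 'Major',
      // Link to the runbook with steps to remedy slow LCP.
      runbookUrl: 'https://wiki.example.com/runbooks/slow-lcp',
      notifications: [
        { type: 'Slack', credentialId: '<slack-integration-id>', channel: 'rum-alerts' },
      ],
    },
  ],
};

async function createDetector(): Promise<void> {
  const response = await fetch(`https://api.${REALM}.signalfx.com/v2/detector`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-SF-TOKEN': ACCESS_TOKEN,
    },
    body: JSON.stringify(detector),
  });
  if (!response.ok) {
    throw new Error(`Detector creation failed: ${response.status} ${await response.text()}`);
  }
}

createDetector().catch(console.error);
```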

Create dashboards for your RUM alerts 🔗

You can create dashboards for both web and mobile metrics. To see a list of the metrics available in Splunk RUM, see Splunk RUM metrics reference.

To create charts and dashboards for your RUM alerts and detectors, see: