
Alert on Splunk RUM data

Configure detectors to alert on your Splunk RUM metrics so that you can monitor your application and take timely action on the alerts it generates. In Splunk RUM for Browser, alerts are triggered on aggregate metrics for the entire application. If you want to create an alert for a page-level metric, first create a custom event for the metric, then create an alert for the custom event. To learn more, see Create custom events.

If you are new to alerts and detectors, start with Introduction to alerts and detectors in Splunk Observability Cloud.

How alerts work in RUM

Splunk RUM leverages the Infrastructure Monitoring platform to create detectors and alerts.

What metrics can I trigger alerts on?

You can trigger alerts on the following kinds of metrics:

Web vitals

  • LCP (Largest Contentful Paint)

  • FID (First Input Delay)

  • CLS (Cumulative Layout Shift)

Custom events

Page metrics

  • Front-end requests

  • Front-end errors

  • Long task length

  • Long task count

Endpoint metrics

  • Endpoint requests

  • Endpoint latency

  • TTFB (Time to First Byte)

If you are new to web vitals, see https://web.dev/vitals/ in the Google developer documentation.

Alert trigger conditions

RUM alert conditions are designed to reduce noise and provide clear, actionable insights from your data. An alert is triggered when 50% of the data points in a five-minute window cross the static threshold that you defined.
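
As an illustration of that condition, the following Python sketch evaluates one five-minute window of datapoints against a static threshold. The function and sample values are hypothetical and are not part of any Splunk API; they only restate the 50% rule described above.

```python
# Illustrative sketch only: restates the RUM trigger condition described above.
# An alert fires when at least 50% of the datapoints in a five-minute window
# cross the static threshold defined on the detector.

def should_trigger(datapoints, threshold, required_ratio=0.5):
    """Return True if enough datapoints in the window cross the threshold."""
    if not datapoints:
        return False
    crossing = sum(1 for value in datapoints if value > threshold)
    return crossing / len(datapoints) >= required_ratio

# Hypothetical window: 4 of 5 datapoints exceed a 2.5 s threshold, so it fires.
window_seconds = [2.1, 2.8, 3.0, 2.9, 2.7]
print(should_trigger(window_seconds, threshold=2.5))  # True
```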

Integrations for RUM alerts

You can use the following methods and integrations to receive alerts from Splunk RUM:

  • Email notifications

  • Jira

  • PagerDuty

  • ServiceNow

  • Slack

  • VictorOps

  • xMatters

You can also add a link in your notification message, such as a link to a runbook or other troubleshooting information used in your organization.

Data retention

Alerts are triggered based on Infrastructure Monitoring metrics, which are stored for 13 months. For more information, see Data retention in Splunk Observability Cloud.

Use case for RUM alerts

Web vitals have a standard range that denotes good performance. For example, a Largest Contentful Paint (LCP) value of more than 2.5 seconds might lead to a poor user experience in your application. With Splunk RUM, you can create an alert that notifies you when your aggregated LCP is more than 2.5 seconds, sends a Slack notification to your team, and links to a runbook with steps to remedy the slow LCP.
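
Because the LCP metric listed later in this topic (rum.webvitals_lcp.time.ns.p75) reports time in nanoseconds, the 2.5-second guideline corresponds to a static threshold of 2,500,000,000 ns. The following sketch shows only that unit conversion; the variable names and sample value are hypothetical.

```python
# Illustrative sketch only: the 2.5 s LCP guideline expressed in nanoseconds,
# the unit used by rum.webvitals_lcp.time.ns.p75.

LCP_THRESHOLD_SECONDS = 2.5
LCP_THRESHOLD_NS = int(LCP_THRESHOLD_SECONDS * 1e9)  # 2_500_000_000 ns

observed_p75_lcp_ns = 3_100_000_000  # hypothetical aggregated p75 LCP value

if observed_p75_lcp_ns > LCP_THRESHOLD_NS:
    seconds = observed_p75_lcp_ns / 1e9
    print(f"LCP p75 is {seconds:.2f} s, above the {LCP_THRESHOLD_SECONDS} s guideline")
```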

Create a detector

You can create a detector from either the RUM overview page or from Tag Spotlight.

Follow these steps to create a detector in RUM (a scripted alternative is sketched after the steps):

  1. In Splunk RUM, select a metric of interest to open Tag Spotlight.

  2. Select Create New Detector.

  3. Configure your detector:

    • Name your detector

    • Select the metric of interest and the type of data

    • Set the static threshold for your alert

    • Select the scope of your alert

    • Select the severity of the alert

  4. Share your alert with others by integrating with the tool your team uses to communicate and adding a link to your runbook.

  5. Select Activate.
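
If you prefer to script detector creation rather than use the UI, one possible approach is sketched below. It assumes the Splunk Observability Cloud detector REST API (a POST to /v2/detector on your realm's API endpoint) and a SignalFlow program; the realm, token, metric, threshold, and labels are placeholders, and this workflow is not covered in this topic, so verify the endpoint and fields against the API documentation before relying on it.

```python
# Hedged sketch, not an official example: create a RUM detector through the
# Splunk Observability Cloud API. Assumes the /v2/detector endpoint and
# SignalFlow; verify both against the current API documentation.
import requests

REALM = "us1"                  # placeholder: your Splunk Observability realm
TOKEN = "YOUR_ACCESS_TOKEN"    # placeholder: an org access token

# SignalFlow program: alert when aggregated p75 LCP exceeds 2.5 s (2.5e9 ns).
program_text = (
    "lcp = data('rum.webvitals_lcp.time.ns.p75').mean().publish(label='lcp')\n"
    "detect(when(lcp > 2500000000)).publish('LCP above 2.5s')"
)

detector = {
    "name": "RUM: LCP above 2.5 seconds",
    "programText": program_text,
    "rules": [
        {
            "detectLabel": "LCP above 2.5s",  # must match the detect publish label
            "severity": "Major",
            "notifications": [],              # add Slack, email, and so on here
        }
    ],
}

response = requests.post(
    f"https://api.{REALM}.signalfx.com/v2/detector",
    headers={"X-SF-Token": TOKEN, "Content-Type": "application/json"},
    json=detector,
)
response.raise_for_status()
print("Created detector:", response.json().get("id"))
```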

Create dashboards for your RUM alerts

You can create dashboards for both web and mobile metrics.

The following table lists the name for each metric in RUM.

Metric                      Name
LCP                         rum.webvitals_lcp.time.ns.p75
CLS                         rum.webvitals_cls.score.p75
FID                         rum.webvitals_fid.time.ns.p75
Mobile crash                rum.crash.count
App error                   rum.app_error.count
Cold start                  rum.cold_start.time.ns.p75
Cold start count            rum.cold_start.count
Warm start count            rum.warm_start.count
Warm start time             rum.warm_start.time.ns.p75
Hot start count             rum.hot_start.count
Hot start time              rum.hot_start.time.ns.p75
Event requests/errors       rum.workflow.count
Front-end requests          rum.page_view.count
Document load latency       rum.page_view.time.ns.p75
Front-end errors            rum.client_error.count
Long task count             rum.long_task.count
Long task length            rum.long_task.time.ns.p75
Endpoint requests/errors    rum.resource_request.count
Endpoint latency            rum.resource_request.time.ns.p75
TTFB                        rum.resource_request.ttfb.time.ns.p75
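
Note that the metric names ending in .time.ns.p75 report values in nanoseconds. When you chart them, you might convert to milliseconds for readability; the helper below is only an illustration and is not part of any Splunk library.

```python
# Illustrative sketch only: convert the nanosecond-based *.time.ns.p75 metrics
# in the table above to milliseconds for chart labels.

NS_PER_MS = 1_000_000

def ns_to_ms(value_ns: float) -> float:
    """Convert a nanosecond metric value to milliseconds."""
    return value_ns / NS_PER_MS

# Hypothetical rum.page_view.time.ns.p75 datapoint of 1.8e9 ns.
print(ns_to_ms(1_800_000_000))  # 1800.0
```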

To create charts and dashboards for your RUM alerts and detectors, see: