
Interpret Uptime test results πŸ”—

Every run of an Uptime test in Splunk Synthetic Monitoring produces a set of results that help you understand the performance of your application in real time.

To learn more about the types of Uptime tests, see Use an Uptime Test to test port or HTTP uptime. To set up an Uptime test, see Set up an Uptime test.

View Uptime test history πŸ”—

On the Test History page, view a customizable summary of recent run results so you can assess the performance of your test at a glance.

  1. To open the Test History view for a test, select its row in the Tests list.

  2. You can take the following actions on the Test History page:

    • Select Edit test to edit your test configuration.

    • Select Create detector to create a detector based on your test. See Set up detectors and alerts in Splunk Synthetic Monitoring to learn more.

    • Select Actions > Pause test to pause your test.

    • Select Actions > Copy test to make a copy of your test. This opens the New Uptime Test page with the details of the test pre-filled.

Customize the Performance KPIs chart πŸ”—

The Performance KPIs chart offers a customizable visualization of your recent test results. In the Performance KPIs chart, use the selectors to adjust the following settings:

| Option | Default | Description |
| --- | --- | --- |
| Time | Last 8 hours | The amount of time shown in the chart. |
| Interval | Run level | The interval between each pair of data points. When you choose Run level, each data point on the chart corresponds to an actual run of the test; larger intervals show an aggregation of results over that time interval. If you choose a level higher than Run level, you can select an aggregate data point in the chart to zoom in and view the data at a per-run level. |
| Scale | Linear | Whether the y-axis has a linear or logarithmic scale. |
| Segment by | Location | Whether the data points are segmented by run location. Choose No segmentation to view data points aggregated from across all locations, pages, and synthetic transactions in your test, or choose Location to compare performance across multiple test locations. |
| Locations | All locations selected | The run locations to display on the chart. |
| Filter | All locations selected | If you have enabled segmentation by location, the run locations to display on the chart. |
| Metrics | Run duration | The metrics shown in the chart. By default, the chart displays the Run duration metric; use the drop-down list to choose other metrics to view. |
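
To build intuition for the Interval setting, consider how run-level data points collapse into aggregates at coarser intervals. The following sketch (hypothetical Python for illustration, not Splunk code) buckets run results by a fixed interval and averages the duration within each bucket:

```python
from collections import defaultdict

def aggregate_runs(runs, interval_s):
    """Bucket (timestamp_s, duration_ms) run results into fixed-width
    intervals and average the durations in each bucket, mimicking how
    the chart aggregates run-level data points at coarser intervals."""
    buckets = defaultdict(list)
    for ts, duration_ms in runs:
        buckets[ts - ts % interval_s].append(duration_ms)
    return {start: sum(v) / len(v) for start, v in sorted(buckets.items())}

# Three runs: two in the first 20-minute window, one in the next.
runs = [(0, 100.0), (60, 140.0), (1200, 300.0)]
print(aggregate_runs(runs, 1200))  # → {0: 120.0, 1200: 300.0}
```

Selecting an aggregate data point in the chart is the reverse operation: it narrows the window so you see the individual runs inside one bucket.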

View results for a specific run πŸ”—

To navigate to the Run Results view for a single run, select a data point within the Performance KPIs chart with the visualization interval at Run level and the segmentation set to Location.

If you’re viewing aggregate data (for example, at a 20-minute interval instead of at run level, or with no segmentation by location), selecting a data point first zooms in to run-level detail. From there, select a specific run to open its Run Results.

You can also select a row in the Recent run results table below the Performance KPIs chart.

Interpret Uptime test run results πŸ”—

When you navigate to the Run Results page for a particular run of an Uptime test, what you see depends on whether the test is a Port or HTTP test, and whether the run was successful or not.

Run Results: Success πŸ”—

For successful HTTP tests, the Run Results page shows the following metrics:

  • DNS time

  • Time to first byte

  • Response time

  • Uptime
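
For intuition, the timing metrics above can be derived from timestamps recorded at key moments during the request. This is a hypothetical sketch; the function and field names are illustrative, not Splunk's:

```python
def http_timings(start, dns_done, first_byte, response_done):
    """Derive HTTP Uptime timing metrics from event timestamps.
    All arguments are timestamps in milliseconds from a shared clock."""
    return {
        "dns_time_ms": dns_done - start,           # DNS time
        "ttfb_ms": first_byte - start,             # Time to first byte
        "response_time_ms": response_done - start, # Response time (duration)
    }

print(http_timings(start=0, dns_done=12, first_byte=80, response_done=210))
# → {'dns_time_ms': 12, 'ttfb_ms': 80, 'response_time_ms': 210}
```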

For successful Port tests, the Run Results page shows the following metric:

  • Response time
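
A Port test's response time is essentially the time to establish a connection to the target host and port. The following is a minimal illustrative sketch of that idea, not Splunk's actual probe:

```python
import socket
import time

def check_port(host, port, timeout_s=5.0):
    """Attempt a TCP connection and report success plus response time in ms."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            pass
        return {"success": True,
                "response_time_ms": (time.monotonic() - start) * 1000}
    except OSError:
        return {"success": False, "response_time_ms": None}

result = check_port("example.com", 443)
print(result["success"])
```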

Run Results: Failure πŸ”—

For failed Uptime tests, the Run Results page shows the following additional diagnostics to help you understand the root cause of availability issues:

  • Request header

  • Response header

  • Response body

  • Nslookup, a series of Domain Name System (DNS) queries showing the mappings between the domain name and its IP addresses

  • Traceroute, a list of packet transit delays across the IP network

  • Connection log
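
The Nslookup diagnostic is, at its core, a DNS resolution check. A rough Python analogue (for illustration only) resolves a host name to its IP addresses:

```python
import socket

def resolve(hostname):
    """Return the unique IP addresses a host name resolves to,
    similar in spirit to the Nslookup diagnostic."""
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))  # e.g. ['127.0.0.1', '::1']
```

If resolution fails here, the test could never reach the endpoint, which is why DNS output is included in failure diagnostics.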

Splunk RUM integration πŸ”—

Integrate with Splunk RUM so that you can automatically measure Web Vitals metrics against your run results. Web Vitals capture key metrics that affect user experience and assess the overall performance of your site. For more, see Compare run results to Web Vitals with Splunk RUM.

Uptime test metrics πŸ”—

Uptime tests capture a set of key metrics that offer insight into your webpage’s performance at a glance. The following table provides a list of these metrics:

| Metric label | Source metric name | Description |
| --- | --- | --- |
| DNS time | synthetics.dns.time.ms | Time required to resolve a host name from the DNS server. This metric is available for HTTP Uptime tests, but not Port Uptime tests. |
| Time to first byte | synthetics.ttfb.time.ms | Time from the start of the first request until receiving the first byte of the first non-redirect request. 3xx redirects increase this time. This metric is available for HTTP Uptime tests, but not Port Uptime tests. |
| Response time | synthetics.duration.time.ms | Total time, in milliseconds, for the request and response to complete. This metric is also referred to as duration. For HTTP tests, this is the total time from the previous transfer, including name resolution, TCP connection, and so on. For Port tests, this is the approximate total time it took to ping the host. |
| Uptime | synthetics.run.uptime.percent | Percentage uptime of an endpoint in the selected time frame. |
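
Uptime percentage can be understood as the share of successful runs within the window. Splunk computes synthetics.run.uptime.percent for you; the following is only a hypothetical illustration of the idea:

```python
def uptime_percent(results):
    """Compute percentage uptime from a window of run results,
    where each result is True (success) or False (failure)."""
    if not results:
        return None  # no runs in the window
    return 100.0 * sum(results) / len(results)

print(uptime_percent([True, True, True, False]))  # → 75.0
```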

Dimensions πŸ”—

All Splunk Synthetic Monitoring metrics have the following dimensions:

| Dimension | Description |
| --- | --- |
| success | true if the run succeeds; false if it fails. |
| failed | true if the run fails; false if it succeeds. |
| location_id | The ID of the location for this run. |
| test_id | The ID of this test. |
| test_type | The test type; for Uptime tests, either http or port. |
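
These dimensions let you slice the metric time series, for example to chart only successful HTTP runs. In SignalFlow you would express this with a filter; the following plain-Python sketch (names and data shape are illustrative, not the Splunk API) shows the same idea on a list of datapoints:

```python
def filter_by_dimensions(datapoints, **dims):
    """Keep only datapoints whose dimensions match every given value,
    e.g. test_type='http' and success='true'."""
    return [
        dp for dp in datapoints
        if all(dp["dimensions"].get(k) == v for k, v in dims.items())
    ]

datapoints = [
    {"value": 120.0, "dimensions": {"test_type": "http", "success": "true"}},
    {"value": 0.0, "dimensions": {"test_type": "port", "success": "false"}},
]
print(filter_by_dimensions(datapoints, test_type="http", success="true"))
```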