Interpret Browser test results
Every run of a Browser test in Splunk Synthetic Monitoring produces a set of diagnostics that help you understand the performance of your application in real time.
View Browser test history
On the Test History page, view a customizable summary of recent run results so you can assess the performance of your test at a glance.
To open the Test History view for a test, select its row in the Tests list.
You can take the following actions on the Test History page:
Select Edit test to edit your test configuration. Note that if you change the name of your test or the name of a synthetic transaction, it may take up to 20 minutes for the updated name to appear in your charts and detectors.
Select Create detector to create a detector based on your test. See Set up detectors and alerts in Splunk Synthetic Monitoring to learn more.
Select Actions > Pause test to pause your test.
Select Actions > Copy test to make a copy of your test. This opens the New Browser test page with the details of the test pre-filled.
Customize the Performance KPIs chart
The Performance KPIs chart offers a customizable visualization of your recent test results. Use the selectors in the chart to adjust the following settings:
| Option | Default | Description |
|---|---|---|
| Time | Last 8 hours | Choose the amount of time shown in the chart. |
| Interval | Run level | Interval between each pair of data points. When you choose Run level, each data point on the chart corresponds to an actual run of the test. If you choose a larger interval, each data point is an aggregation of the runs in that interval; select an aggregate data point in the chart to zoom in and view the data at a per-run level. |
| Scale | Linear | Choose whether the y-axis has a linear or logarithmic scale. |
| Segment by | Location | Choose whether the data points are segmented by run location, test page, synthetic transaction, or not at all. Choose No segmentation to view data points aggregated from across all locations, pages, and synthetic transactions in your test. Choose Location to compare performance across multiple test locations. Choose Page if your test includes multiple pages and you want to compare performance across pages. Choose Synthetic transaction to compare performance across multiple synthetic transactions in your test. Toggle between these options to see your test data sliced in various ways. |
| Filter | All options selected | If you have enabled segmentation, choose the run locations, pages, or transactions you want to display on the chart. |
| Metrics | Duration | By default, the chart displays the Duration metric. Use the drop-down list to choose the metrics you want to view in the chart. |
View results for a specific run
To open the Run results view for a single run, select a data point in the Performance KPIs chart with the interval set to Run level and the segmentation set to Location.
If you're viewing aggregate data (for example, at a 20-minute interval instead of run level), selecting a data point zooms in to run-level detail, where you can then select a specific run to open its Run results.
You can also select a row in the Recent run results table below the Performance KPIs chart.
Interpret Browser test run results
Every run of a Browser test generates a set of results including a waterfall chart and metrics.
For a single-page Browser test, you get a single waterfall chart with all requests from that run.
For a transactional Browser test, the waterfall chart is divided into sections based on the synthetic transactions in your test. Select the name of a synthetic transaction to expand the list of steps involved in that synthetic transaction. Select the name of a step within a synthetic transaction to expand the list of requests involved in each step.
Waterfall chart
Every run of a Browser test in Splunk Synthetic Monitoring also generates an HTTP Archive (HAR) file that logs the interaction between the test runner and the site being tested. This file records the time it takes for each resource in the site to load.
A waterfall chart is a visual representation of the data in a HAR file. The chart contains a horizontal bar for each resource in the page. To provide detail on these resources, the chart contains the following columns:
| Column name | Description |
|---|---|
| Method | HTTP method for each resource. Most requests to load a page are GET requests, though there might also be POST requests when a user or synthetic test enters data into the page. |
| File | File name of the resource, extracted from the URL where the resource is located. Hover over the cell to view the entire URL. |
| Domain | Domain where the resource is hosted. |
| Size | Uncompressed size of the resource. |
| Status | HTTP response code of the request for the resource. |
| Timeline | Timeline for the page load, shown as colored bars indicating the durations of the parts of each request. |
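These columns map directly onto fields in the underlying HAR file. As a rough illustration only, not a Splunk-provided tool, the following Python sketch reads a HAR file you have downloaded (see the API option in the list below) and prints the same per-resource details. The file name `run.har` is a placeholder.

```python
import json
from urllib.parse import urlparse

# Assumes you have already downloaded the HAR file for a run, for example as run.har.
with open("run.har", encoding="utf-8") as f:
    har = json.load(f)

# The standard HAR structure keeps one entry per requested resource under log.entries.
for entry in har["log"]["entries"]:
    request = entry["request"]
    response = entry["response"]
    url = urlparse(request["url"])
    print(
        request["method"],                    # Method column
        url.path.rsplit("/", 1)[-1] or "/",   # File column: last segment of the URL path
        url.netloc,                           # Domain column
        response["content"].get("size", 0),   # Size column (bytes)
        response["status"],                   # Status column
        f'{entry["time"]:.0f} ms',            # Total request time, shown on the timeline
    )
```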
Using the waterfall chart, you can do the following:
Expand the details in a row to show the request and response headers for that resource.
Hover over a row of the timeline to view a pop-up message with detailed request timings for that resource.
Search resources in a page by keywords in the URL.
Follow a direct link to related back-end spans if the same app is instrumented with APM. See Link Synthetic spans to APM spans.
Use the tabs to filter the waterfall chart by resource type, including JS, CSS, Image, Media, JSON, and XML.
Download the raw HAR file using the API. For a scripted approach, see the sketch after this list.
Show or hide columns in the chart.
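If you want to script the HAR download instead of using the UI, the general pattern looks like the sketch below. Treat it as a hedged example: the endpoint path, test ID, and run ID shown here are hypothetical placeholders, so check the Splunk Synthetic Monitoring API reference for the actual artifact endpoint, and substitute your own realm and access token.

```python
import os
import requests

# Placeholder values; replace them with your own realm, token, and identifiers.
REALM = "us1"                          # your Splunk Observability realm
TOKEN = os.environ["SFX_API_TOKEN"]    # an org access token with API permissions
TEST_ID = "123456"                     # hypothetical Browser test ID
RUN_ID = "abcdef"                      # hypothetical run ID

# Hypothetical artifact URL; the documented path may differ.
url = f"https://api.{REALM}.signalfx.com/v2/synthetics/tests/{TEST_ID}/runs/{RUN_ID}/har"

resp = requests.get(url, headers={"X-SF-TOKEN": TOKEN}, timeout=30)
resp.raise_for_status()

# Save the HAR file locally so you can inspect it or parse it as shown earlier.
with open("run.har", "wb") as f:
    f.write(resp.content)
```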
Filmstrip
Available in Enterprise Edition.
The filmstrip offers a screenshot of site performance at specific intervals on a timeline, so that you can see how the page responds in real time. By default, the filmstrip provides a screenshot and the time in milliseconds for every visual change as the page loads. You can also use the interval selector to view screenshots every 100 milliseconds, 500 milliseconds, or 1 second. For optimal performance, the maximum number of steps is 35. The maximum amount of data per filmstrip is 3 GB; if your filmstrip is larger than 2 GB, the remaining video isn't collected, but all other metrics are still stored.
Video
Available in Enterprise Edition.
In the filmstrip view, you can also view a video of the site loading in real time. This lets you see exactly what a user trying to load your site from the location and device of a particular test run would experience. You can use the Download Video button to download this video as an .mp4 file for later reference.
Browser test metrics
In addition to these diagnostics, every run of a Browser test produces a set of 40+ metrics that offer a picture of website performance. See Browser test metrics for a complete list of these metrics.
Detect and report on your synthetic metrics
To get even more value out of your synthetic metrics, use the metrics engine to create custom metrics, charts, and detectors. See the following links for more information; an example detector request follows the list:
To build charts and dashboards using your metrics, see Dashboards in Splunk Observability Cloud.
To create static threshold detectors natively in Splunk Synthetic Monitoring, see Set up detectors and alerts in Splunk Synthetic Monitoring.
To build more advanced detectors using the Splunk Observability Cloud metrics engine, see Introduction to alerts and detectors in Splunk Observability Cloud.
To learn more about metrics in Splunk Observability Cloud, see Metrics, data points, and metric time series in Splunk Observability Cloud.
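As an illustration of the detector workflow, the sketch below creates a simple static-threshold detector through the Splunk Observability Cloud REST API. Treat it as a hedged example rather than the documented procedure: the metric name `synthetics.run.duration.time.ms`, the `test` and `location_id` dimension names, and the threshold are assumptions, so substitute the actual metric and dimension names listed in Browser test metrics.

```python
import os
import requests

REALM = "us1"                          # your Splunk Observability realm
TOKEN = os.environ["SFX_API_TOKEN"]    # an org access token with API permissions

# SignalFlow program: alert when average run duration stays above 10 seconds for 15 minutes.
# The metric and dimension names are assumptions; check Browser test metrics for the real ones.
program_text = """
duration = data('synthetics.run.duration.time.ms', filter=filter('test', 'My Browser test')).mean(by=['location_id']).publish(label='duration')
detect(when(duration > 10000, lasting='15m')).publish('Slow Browser test runs')
""".strip()

detector = {
    "name": "Slow Browser test runs",
    "programText": program_text,
    "rules": [
        {
            "detectLabel": "Slow Browser test runs",  # must match the publish() label in detect()
            "severity": "Warning",
        }
    ],
}

resp = requests.post(
    f"https://api.{REALM}.signalfx.com/v2/detector",
    headers={"X-SF-TOKEN": TOKEN, "Content-Type": "application/json"},
    json=detector,
    timeout=30,
)
resp.raise_for_status()
print("Created detector", resp.json().get("id"))
```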
(Optional) Splunk RUM integration
Splunk Synthetic Monitoring automatically collects web vitals for Browser tests. If you also want to measure web vital metrics against your run results, integrate with Splunk RUM. Web vitals capture key metrics that affect user experience and assess the overall performance of your site. For more information, see Compare run results to Web Vitals with Splunk RUM.