
API test metrics

API Tests capture a set of key metrics that offer insight into your API’s performance at a glance.

Dimensions

Splunk Synthetic Monitoring metrics have the following dimensions:

| Dimension | Description |
| --- | --- |
| `success` | `true` if the run succeeds; `false` if it fails |
| `failed` | `true` if the run fails; `false` if it succeeds |
| `location_id` | The ID of the location for this run |
| `test_id` | The ID of this test |
| `test_type` | The test type; for an API test this is set to `api` |

Request-level metrics

The following metrics are collected for each request.

Request-level metrics include an additional request_number dimension that refers to the position of the request within the test. The position of the first request in the test is 0, the second request has position 1, and so on. If you choose a request-level metric in the Performance KPIs chart or in a detector without specifying a request with the request_number dimension, the metric value is aggregated across all requests.
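To make the aggregation behavior concrete, here is a minimal Python sketch of how a request-level metric resolves with and without the `request_number` dimension. The data-point shape and the field name `dns_time_ms` are illustrative only, and the sketch assumes a mean aggregation across requests for the unfiltered case:

```python
# Hypothetical per-request data points for a single run, keyed by the
# request_number dimension (field names are illustrative, not the real schema).
datapoints = [
    {"request_number": 0, "dns_time_ms": 12.0},  # first request in the test
    {"request_number": 1, "dns_time_ms": 8.0},   # second request
    {"request_number": 2, "dns_time_ms": 10.0},  # third request
]

def metric_value(points, request_number=None):
    """Return the metric for one request when request_number is given,
    or an aggregate (mean, for illustration) across all requests otherwise."""
    if request_number is not None:
        return next(p["dns_time_ms"] for p in points
                    if p["request_number"] == request_number)
    return sum(p["dns_time_ms"] for p in points) / len(points)

print(metric_value(datapoints, request_number=0))  # first request only -> 12.0
print(metric_value(datapoints))                    # aggregated across requests -> 10.0
```

Filtering on `request_number` isolates one position in the test; omitting it folds every request's value into a single aggregated series.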

| Metric name | Description |
| --- | --- |
| DNS time | Time required to resolve a host name through the DNS server. Name resolving is the process by which libcurl translates a host name into an IP address. |
| Time to first byte (TTFB) | Time from the start of the first request until the first byte of the first non-redirect request is received. Each 3xx redirect increases this time. |
| Duration | Total time for the request and response to complete. |
| Receive time | Time taken to receive the response. |
| TCP connect time | Time it takes to connect to the remote host or proxy. |
| TLS time | Time from start to finish of the SSL/SSH handshake. |
| Start transfer time | Time elapsed from the start of the transfer until libcurl receives the first byte. |
| Request size | Number of bytes in the HTTP request. |

Run-level metrics

Each occurrence of a test from a particular device and location at a specific time is called a run. These metrics are calculated based on each run:

| Metric name | Description |
| --- | --- |
| Duration | Total duration of the run. |
| Uptime | The percentage of non-failed test runs. Uptime is calculated by taking the average score of all runs in the selected time frame, where a successful run receives a score of 100 and a failed run receives a score of 0. |
| Downtime | The percentage of failed runs within the selected time frame. Downtime is calculated by taking the average score of all runs in the selected time frame, where a failed run receives a score of 100 and a successful run receives a score of 0. |
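The uptime and downtime calculations described above can be sketched directly, since both are averages over per-run scores. The function names below are illustrative, not part of any Splunk API:

```python
def uptime_percent(run_results):
    """Uptime: average score over runs, where a successful run
    scores 100 and a failed run scores 0."""
    scores = [100 if ok else 0 for ok in run_results]
    return sum(scores) / len(scores)

def downtime_percent(run_results):
    """Downtime: average score over runs, where a failed run
    scores 100 and a successful run scores 0."""
    scores = [0 if ok else 100 for ok in run_results]
    return sum(scores) / len(scores)

# Three successful runs and one failure in the selected time frame:
runs = [True, True, True, False]
print(uptime_percent(runs))    # -> 75.0
print(downtime_percent(runs))  # -> 25.0
```

Note that by this definition uptime and downtime always sum to 100 over the same set of runs.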