
Set up a Browser test

Use a Browser test to monitor the user experience for a single page or a multi-step user flow by running a synthetic test of the URLs you provide. Use this type of test to monitor conversion paths or any path that requires multiple steps or runs JavaScript. For an example, see Scenario: Monitor a multi-step workflow using a Browser test.

For each page checked in a Browser test, Splunk Synthetic Monitoring captures an HTTP Archive (HAR) file, represented in a waterfall chart, which illustrates the performance of specific resources within the page. Browser tests also capture a set of 40+ metrics. See Waterfall chart and Browser test metrics to learn more.

Note

If the site or application you are monitoring uses allow lists or block lists for visitors, or an analytics tool to measure traffic, check that it's configured to accommodate traffic from Splunk Synthetic Monitoring. See Get your site ready to run synthetic tests for instructions.

Set up a Browser test

For an optimal experience, Browser tests use a stable version of Google Chrome (116.0.5845.96-1) to simulate user activity.

Follow these steps to set up a Browser test:

  1. From the landing page of Splunk Observability Cloud, navigate to Splunk Synthetic Monitoring.

  2. Under Tests, select Add New Test and select Browser Test from the drop-down list. The test creation view opens.

  3. In the Name field, enter a name for your test.

  4. To add steps and synthetic transactions to your Browser test, select Edit steps or synthetic transactions. See Add synthetic transactions to your Browser Test to learn more.

  5. As you build your test, you can use Try now to check that the configuration of your test is valid. Try now results are ephemeral and don't impact persisted run metrics. For more, see Validate your test configuration with try now.

  6. (Optional) Add a wait time before a step executes. See Wait times.

  7. (Optional) Turn on automatic test retry so that a test that initially fails runs again.

  8. Save your test.

Customize your test details

Use these steps to customize your test configuration and finish creating your test:

  1. In the Locations field, enter the locations from which you want to test the URL. You can select one or multiple locations.

  2. In the Device Type field, use the list to select the device from which you'd like to conduct the test.

  3. In the Frequency field, select your desired test frequency from the list.

  4. (Optional) Use the Round Robin selector to switch between options. Enabling Round Robin means your test cycles through your selected locations one at a time. Disabling Round Robin runs the test from all selected locations concurrently at your selected frequency.

  5. If you want to receive alerts from this test, select + Create detector to set up a detector on the test. Use the dialog box to customize your detector.

  6. Select Submit. This redirects you to the Test History page for your new test. If you've just created the test, allow at least one test frequency interval for your test to begin collecting synthetic data.

  7. (Optional) Select Edit test or the three-dot Actions menu in the row for your test to edit, pause, duplicate, or delete this test.
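The Round Robin option described in step 4 can be sketched as a difference in scheduling. This is a minimal illustration with hypothetical location names, not the product's actual scheduler:

```python
from itertools import cycle

def round_robin_runs(locations, intervals):
    """Round Robin on: each frequency interval runs the test from one location, cycling."""
    loc_cycle = cycle(locations)
    return [[next(loc_cycle)] for _ in range(intervals)]

def concurrent_runs(locations, intervals):
    """Round Robin off: each frequency interval runs the test from all locations at once."""
    return [list(locations) for _ in range(intervals)]

locations = ["us-east", "eu-west", "ap-south"]  # hypothetical location IDs
print(round_robin_runs(locations, 4))  # one location per interval, wrapping around
print(concurrent_runs(locations, 2))   # every location, every interval
```

Either way, every selected location is eventually exercised; Round Robin spreads the load across intervals instead of running all locations in parallel.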

See also

Import a JSON file generated from Google Chrome Recorder

To simplify the test creation process, make a recording using Google Chrome Recorder, then import the JSON file to Splunk Synthetic Monitoring to automatically import the steps in the workflow instead of adding each interaction you want to track individually. Recordings are especially helpful for complex user flows or tests that have a large number of steps.

Create a Google Chrome Recorder JSON file

For steps on how to make a Google Chrome recording, see Record, replay, and measure user flows in the Google Chrome Developers documentation.

Requirements

  • In Google Chrome Recorder, select either CSS or XPATH for Selector type to record.

  • Browser tests run in one browser tab only. Your recording can't span multiple tabs.

Import a Google Chrome Recorder JSON file

Note

Recordings from Google Chrome Recorder include the specific viewport size of the browser window used in the recording. This recorded viewport is not imported into the Synthetics Browser test. Check that the device selected for the Synthetics Browser test accurately represents the viewport size used by the recorded browser window.

Follow these steps to import a JSON file from Google Chrome Recorder to a new or existing Browser test.

  1. In Splunk Synthetic Monitoring, select Edit on an existing Browser test to open the test configuration page, or create a new test.

  2. Select Import.

  3. Upload the Google Chrome Recorder JSON file.

  4. If a step is not supported, you need to edit or delete the step in the test configuration page.

  5. (Optional) Add a name to each step.

  6. Save your changes.

Troubleshoot unsupported steps

If your recording contains unsupported steps, you need to edit each one to reformat it into a supported Synthetic Browser step type. The following examples show how common Google Chrome Recorder step types appear as code snippets. These examples use Buttercup Games, a fictitious game company.

{
  // Google Chrome Recorder: navigate step
  "type": "navigate",
  "url": "www.buttercupgames.com",
  "assertedEvents": [
    {
      "type": "navigation",
      "url": "www.buttercupgames.com",
      "title": "Buttercup Games"
    }
  ]
}
{
  // Google Chrome Recorder: click step with a navigation event
  "type": "click",
  "target": "main",
  "selectors": [
    [
      "div:nth-of-type(2) > div:nth-of-type(2) a > div"
    ],
    [
      "xpath//html/body/main/div/div/div[2]/div[2]/div/a/div"
    ]
  ],
  "offsetY": 211,
  "offsetX": 164,
  "assertedEvents": [
    {
      "type": "navigation",
      "url": "www.buttercupgames.com/product/example",
      "title": "Buttercup Games"
    }
  ]
}
{
  // Google Chrome Recorder: click step without asserted events
  "type": "click",
  "target": "main",
  "selectors": [
    [
      "div:nth-of-type(2) > div:nth-of-type(2) a > div"
    ],
    [
      "xpath//html/body/main/div/div/div[2]/div[2]/div/a/div"
    ]
  ],
  "offsetY": 211,
  "offsetX": 164,
  "assertedEvents": []
}
{
  // Google Chrome Recorder: change step
  "type": "change",
  "value": "5",
  "selectors": [
    [
      "#quantity"
    ],
    [
      "xpath///*[@id=\"quantity\"]"
    ]
  ],
  "target": "main"
}
{
  // Google Chrome Recorder: waitForElement step
  "type": "waitForElement",
  "selectors": [
    [
      "body",
      "#homepage_example",
      ".css-4t2fjl",
      ".eanm77i0"
    ]
  ]
}
{
  // Google Chrome Recorder: waitForElement step with a visibility condition
  "type": "waitForElement",
  "selectors": [
    [
      "body",
      "#homepage_product_brand-example",
      ".css-4t2fjl",
      ".eanm77i0"
    ]
  ],
  "visible": false
}
{
  // Google Chrome Recorder: customStep
  "type": "customStep",
  "timeout": 5000,
  "target": "main",
  "name": "customParam",
  "parameters": {}
}
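Step 4 of the import procedure above asks you to find and fix unsupported steps. A small sketch of that triage, where the set of supported step types is an assumption for illustration, not the product's actual list:

```python
# Hypothetical set of step types a Browser test accepts; real support may differ.
SUPPORTED_TYPES = {"navigate", "click", "change", "waitForElement"}

def unsupported_steps(recording):
    """Return the index and type of each recorder step that needs editing or removal."""
    return [
        (i, step["type"])
        for i, step in enumerate(recording["steps"])
        if step["type"] not in SUPPORTED_TYPES
    ]

recording = {"steps": [
    {"type": "navigate", "url": "www.buttercupgames.com"},
    {"type": "customStep", "name": "customParam"},
]}
print(unsupported_steps(recording))  # the customStep at index 1 needs attention
```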

View your Browser test

Now that you've created and saved a test, check whether it's collecting data as expected:

  1. From the Tests list, select the three-dot Actions menu and select the Play arrow icon to manually trigger a live run of the test, or wait at least one test frequency interval so that the test has time to run and collect data.

  2. Select the test you're interested in to open the Test history view, where you can view visualizations of recent test results and metrics.

  3. See Interpret Browser test results to learn more about Browser test results.

Edit your Browser test

To edit your Browser test, do the following:

  1. Select the row for the test you want to edit in the Tests list to open the Test history view.

  2. Select Edit test to edit your test configuration.

If you change the name of your test or the name of a synthetic transaction, it may take up to 20 minutes for the updated name to appear in your charts and detectors.

Advanced settings for Browser tests

There are many reasons why you might want to configure advanced settings for your synthetics tests. Here are a few:

  • Accessing a site with a modal that appears randomly and interrupts the flow of the test. For example, a marketing modal might prompt a user to sign up for a rewards program. To circumvent this issue, you can set a cookie to stop the popup modal from appearing and interfering with your test.

  • Running a test on a site that requires users to log in to access the site.

  • Specifying the type of device on which you want to run your test by setting the User-Agent header on requests.

  • Testing out a CDN. For example, you might want to load the HTML page in the browser, but rewrite the hosts for some or all requests to a new host.

  • Filtering out requests from analytics on the back end by sending a specific header in the requests.

  • Running a test on a pre-production site that has a self-signed certificate.

Collect interactive metrics

Interactive metrics are collected by default for each page in the test flow, but this can result in longer run durations depending on how long it takes for the page to become fully interactive. You can turn off interactive metrics in advanced settings to speed up run durations and see results faster. If you turn off interactive metrics, the following metrics might be missing from your test:

  • First CPU idle: Time until the page is minimally interactive and responds to user input.

  • Time to interactive: This measures the time until the page responds to user input quickly. It is used to identify when the page is actually usable, not just when the page load looks complete.

  • Lighthouse score: A weighted aggregation of several Browser test metric values calculated using v10 of the Lighthouse desktop scoring algorithm. See https://developer.chrome.com/docs/lighthouse/performance/performance-scoring#lighthouse_10 in the Google developer documentation to learn more about Lighthouse scoring.
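As a rough sketch of how a weighted aggregation like the Lighthouse score works: each metric receives an individual score from 0 to 100, and the overall score is their weighted sum. The weights below follow the published Lighthouse v10 weighting; the per-metric scores are hypothetical:

```python
# Lighthouse v10 weights: FCP 10%, Speed Index 10%, LCP 25%, TBT 30%, CLS 25%.
WEIGHTS = {"fcp": 0.10, "speed_index": 0.10, "lcp": 0.25, "tbt": 0.30, "cls": 0.25}

def lighthouse_score(metric_scores):
    """Aggregate individual 0-100 metric scores into one weighted 0-100 score."""
    return round(sum(WEIGHTS[m] * metric_scores[m] for m in WEIGHTS))

# Hypothetical per-metric scores for one page.
scores = {"fcp": 90, "speed_index": 85, "lcp": 70, "tbt": 90, "cls": 100}
print(lighthouse_score(scores))  # 87
```

Because TBT and LCP carry the largest weights, slow main-thread work and late largest-content paints drag the overall score down the most.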

Auto-retry

Auto-retry runs a test again automatically, without user intervention, if the first run fails. It's a best practice to turn on auto-retry to reduce unnecessary failures from temporary interruptions like network issues, timeouts, or intermittent issues on your site. Auto-retry runs don't impact subscription usage; only the completed run result counts toward your subscription usage. Auto-retry requires runner version 0.9.29 or later.
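The auto-retry behavior can be sketched as follows; run_test here is a hypothetical stand-in for an actual test run, not a real API:

```python
def run_with_auto_retry(run_test, auto_retry=True):
    """Run a test once; if auto-retry is on and the first run fails, run it again.
    Only the final, completed result counts toward subscription usage."""
    passed = run_test()
    if auto_retry and not passed:
        passed = run_test()  # the retry run itself does not count toward usage
    return passed

outcomes = iter([False, True])  # simulate a transient failure, then success
print(run_with_auto_retry(lambda: next(outcomes)))  # prints True
```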

TLS/SSL validation

When activated, this setting enforces validation of TLS/SSL certificates, flagging certificates that are expired, have an invalid hostname, or come from an untrusted issuer.

Note

When testing pre-production environments that have self-signed or invalid certificates, it's best to leave the TLS/SSL validation feature deactivated.

Authentication

Add credentials to authenticate with sites that require additional security protocols, for example from within a corporate network. To use Authentication, a username and password need to be provided. The username can be entered as plain text or be defined by a global variable, but the password must be defined using a global variable. It is recommended to use a concealed global variable for your password to create an additional layer of security for your credentials. For more, see What happens when you conceal a global variable?.

When executing the browser test, the Chrome browser is configured with the credentials defined in the test configuration. Authentication is not integrated at the OS level, it is only configured in the browser. At this time, Chrome supports the following authentication protocols:

  • Basic Authentication

  • NTLM

  • Kerberos

  • Digest

More details on the authentication schemes Chrome supports are available in the Google Chrome documentation.
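Of these protocols, Basic Authentication is the simplest to illustrate: the browser base64-encodes username:password into an Authorization header. A minimal sketch with hypothetical credentials:

```python
import base64

def basic_auth_header(username, password):
    """Build the Authorization header value that Basic Authentication sends."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("synthetic-user", "example-password"))  # hypothetical credentials
```

Because the credentials are only encoded, not encrypted, this is also why storing the password as a concealed global variable matters.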

Custom headers

Specify custom headers to send with each request. For example, you can add a header in your request to filter out data from back-end analytics. To add a custom header, a name and value are required. You may optionally provide a domain to scope the header to. If a domain is specified, only requests sent to that domain will include the header. Otherwise the header will be included in requests to all domains.
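The domain-scoping rule can be sketched as follows. The header names and domains are hypothetical, and this simplified check requires an exact hostname match:

```python
from urllib.parse import urlparse

def headers_for_request(url, custom_headers):
    """Return the custom headers that apply to a request.
    Each header is (name, value, domain); domain=None applies to all domains."""
    host = urlparse(url).hostname
    return {
        name: value
        for name, value, domain in custom_headers
        if domain is None or host == domain
    }

# Hypothetical headers: one global, one scoped to a single domain.
headers = [
    ("X-Synthetic-Test", "true", None),
    ("X-Debug", "1", "api.example.com"),
]
print(headers_for_request("https://www.example.com/page", headers))
print(headers_for_request("https://api.example.com/v1", headers))
```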

You can also use headers to change the user agent. The default user agent is the one given for the selected device, which updates whenever the Chrome version changes for synthetic runners. To customize the user agent header for all domains, turn off the Use device default toggle next to the user agent field, then enter the new value. To change the user agent for specific domains, add a custom header and provide the domain you want to apply that user agent to. If a domain is not specified, the top-level user agent setting takes precedence.

Custom headers can be used to set cookies, but we recommend using the Cookies settings outlined in the following section instead.

Cookies

Set cookies in the browser before the test starts. For example, you can set a cookie that your site responds to in order to stop a popup modal from randomly appearing and interfering with your test flow.

To set a cookie, a key and value must be provided. A domain and path may optionally be provided to apply the cookie only to requests to the given domain and path. By default, the cookie applies to the domain of the starting URL of the check and all paths on that domain. Splunk Synthetic Monitoring uses the public suffix list to determine the domain.
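The scoping rules can be sketched as a matching check. This is a simplified illustration: real cookie matching also consults the public suffix list, which is omitted here:

```python
from urllib.parse import urlparse

def cookie_applies(request_url, cookie_domain, cookie_path="/"):
    """Check whether a cookie scoped to (domain, path) applies to a request.
    Simplified: subdomains of the cookie domain match, as do subpaths."""
    parsed = urlparse(request_url)
    host, path = parsed.hostname, parsed.path or "/"
    domain_ok = host == cookie_domain or host.endswith("." + cookie_domain)
    path_ok = path == cookie_path or path.startswith(cookie_path.rstrip("/") + "/")
    return domain_ok and path_ok

print(cookie_applies("https://shop.example.com/cart", "example.com"))         # True
print(cookie_applies("https://example.com/admin/x", "example.com", "/cart"))  # False
```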

Host overrides

Add host override rules to reroute requests from one host to another. For example, you can create a host override to test an existing production site against page resources loaded from a development site or a specific CDN edge node.

You can also indicate whether to retain the original Host header by activating Keep host headers. If activated, the original request's headers remain intact (recommended). If deactivated, the Host header changes to the new host, potentially leading to an internal redirect (307). This setting is activated by default.
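A host override plus the Keep host headers option can be sketched as a URL rewrite. The override mapping below is hypothetical:

```python
from urllib.parse import urlparse, urlunparse

def apply_host_override(url, overrides, keep_host_header=True):
    """Reroute a request to an overridden host, optionally keeping the
    original Host header intact (the recommended default)."""
    parsed = urlparse(url)
    new_host = overrides.get(parsed.hostname)
    if new_host is None:
        return url, {}  # no override rule matches this host
    rewritten = urlunparse(parsed._replace(netloc=new_host))
    headers = {"Host": parsed.hostname} if keep_host_header else {}
    return rewritten, headers

# Hypothetical override: serve production pages from a development host.
url, headers = apply_host_override(
    "https://www.example.com/index.html", {"www.example.com": "dev.example.com"}
)
print(url, headers)
```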

Wait times

Optimize your test coverage by adding custom wait times to capture longer page loads and improve the accuracy of run results. Applications with long load times can cause a Browser test to fail. If you know that there are certain steps in a workflow that take longer than 10 seconds, add a custom wait time to your Browser test.

  • Wait times are available with Browser tests only.

  • The maximum custom wait time for each test is 200 seconds.

Follow these steps to configure custom wait times for your Browser tests:

  1. In Splunk Synthetic Monitoring, select Edit on the Browser test to open the configuration panel.

  2. Select New step > Wait from the step type drop-down list.

  3. Add a name and the wait time in ms.

  4. When you finish instrumenting your test, save the workflow: Return to test > Save.
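Step 3 takes the wait time in milliseconds, and the section above caps custom waits at 200 seconds per test. A small sketch of that conversion and limit check (the function name is illustrative):

```python
MAX_WAIT_MS = 200 * 1000  # 200-second maximum custom wait time

def wait_time_ms(seconds):
    """Convert a wait in seconds to the milliseconds the Wait step expects,
    rejecting values over the documented 200-second maximum."""
    ms = int(seconds * 1000)
    if ms > MAX_WAIT_MS:
        raise ValueError(f"wait of {ms} ms exceeds the {MAX_WAIT_MS} ms limit")
    return ms

print(wait_time_ms(10))  # 10000 ms for a 10-second wait
```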

The following image shows how to configure a test to go to a URL, wait for 10 seconds, then log in.

This image shows a browser test with three steps: go to URL, wait 10 seconds, then log in.

Limits and defaults for configurable wait times

Here are the limits and defaults for each type of wait time. The maximum duration of a run is 30 minutes, after which it times out.

Limits:

  • Assert steps: 90 seconds

  • Wait for navigation: 20 seconds

Defaults:

  • Wait time for assert: 10 seconds

  • Wait for navigation: 2 seconds

Chrome flags

Google Chrome flags are a helpful tool for troubleshooting. Activate browser features that are not available by default to test custom browser configurations and specialized use cases, like a proxy server.

For more, see What are Chrome flags? in the Google Chrome Developer guide.

Note: Global variables are incompatible with Chrome flags.

These are the available flags:

  • --disable-http2: Requests are made using HTTP/1.1 instead of HTTP/2.0. The HTTP version is viewable in the HAR file.

  • --disable-quic: Deactivates QUIC, which also deactivates HTTP/3.

  • --disable-web-security: Deactivates enforcement of the same-origin policy.

  • --unsafely-treat-insecure-origin-as-secure=http://a.test,http://b.test: Treats the given insecure origins as secure. Multiple origins can be supplied in a comma-separated list.

  • --proxy-bypass-list="*.google.com;*foo.com;127.0.0.1:8080": Bypasses any specified proxy for the given semicolon-separated list of hosts. This flag must be used with --proxy-server.

  • --proxy-server="foopy:8080": Uses a specified proxy server, overriding default settings.

  • --no-proxy-server: Doesn't use a proxy server; always makes direct connections. This flag overrides any other proxy server flags that you may have set up in a private location.

Custom properties

Add custom properties in advanced settings on the test creation page. Use key-value pairs to create custom properties for filtering and grouping dashboards and charts and for creating alerts. A list of suggested custom properties is available for each test based on the tags associated with your test, for example: env:test, role:developer, product:rum. When you use multiple key-value pairs, the results are combined with AND logic. In this example, the results show all tests for the RUM product with a developer role in the test environment.

This image shows two custom property key value pairs, env:prod and role:developer.

Custom properties are single-valued and don't support multiple values, like region:eu, us. Each key can be used only once per test. For example, you can have env1:test and env:test in the same test, but you can't have both env:test and env:prod.

Key requirements:

  • Keys must start with an uppercase or lowercase letter. Keys can't start with special characters or numbers.

  • The remainder of the key can contain letters, numbers, underscores, and hyphens.

  • Keys can't be named test_id or test.

  • Key size can't exceed 128 characters.
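The key requirements above can be encoded as a validation check. The regular expression is an assumption built from the documented rules, not taken from the product:

```python
import re

# Starts with a letter; remainder may contain letters, digits, underscores, hyphens.
KEY_PATTERN = re.compile(r"^[A-Za-z][A-Za-z0-9_-]*$")
RESERVED = {"test_id", "test"}  # names that keys can't use

def valid_custom_property_key(key):
    """Check a custom property key against the documented requirements."""
    return (
        bool(KEY_PATTERN.match(key))
        and key not in RESERVED
        and len(key) <= 128
    )

print(valid_custom_property_key("env"))      # True
print(valid_custom_property_key("1env"))     # False: starts with a number
print(valid_custom_property_key("test_id"))  # False: reserved name
```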

See Custom properties.

Example

For an example, see Scenario: Monitor a multi-step workflow using a Browser test.

This page was last updated on Dec 09, 2024.