
Private locations

A private location is a name you create in Splunk Synthetic Monitoring to represent a custom location from which you can run synthetic tests. You can then specify that name in a synthetic test's Locations field. To run synthetic tests from a private location, you must also set up one or more private runners within it to do the actual communication with your test targets and with Splunk Synthetic Monitoring.

Use cases for private locations

Private locations provide a way for you to find, fix, and prevent performance problems in internal applications in any environment, inside or outside of your firewalls. You can use private locations to run tests earlier in your development cycle against internal sites or applications that aren't available to the public. You can also use private locations to test public endpoints from locations that aren't included in the list of Splunk Synthetic Monitoring public locations.

To summarize, here are sample use cases for private locations:

  • Test private applications that aren't exposed to the public.

  • Test pre-production applications that don't have public staging sites.

  • Gain more flexibility by giving Splunk Synthetic Monitoring access to your applications.

  • Test from locations not currently supported by Splunk Synthetic Monitoring's public locations.

Set up a new private location

Follow these steps to set up a new private location:

  1. In Splunk Synthetic Monitoring, select the settings gear icon, then Private locations.

  2. Select Create private location and add a name.

  3. Follow the steps in the guided setup to set up one or more private runners for that private location.

  4. Save your private location.

What you can do with your private location ID

Each private location has a corresponding private location ID. With this ID, you can:

  • Build charts or dashboards

  • Search for metrics by private location

  • Refer to the private location when you interact with the Splunk Synthetic Monitoring APIs

Manage your tokens

It is your responsibility to update and manage your tokens. Tokens are valid for one year, and you are not notified when a token is about to expire, so consider creating a second token to provide coverage before your first token expires. For added security, create a secret environment variable for your token in Docker.
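For example, one way to keep the token off the command line and out of your shell history is to pass it to Docker from an environment file instead of an inline -e flag. This is a minimal sketch; the file name runner.env is an assumption, and you should restrict the file's permissions:

# runner.env contains a single line: RUNNER_TOKEN=YOUR_TOKEN_HERE
chmod 600 runner.env
docker run --cap-add NET_ADMIN --env-file runner.env \
quay.io/signalfx/splunk-synthetics-runner:latest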

Assess the health of a private location

A private location's health depends on three factors:

  • Active runner: At least one runner is actively checking in. If no runners are checking in, set up new runners for the private location.

  • Used in tests: The private location is currently being used in one or more tests. Before you can delete a private location, you must first remove it from all tests.

  • Clear queue: The queue for the location is cleared periodically and is not backed up. If the queue is backed up, add new runners to the private location.

Troubleshoot queue length and latency

If both the queue latency and length increase over time, then add more runners to improve performance.

If your queue latency increases but your queue length doesn't, try these troubleshooting methods:

  • Check to see if a step is delaying the rest of the test.

  • Investigate whether you have sufficient resources on your machines to run the private location runners.

The maximum number of runs in a queue is 100,000.

Any runs older than one hour are removed from the queue.

Private runners

A private runner queries Splunk Synthetic Monitoring for tests configured to run in its private location, performs the test's steps against your private target, and reports the results back to Splunk Synthetic Monitoring. Because a private runner must have access to your private target, it is a Docker image that you deploy on your own infrastructure, within your own internal network. See Private locations.

If you deploy multiple private runners on behalf of a single private location, they can process that location's test queue faster. Splunk Synthetic Monitoring doesn't track how many private runners you've deployed for a given private location. It's up to you to manage your own fleet of private runners.
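Because every runner in a private location authenticates with that location's token, adding capacity can be as simple as starting more containers with the same token. The following is a minimal sketch for a single Docker host; the container names and the count of three are arbitrary:

for i in 1 2 3; do
  docker run -d --name synthetics-runner-$i --cap-add NET_ADMIN \
  -e "RUNNER_TOKEN=YOUR_TOKEN_HERE" \
  quay.io/signalfx/splunk-synthetics-runner:latest
done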

Requirements for private runners

A private runner has the following requirements:

Docker

  • The Docker host must have the ifb kernel module installed (see the check that follows these requirements).

  • The Docker container needs outbound internet access; it doesn't need inbound access.

Allowlist

  • runner.<realm>.signalfx.com

  • *.signalfx.com

  • *.amazonaws.com

  • quay.io/signalfx

  • quay.io/v2

Operating system

Linux, Windows, or macOS. For optimal performance when running browser tests, use:

  • Linux

  • 2.3 GHz dual-core Intel Xeon (or equivalent) processor

  • 8 GB RAM, 2 cores
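To confirm that the ifb kernel module is available on the Docker host, you can run a check like the following. This is a minimal sketch for a typical Linux host; how you load and persist kernel modules varies by distribution:

lsmod | grep ifb || sudo modprobe ifb
# Optionally load the module at boot on systemd-based distributions:
echo ifb | sudo tee /etc/modules-load.d/ifb.conf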

Supported platforms

  • Docker

  • Docker Compose

  • AWS ECS

  • Docker for Mac or Windows

  • Helm

  • Kubernetes

  • OpenShift

  • Podman

  • Podman for macOS or Windows

  • ARM64 machines on AWS and GCP

Browser compatibility

The private runner uses a headless browser to run the browser tests. The private runner Docker image for AMD64 architecture contains the Chrome browser, and the private runner Docker image for ARM64 architecture contains the Chromium browser, because Chrome is unavailable for ARM64. Chrome and Chromium versions might not be the same.

To find the browser type and version, look at the browser-type and browser-version labels in the Docker image. You can find these labels either on quay.io or in the output of the following commands:

docker pull quay.io/signalfx/splunk-synthetics-runner:latest
docker inspect -f '{{ index .Config.Labels "browser-type" }}' quay.io/signalfx/splunk-synthetics-runner:latest
docker inspect -f '{{ index .Config.Labels "browser-version" }}' quay.io/signalfx/splunk-synthetics-runner:latest

Required container permissions

This section outlines the minimum requirements for the container on which the private runner Docker image is running.

Minimum container permissions

The container must have the following permissions at a minimum. The private runner Docker image already has these permissions set up by default, so if you don't change the container runtime user, you don't need to take any action:

  • Read and write access to the application's home or working directory (usually /home/pptruser/)

  • Read and write access to /tmp (the system grants this permission to all users by default)

Note

Don't set the container's root filesystem to read-only (the readOnlyRootFilesystem flag), because this prevents the container from starting up.

Optional permissions for custom CA certificates

If the tests you send to the private runner need to use a custom CA certificate for API and uptime tests, the container must support privilege escalation in an init container, which adds the custom certificate to the runner's system CA certificates. To allow privilege escalation, set containers.securityContext.allowPrivilegeEscalation to true:

containers:
  securityContext:
    allowPrivilegeEscalation: true

To verify that the container allows privilege escalation, check whether it can run the sudo command.
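For example, one quick check from the host is to run a no-op sudo command inside the container. This is a minimal sketch; synthetics-runner is a placeholder container name:

docker exec synthetics-runner sudo -n true && echo "privilege escalation is available"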

Optional permissions for network shaping

If the tests you send to the private runner need to simulate different network throughputs, the Docker container must support privilege escalation along with the NET_ADMIN capability for network shaping:

containers:
  securityContext:
    capabilities:
      add:
      - NET_ADMIN
    allowPrivilegeEscalation: true

If you see the warning message "sudo: unable to send audit message: Operation not permitted", also add the AUDIT_WRITE capability:

containers:
  securityContext:
    capabilities:
      add:
      - NET_ADMIN
      - AUDIT_WRITE
    allowPrivilegeEscalation: true

Required container resources

The Docker container on which you deploy the private runner must have the following resources.

Increase the resources allocated to the private runner's pod when you see:

  • Browser crashes or errors

  • Log messages indicating that there are errors spawning the browser

Minimum container resources

  • CPUs: 1

  • Memory: 2 GB
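If you run the private runner directly with Docker, you can reserve these minimums with standard Docker resource flags. This is a minimal sketch; adjust the values to your own sizing:

docker run --cpus=1 --memory=2g --cap-add NET_ADMIN \
-e "RUNNER_TOKEN=YOUR_TOKEN_HERE" \
quay.io/signalfx/splunk-synthetics-runner:latest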

Resource-intensive tests

The CPU and memory required depend heavily on the test being executed. If the tests you send to the private runner are resource-intensive browser tests, such as those listed below, the Docker container must have at least the recommended resources instead of the minimum resources.

  • Loading complex webpages with high resolution images or complex JavaScript

  • Media playback such as video streaming

  • Heavy JavaScript execution, such as extensive DOM manipulation or memory-intensive operations

  • Loading and interacting with large data sets (for example, sorting, filtering, or searching operations)

  • Uploading or downloading large files

These are only some examples of resource-intensive browser tests; your test cases may vary.

Private runners on Docker

Install a private runner

Start your container using the docker run command and the following flags:

docker run --cap-add NET_ADMIN \
-e "RUNNER_TOKEN=YOUR_TOKEN_HERE" \
quay.io/signalfx/splunk-synthetics-runner:latest

Upgrade a private runner

Manual upgrades

To upgrade the Docker image manually, follow these steps:

  1. Pull the latest image:

    docker pull quay.io/signalfx/splunk-synthetics-runner:latest
    
  2. Stop the running container:

    docker stop <container_id_or_name>
    
  3. Remove the old container:

    docker rm <container_id_or_name>
    
  4. Start a new container with the updated image:

    docker run --cap-add NET_ADMIN -e "RUNNER_TOKEN=YOUR_TOKEN_HERE" quay.io/signalfx/splunk-synthetics-runner:latest
    
Automatic upgrades

You can automate the upgrade of the private location Docker images by using an automated upgrade solution such as Watchtower, a third-party open-source Docker container that connects to remote Docker repositories on a schedule and checks for updates. This section explains how to use Watchtower, but if your operations team already has an established mechanism for deploying updates to Docker images, you can use that mechanism without making any configuration changes to the private runner. The best practice is to run your upgrade automation at least once every 24 hours. Failing to update the private runner to the latest available image may result in inconsistent data and loss of functionality.

When Watchtower finds an updated image, it instructs your Docker host to pull the newest image from the repository, stop the container, and start it again. It also ensures that environment variables, network settings, and links between containers are intact.

On your Docker host, launch the Watchtower container from command line:

docker run -d \
--name watchtower \
-v /var/run/docker.sock:/var/run/docker.sock \
v2tec/watchtower --label-enable --cleanup

Using the label-enable flag ensures that only containers with the correct label, like the Splunk private runner, are auto-updated.

There are additional options available in the Watchtower documentation that you can explore, including auto-cleanup of old images to ensure that your Docker host does not hold onto outdated images.

Note

For Watchtower to issue commands to the Docker host, it requires the docker.sock volume or TCP connection, which provides Watchtower with full administrative access to your Docker host. If you are unable to provide Watchtower with this level of access, you can explore other options for automating updates.

Uninstall a private runner

  1. List all containers:

    docker ps -a
    
  2. Remove a stopped container by ID or name:

    docker rm <container_id_or_name>
    
  3. Force-remove a running container:

    docker rm -f my_running_container
    

Private runners on Docker Compose

The private runner works with all recent versions of Docker Compose.

Install a private runner

  1. Create a docker-compose.yml file with the following definition:

    version: '3'
    services:
      runner:
        image: quay.io/signalfx/splunk-synthetics-runner:latest
        environment:
          RUNNER_TOKEN: YOUR_TOKEN_HERE
        cap_add:
          - NET_ADMIN
        restart: always
    
  2. Run the following command to start the container:

    docker-compose up
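To confirm that the runner started, you can check the Compose service status and follow its logs. A minimal sketch, assuming the service name runner from the file above:

docker-compose ps
docker-compose logs -f runner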
    

Upgrade a private runner

Manual upgrades

To upgrade the Docker image manually, follow these steps:

  1. Navigate to the directory that contains your docker-compose.yml file:

    cd /path/to/your/docker-compose-file
    
  2. Pull the latest images:

    docker-compose pull
    
  3. Recreate containers with the updated images:

    docker-compose up -d
    
Automatic upgrades

You can automate the upgrade process by using your CI/CD pipelines or by using Watchtower.
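For example, if you want Watchtower to manage the runner defined in this docker-compose.yml, you can add it as a second service alongside the runner. This is a minimal sketch; it assumes the Compose file from the install steps and access to the Docker socket:

services:
  watchtower:
    image: v2tec/watchtower:latest
    command: --label-enable --cleanup
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always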

Uninstall a private runner

  1. Navigate to your project directory:

    cd /path/to/your/docker-compose-directory
    
  2. Run the docker-compose down command:

    docker-compose down
    

Private runners on Docker for Mac or Windows

Install a private runner

  1. Install Docker. For steps, see the documentation for installing Docker Community Edition for Mac or for Windows.

  2. Start your Docker container with the following flags:

    docker run -e "DISABLE_NETWORK_SHAPING=true" \
    -e "RUNNER_TOKEN=YOUR_TOKEN_HERE" \
    quay.io/signalfx/splunk-synthetics-runner:latest
    

Upgrade a private runner

Manual upgrades

To upgrade the Docker image manually, follow these steps:

  1. Pull the latest image:

    docker pull quay.io/signalfx/splunk-synthetics-runner:latest
    
  2. Stop the running container:

    docker stop <container_id_or_name>
    
  3. Remove the old container:

    docker rm <container_id_or_name>
    
  4. Start a new container with the updated image:

    docker run -e "DISABLE_NETWORK_SHAPING=true" -e "RUNNER_TOKEN=YOUR_TOKEN_HERE" \
    quay.io/signalfx/splunk-synthetics-runner:latest
    
Automatic upgrades

You can automate the upgrade of the private location Docker images by using an automated upgrade solution such as Watchtower (https://github.com/v2tec/watchtower), a third-party open-source Docker container that connects to remote Docker repositories on a schedule and checks for updates. This section explains how to use Watchtower, but if your operations team already has an established mechanism for deploying updates to Docker images, you can use that mechanism without making any configuration changes to the private runner. The best practice is to run your upgrade automation at least once every 24 hours. Failing to update the private runner to the latest available image may result in inconsistent data and loss of functionality.

When Watchtower finds an updated image, it instructs your Docker host to pull the newest image from the repository, stop the container, and start it again. It also ensures that environment variables, network settings, and links between containers are intact.

On your Docker host, launch the Watchtower container from the command line:

docker run -d \
--name watchtower \
-v /var/run/docker.sock:/var/run/docker.sock \
v2tec/watchtower --label-enable --cleanup

Using the label-enable flag ensures that only containers with the correct label, like the Splunk private runner, are auto-updated.

There are additional options available in the Watchtower documentation (https://github.com/v2tec/watchtower#options) that you can explore, including auto-cleanup of old images to ensure that your Docker host does not hold onto outdated images.

Note

In order for Watchtower to issue commands to the Docker host, it requires the docker.sock volume or TCP connection, which provides Watchtower with full administrative access to your Docker host. If you are unable to provide Watchtower with this level of access, you can explore other options for automating updates.

Uninstall a private runner

To uninstall the private runner, follow these steps:

  1. List all containers:

    docker ps -a
    
  2. Remove a stopped container by ID or name:

    docker rm <container_id_or_name>
    
  3. Force-remove a running container:

    docker rm -f my_running_container
    

Private runners on AWS ECS

Install a private runner

  1. In your AWS ECS console, go to Task definitions.

  2. Select Create new task definition with JSON from the yellow dropdown menu.

  3. Copy the following JSON and paste it into the console:

    {
      "requiresCompatibilities": [
        "EC2"
      ],
      "containerDefinitions": [
        {
          "name": "splunk-synthetics-runner",
          "image": "quay.io/signalfx/splunk-synthetics-runner:latest",
          "memory": 7680,
          "cpu": 2048,
          "essential": true,
          "environment": [
            {
              "name": "RUNNER_TOKEN",
              "value": "YOUR_TOKEN_HERE"
            }
          ],
          "linuxParameters": {
            "capabilities": {
              "add": ["NET_ADMIN"]
            }
          }
        }
      ],
      "volumes": [],
      "networkMode": "none",
      "memory": "7680",
      "cpu": "2048",
      "placementConstraints": [],
      "family": "splunk-synthetics"
    }
    
  4. Select Save to close the JSON input panel.

  5. Select Create to create the task.

  6. Create a service in your cluster that uses this task definition. For steps, see the AWS documentation.

Upgrade a private runner

Manual upgrades

You can upgrade the runner by using the forceNewDeployment option. This shuts down the existing container and brings up a new container by pulling the latest image from the repository.
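For example, with the AWS CLI you can force a new deployment of the service that runs the runner task. This is a minimal sketch; the cluster and service names are placeholders:

aws ecs update-service --cluster my-cluster --service splunk-synthetics-service --force-new-deployment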

Automatic upgrades

You can automate the upgrade of the private location Docker images by using an automated upgrade solution such as Watchtower, a third-party open-source Docker container that connects to remote Docker repositories on a schedule and checks for updates. This section explains how to use Watchtower, but if your operations team already has an established mechanism for deploying updates to Docker images, you can use that mechanism without making any configuration changes to the private runner. The best practice is to run your upgrade automation at least once every 24 hours. Failing to update the private runner to the latest available image may result in inconsistent data and loss of functionality.

When Watchtower finds an updated image, it instructs your Docker host to pull the newest image from the repository, stop the container, and start it again. It also ensures that environment variables, network settings, and links between containers are intact.

To use Watchtower with Amazon's Elastic Container Service, you need to create a task definition for it. For example, here is a sample task definition that you can run as a DAEMON within your cluster.

{
  "requiresCompatibilities": [
    "EC2"
  ],
  "containerDefinitions": [
    {
      "command": [
        "--label-enable",
        "--cleanup"
      ],
      "name": "watchtower",
      "image": "v2tec/watchtower:latest",
      "memory": 512,
      "essential": true,
      "environment": [],
      "linuxParameters": null,
      "cpu": 256,
      "mountPoints": [
        {
          "readOnly": null,
          "containerPath": "/var/run/docker.sock",
          "sourceVolume": "dockerHost"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "dockerHost",
      "host": {
        "sourcePath": "/var/run/docker.sock"
      },
      "dockerVolumeConfiguration": null
    }
  ],
  "networkMode": null,
  "memory": "512",
  "cpu": "256",
  "placementConstraints": [],
  "family": "watchtower"
}

Uninstall a private runner

  1. Open the console at https://console.aws.amazon.com/ecs/v2.

  2. From the navigation bar, select the region that contains your task definition.

  3. In the navigation pane, select Task definitions.

  4. On the Task definitions page, select the task definition family that contains one or more revisions that you want to deregister.

  5. On the Task definition name page, select the revisions to delete, and then select Actions and Deregister.

  6. Verify the information in the Deregister window, and then select Deregister to finish.

Private runners deployed with Helm

Install a private runner

For Helm deployments, you can either use the latest image with an appropriate pullPolicy or use a tagged image.

To install the chart with the release name my-splunk-synthetics-runner, run the following commands (for more information, see https://github.com/splunk/synthetics-helm-charts/tree/main/charts/splunk-synthetics-runner#installing-the-chart):

helm repo add synthetics-helm-charts https://splunk.github.io/synthetics-helm-charts/
helm repo update
helm install my-splunk-synthetics-runner synthetics-helm-charts/splunk-synthetics-runner \
--set=synthetics.secret.runnerToken=YOUR_TOKEN_HERE \
--set synthetics.secret.create=true

Upgrade a private runner

Run the helm upgrade command:

helm upgrade my-splunk-synthetics-runner synthetics-helm-charts/splunk-synthetics-runner --reuse-values

If you're upgrading to an image that has a major version change, you must also upgrade your Helm chart.
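For example, to refresh your local chart repository and move to a newer chart release, you can run commands like the following. This is a minimal sketch; the chart version is a placeholder:

helm repo update
helm upgrade my-splunk-synthetics-runner synthetics-helm-charts/splunk-synthetics-runner \
--version X.Y.Z --reuse-values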

Uninstall a private runner

Run the helm uninstall command:

helm uninstall my-splunk-synthetics-runner

Private runners on Kubernetes

Install a private runner

  1. Create a Kubernetes Secret with the runner token:

    kubectl create secret generic runner-token-secret \
    --from-literal=RUNNER_TOKEN=YOUR_TOKEN_HERE
    
  2. Create the deployment YAML:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: splunk-o11y-synthetics-runner
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: splunk-o11y-synthetics-runner
      template:
        metadata:
          labels:
            app: splunk-o11y-synthetics-runner
        spec:
          containers:
            - name: splunk-o11y-synthetics-runner
              image: quay.io/signalfx/splunk-synthetics-runner:latest
              imagePullPolicy: Always
              env:
                - name: RUNNER_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: runner-token-secret
                      key: RUNNER_TOKEN
              securityContext:
                capabilities:
                  add:
                    - NET_ADMIN
                allowPrivilegeEscalation: true
              resources:
                limits:
                  cpu: "2"
                  memory: 8Gi
                requests:
                  cpu: "2"
                  memory: 8Gi
    
  3. Apply the Deployment YAML:

    kubectl apply -f deployment.yaml
    

Upgrade a private runner

Run the kubectl apply command:

kubectl apply -f deployment.yaml

Since you're using the latest tag with imagePullPolicy: Always, you don't need to make changes to the deployment.yaml file. Reapplying the deployment pulls the latest image.
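If the manifest hasn't changed, reapplying it might not recreate the pods. In that case you can trigger a rolling restart, which pulls the image again because imagePullPolicy is set to Always. A minimal sketch, using the Deployment name from the example manifest:

kubectl rollout restart deployment/splunk-o11y-synthetics-runner
kubectl rollout status deployment/splunk-o11y-synthetics-runner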

Uninstall a private runner

Run the kubectl delete command:

kubectl delete -f deployment.yaml

Private runners on OpenShift

Install a private runner

  1. Create an OpenShift Secret with the runner token:

    oc create secret generic runner-token-secret --from-literal=RUNNER_TOKEN=YOUR_TOKEN_HERE
    
  2. Create the deployment YAML:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: splunk-o11y-synthetics-runner
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: splunk-o11y-synthetics-runner
      template:
        metadata:
          labels:
            app: splunk-o11y-synthetics-runner
        spec:
          containers:
            - name: splunk-o11y-synthetics-runner
              image: quay.io/signalfx/splunk-synthetics-runner:latest
              imagePullPolicy: Always
              env:
                - name: RUNNER_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: runner-token-secret
                      key: RUNNER_TOKEN
              securityContext:
                capabilities:
                  add:
                    - NET_ADMIN
                allowPrivilegeEscalation: true
              resources:
                limits:
                  cpu: "2"
                  memory: 8Gi
                requests:
                  cpu: "2"
                  memory: 8Gi
    
  3. Apply the deployment YAML:

    oc apply -f deployment.yaml
    

Upgrade a private runner

Run the oc apply command:

oc apply -f deployment.yaml

Since you're using the latest tag with imagePullPolicy: Always, you don't need to make changes to the deployment.yaml file. Reapplying the deployment pulls the latest image.
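As with Kubernetes, if the manifest hasn't changed you can force a rolling restart instead, which pulls the image again. A minimal sketch, assuming a recent oc client and the Deployment name from the example manifest:

oc rollout restart deployment/splunk-o11y-synthetics-runner
oc rollout status deployment/splunk-o11y-synthetics-runner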

Uninstall a private runner

Run the oc delete command:

oc delete -f deployment.yaml

Private runners on Podman

Install a private runner

Start your container using the podman run command and the following flags:

podman run --cap-add NET_ADMIN -e "RUNNER_TOKEN=YOUR_TOKEN_HERE" \
quay.io/signalfx/splunk-synthetics-runner:latest

Upgrade a private runner

  1. Pull the latest image:

    podman pull quay.io/signalfx/splunk-synthetics-runner:latest
    
  2. Stop the running container:

    podman stop <container_id_or_name>
    
  3. Remove the old container:

    podman rm <container_id_or_name>
    
  4. Start a new container with the updated image:

    podman run --cap-add NET_ADMIN -e "RUNNER_TOKEN=YOUR_TOKEN_HERE" \
    quay.io/signalfx/splunk-synthetics-runner:latest
    

Uninstall a private runner

  1. List all containers:

    podman ps -a
    
  2. Remove a stopped container by ID or name:

    podman rm <container_id_or_name>
    
  3. Force-remove a running container:

    podman rm -f my_running_container
    

Private runners on Podman for macOS or Windows

Install a private runner

Start your container using the podman run command and the following flags:

podman run -e "DISABLE_NETWORK_SHAPING=true" -e "RUNNER_TOKEN=YOUR_TOKEN_HERE" \
quay.io/signalfx/splunk-synthetics-runner:latest

Upgrade a private runner

  1. Pull the latest image:

    podman pull quay.io/signalfx/splunk-synthetics-runner:latest
    
  2. Stop the running container:

    podman stop <container_id_or_name>
    
  3. Remove the old container:

    podman rm <container_id_or_name>
    
  4. Start a new container with the updated image:

    podman run -e "DISABLE_NETWORK_SHAPING=true" -e "RUNNER_TOKEN=YOUR_TOKEN_HERE" quay.io/signalfx/splunk-synthetics-runner:latest
    

Uninstall a private runner

  1. List all containers:

    podman ps -a
    
  2. Remove a stopped container by ID or name:

    podman rm <container_id_or_name>
    
  3. Force-remove a running container:

    podman rm -f my_running_container
    

Private runners on ARM64 machines on AWS and GCP

There are no special instructions to install or upgrade a Docker image running on an ARM64-based machine. You can deploy the image manually with Docker or Docker Compose, deploy it to Kubernetes (hosted on AWS EKS, GCP GKE, or self-hosted), or deploy it on AWS ECS.

The Docker manifest contains information about the available platforms and links to the correct images. For example, when you run docker run quay.io/signalfx/splunk-synthetics-runner:latest, Docker downloads the correct image based on the architecture of your machine.
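To see which architectures the published manifest advertises, you can inspect it with a recent Docker CLI. A minimal sketch:

docker manifest inspect quay.io/signalfx/splunk-synthetics-runner:latest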

Troubleshoot a private runner

Docker health check

The private location Docker image uses the Docker health check to communicate when its container has entered an unhealthy state. The container state is healthy if the private runner is able to authenticate with the API and has successfully fetched a synthetics test in the last 30 minutes. If the container state is unhealthy, try the following troubleshooting steps in this order:

  1. Check the logs of the container to see if there is an authentication error. If there is, confirm that you specified the correct value for the RUNNER_TOKEN environment variable at pod startup.

  2. Restart the container.
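To see the health state that Docker reports for the container, you can inspect it directly. A minimal sketch; synthetics-runner is a placeholder container name:

docker inspect --format '{{.State.Health.Status}}' synthetics-runner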

Automatically restart unhealthy Docker containers

If you plan to run a private location for an extended period of time, it can be helpful to let the container restart automatically if it becomes unhealthy.

To restart the container automatically, add --restart unless-stopped and -e ALWAYS_HEALTHY=true to the pod startup command (docker run, podman run, and so on). The ALWAYS_HEALTHY=true environment variable terminates the Docker container as soon as it fails a health check, so that the restart policy can start it again. This option works with any Docker restart policy.

docker run --restart unless-stopped -e ALWAYS_HEALTHY=true --cap-add NET_ADMIN \
-e "RUNNER_TOKEN=YOUR_TOKEN_HERE" \
quay.io/signalfx/splunk-synthetics-runner:latest

Working with Docker

Follow these steps to limit logging in Docker:

  1. Create or edit the Docker daemon configuration file at /etc/docker/daemon.json.

  2. In the file, add:

    {
      "log-driver": "local",
      "log-opts": {
        "max-size": "10m",
        "max-file": "3"
      }
    }
    
  3. Restart the Docker service: sudo systemctl restart docker.service.
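To confirm that the daemon picked up the new logging configuration, you can check the active log driver. A minimal sketch using the standard Docker CLI:

docker info --format '{{.LoggingDriver}}'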

Add certificates

Splunk Synthetic Monitoring supports injecting custom root CA certificates for Uptime tests running from your private locations. Client keys and certificates aren't supported at this time.

  1. Create a folder called certs on your host machine and place the CA Certificate (in CRT format) in the folder.

  2. Add the certs folder as a volume to the container (-v ./certs:/usr/local/share/ca-certificates/my_certs/).

  3. Modify the command you use when launching the container to update the CA certificate cache before starting the agent binary (bash -c "sudo update-ca-certificates && bundle exec bin/start_runner").

For example, here is what a command might look like after you modify it to fit your environment:

docker run -e "RUNNER_TOKEN=<insert-token>" \
--volume=`pwd`/certs:/usr/local/share/ca-certificates/my_certs/ \
quay.io/signalfx/splunk-synthetics-runner:latest \
bash -c "sudo update-ca-certificates && bundle exec bin/start_runner"

Note

Custom root CA certificates aren't supported for Browser tests. Browser tests require SSL/TLS validation for accurate testing. Optionally, you can deactivate SSL/TLS validation for Browser tests when necessary.

Configure proxy settings for a private runner

In environments where direct internet access is restricted, you can route synthetic test traffic through a proxy server by configuring the following environment variables:

  • http_proxy: Specifies the proxy server for HTTP traffic.

    • Example: export http_proxy="http://proxy.example.com:8443"

  • https_proxy: Specifies the proxy server for HTTPS traffic.

    • Example: export https_proxy="http://proxy.example.com:8443"

  • no_proxy: Specifies a comma-separated list of domains or IP addresses that should bypass the proxy.

    • Example: export no_proxy="localhost,127.0.0.1,.internal-domain.com"

For example, here is what a command might look like after you modify it to fit your environment:

docker run --cap-add NET_ADMIN -e "RUNNER_TOKEN=*****" \
-e "no_proxy=.signalfx.com,.amazonaws.com,127.0.0.1,localhost" \
-e "https_proxy=http://172.17.0.1:1234" \
-e "http_proxy=http://172.17.0.1:1234" \
quay.io/signalfx/splunk-synthetics-runner:latest

In this example:

  • http_proxy and https_proxy are set to route traffic through a proxy at http://172.17.0.1:1234.

  • no_proxy is configured to bypass the proxy for local addresses and specific domains like .signalfx.com and .amazonaws.com.

Ensure that these variables are correctly configured to comply with your network policies. This setup allows the synthetic tests to communicate securely and efficiently in a controlled network environment.

When using a private runner, it's important to correctly configure the proxy settings to avoid issues with browser-based tests. Follow these steps when setting up the environment of the private runners:

  1. Ensure proper no_proxy setup:

    When configuring no_proxy, always include the following addresses:

    • 127.0.0.1 (for localhost communication)

    • localhost (for resolving local tests)

    These addresses ensure that internal services and tests run correctly without routing through a proxy, preventing potential failures.

  2. Understand Dockerfile defaults:

    By default, the private runner sets the no_proxy variable in the Dockerfile to include 127.0.0.1. If you override no_proxy, you must ensure that 127.0.0.1 and localhost are still present, or browser tests may fail.

Note

Lowercase variable names take precedence and are a best practice.

This page was last updated on Feb 04, 2025.