Splunk® Data Stream Processor

Install and administer the Data Stream Processor

On April 3, 2023, Splunk Data Stream Processor reached its end of sale, and will reach its end of life on February 28, 2025. If you are an existing DSP customer, please reach out to your account team for more information.

All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator for which end of life has been announced. Gravity has been replaced with an alternative component in DSP 1.4.0. Therefore, we will no longer provide support for versions of DSP prior to DSP 1.4.0 after July 1, 2023. We advise all of our customers to upgrade to DSP 1.4.0 in order to continue to receive full product support from Splunk.

Install the Splunk Data Stream Processor

To install the Splunk Data Stream Processor (DSP), download, extract, and run the installer on each node. You must contact your Splunk representative to access the Splunk Data Stream Processor download page. The Splunk Data Stream Processor is installed from a k0s package, which builds a Kubernetes cluster onto which DSP is installed and deployed.

At a glance, the DSP installer does the following things:

  • Checks that the system is running on a supported OS with the necessary services and kernel modules, passes pre-installation checks, and is not running any conflicting software.
  • Installs Kubernetes and other software tools like SCloud. For more information about SCloud, see Get started with SCloud.
  • Prepares Kubernetes to run the Splunk Data Stream Processor.
  • Installs the Splunk Data Stream Processor.
  • Checks that the Splunk Data Stream Processor is ready for use.

See What's in the installer directory? for information about the files and scripts that the installer tarball contains.

Extract and run the Splunk Data Stream Processor installer

Prerequisites

  • Your system meets the minimum hardware and software requirements for DSP. See Hardware and Software requirements.
  • The required ports are open. See Port configuration requirements.
  • You have the download link for the Splunk Data Stream Processor. Contact Splunk Support.
  • SELinux is disabled. To disable SELinux, run setenforce 0 on your nodes.
  • Your system clocks are synchronized. Consult the system documentation for the operating system on which you are running the Splunk Data Stream Processor. For most environments, Network Time Protocol (NTP) is the best approach.
  • You have system administrator (root) permissions. You need root permissions so Kubernetes can use system components such as iptables and kernel modules. If you do not have root permissions, you can use the sudo command.
  • iptables is installed. If you are using an OS with the yum package manager (such as RHEL or CentOS), run the following command:

    sudo yum install iptables

    If you are using Ubuntu, run the following command instead:

    sudo apt-get install iptables

  • All nodes in your DSP cluster have a unique machine ID. The DSP installer checks this condition and fails if it is not met. If the check fails, run the following commands on each node that has a duplicate machine ID.

    rm /etc/machine-id

    systemd-machine-id-setup
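
As a quick sanity check before running the installer, you can verify several of these prerequisites from a shell on each node. The following is a minimal sketch that assumes a systemd-based Linux distribution; it only reads state and does not change anything:

    # SELinux should report Permissive or Disabled (the command may not exist on Ubuntu)
    getenforce

    # The system clock should be synchronized (look for "System clock synchronized: yes")
    timedatectl status

    # iptables should be installed and on the PATH
    command -v iptables

    # The machine ID should be different on every node in the cluster
    cat /etc/machine-id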

Steps

  1. Download the Splunk Data Stream Processor installer tarball on each node.
  2. On each node in your cluster, extract the Splunk Data Stream Processor installer from the tarball. For the DSP installation to complete, you must have the required number of nodes ready to join the cluster. The number of nodes depends on the installation flavor you select in step 4.
    tar xf <dsp-version>-linux-amd64.tar

    The DSP installer times out after 10 minutes of inactivity, so if you don't have these nodes prepared, you need to start the installation process over again.

    Do not extract the installer into the /tmp folder or run the installer from /tmp.

  3. On one of the nodes that you want to be a controller node, navigate to the extracted file.
    cd <dsp-version>
  4. Determine which flavor of DSP you want to install. The number in the install flavor name corresponds to the minimum number of controller nodes that the flavor supports. Installation flavors are fixed at installation time, so select the flavor that accommodates the largest implementation that you think you'll need.
    Flavor   Available number of controller nodes   Minimum cluster size   Recommended cluster size   Notes
    ha3      3                                      3                      5-14                       Recommended for small-sized deployments.
    ha5      5                                      10                     15-50                      Recommended for medium- or large-sized deployments.
  5. From the extracted file directory, run the DSP install command. You must run this command with the --flavor=<flavor> flag, but the install command supports several optional flags as well. For a list of all available optional flags, see the Install flags section.
    ./install [--optional-flags] --flavor=<flavor>

After the initial node has connected to the installer, the installer outputs a join command that you need to run on the other nodes in your cluster. Continue to the next section for instructions.

Join nodes to form the cluster and finish the installation

After running the Splunk Data Stream Processor installer, you must join the nodes together to form the cluster. Follow the instructions for your chosen installation flavor.

Install DSP using the ha3 flavor

  1. Run the following install command on your desired controller node.
    ./install --accept-license --flavor ha3 --cluster <cluster-name>
  2. The installer prints out a command that you must use to join the other nodes to the first controller node. The command will look like this:
    ./join <ip address of controller> --port 2222 --public <token> --private <token> --enable-controller
  3. From the working directory of each of the other nodes that you want to join the cluster, enter the join command from the previous step.
  4. If you used the --location flag in the installation step to change the location where k0s stores container and state information, join each node with one of the following commands instead. If you do not have enough disk space in /var to support the default 24 hour period of data retention, use this flag to override the default path used for storage.
      • If you used --location=<path> in the installation step, enter this:
        ./join <ip-address-of-controller> --location=<path>
      • If you used --location=/<mount-path>/data/local in the installation step, enter this:
        ./join <ip-address-of-controller> --location=/<mount-path>/data/local
  5. When you have the required three controller nodes to form a cluster, the installation continues. The installation process might take up to 45 minutes. Keep this terminal window open. The following shows the output that is displayed when the installation continues.
    [I 10.216.30.223] Required number of nodes have registered, proceeding with install
    ...
    [I 10.216.30.223] Installing binaries
    [I 10.216.30.223] level=info msg="Successfully installed k0s binary to /usr/local/bin/k0s"
    [I 10.216.30.223] level=info msg="Upgrading k0s binary at /usr/bin\n"
    [I 10.216.30.223] level=info msg="Successfully installed k0s binary to /usr/bin"
    [I 10.216.30.223] level=info msg="Upgrading k0sctl binary at /usr/bin/k0sctl\n"
    [I 10.216.30.223] level=info msg="Successfully installed k0sctl binary to /usr/bin/k0sctl"
    [I 10.216.30.223] level=info msg="Upgrading dsp binary at /usr/bin/dsp\n"
    [I 10.216.30.223] level=info msg="Successfully installed dsp binary to /usr/bin/dsp"
    [I 10.216.30.223] level=info msg="Using dspdir: /home/ec2-user/dsp-1.4.0-linux-amd64"
    [I 10.216.30.223] level=info msg="Copying app bundles" destination=/opt/dsp/bundles/dsp-1.4.0 source=/home/ec2-user/dsp-1.4.0-linux-amd64/bundles/.
    [I 10.216.30.223] Copying airgap packages...
    [I 10.216.30.223] level=info msg="Finished copying app bundles"
    [I 10.216.30.223] level=info msg="Using dspdir: /home/ec2-user/dsp-1.4.0-linux-amd64"
    [I 10.216.30.223] level=info msg="Copied all airgap packages" destination=/var/lib/k0s/images/
    [I 10.216.30.223] level=info msg="Applying k0sctl.yaml"
    [I 10.216.30.223] time="2022-11-04T03:38:41Z" level=trace msg="starting k0sctl upgrade check"
    
  6. (Optional) To add additional nodes as workers to your cluster after the installation is complete, complete the following steps. An illustrative sequence follows this procedure.
    1. To print a join command, run the following command from any of the existing nodes in the cluster and copy the output.
      ./print-join
    2. Run the following command from the same node:
      ./join-additional -n <number of nodes to join>
    3. From the working directory of the extracted installer on each new node, run the copied join command from step 6a to join the node to your cluster.

      While you can add additional controller nodes with this command beyond the requirements of the flavor you selected, adding controller nodes does not improve the high availability guarantees of the cluster, because services are still fixed to the original number of controller nodes.
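
The following is an illustrative sequence for adding two extra worker nodes. The node count is an example value, and the join command itself is the one printed by ./print-join on your cluster:

    # On any existing cluster node, from the extracted installer directory:
    ./print-join            # copy the join command that this prints
    ./join-additional -n 2  # tell the cluster to expect 2 additional nodes

    # On each new node, from the extracted installer directory,
    # paste and run the join command copied from ./print-join.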

After these steps, installation continues. Once the installation has finished, the installer outputs the login credentials for the DSP UI as well as information about which services are now available, as shown here:

Finished installing DSP
...

To log into DSP:

Hostname: https://<localhost>
Username: dsp-admin
Password: 65227b8789d57426

NOTE: this is the original password created during cluster bootstrapping,
and will not be updated if dsp-admin's password is changed

The following endpoints are available on the cluster:
ENDPOINTS IP:PORT
DSP UI <localhost>
S2S Forwarder <localhost>:9997

* Please make sure your firewall ports are open for these services *

To see these login instructions again: please run sudo dsp admin print-login                     

Install DSP using the ha5 flavor

  1. Run the following install command on your desired controller node.
    ./install --accept-license --flavor ha5 --cluster <cluster-name>
  2. The installer prints out a join command that you must use to join the other nodes to the first controller node. The command will look like this:
    ./join <ip address of controller> --port 2222 --public <token> --private <token> --enable-controller
  3. From the working directory of each of the other nodes that you want to join the cluster, enter the join command from the previous step.
  4. If you used the --location flag in the installation step to change the location where k0s stores container and state information, join each node with one of the following commands instead. If you do not have enough disk space in /var to support the default 24 hour period of data retention, use this flag to override the default path used for storage.
      • If you used --location=<path> in the installation step, enter this:
        ./join <ip-address-of-controller> --location=<path>
      • If you used --location=/<mount-path>/data/local in the installation step, enter this:
        ./join <ip-address-of-controller> --location=/<mount-path>/data/local
  5. When you have the required five controller nodes to form a cluster, the installation continues. The installation process might take up to 45 minutes. Keep this terminal window open. The following shows the output that is displayed when the installation continues.
    [I 10.216.30.223] Required number of nodes have registered, proceeding with install
    ...
    [I 10.216.30.223] Installing binaries
    [I 10.216.30.223] level=info msg="Successfully installed k0s binary to /usr/local/bin/k0s"
    [I 10.216.30.223] level=info msg="Upgrading k0s binary at /usr/bin\n"
    [I 10.216.30.223] level=info msg="Successfully installed k0s binary to /usr/bin"
    [I 10.216.30.223] level=info msg="Upgrading k0sctl binary at /usr/bin/k0sctl\n"
    [I 10.216.30.223] level=info msg="Successfully installed k0sctl binary to /usr/bin/k0sctl"
    [I 10.216.30.223] level=info msg="Upgrading dsp binary at /usr/bin/dsp\n"
    [I 10.216.30.223] level=info msg="Successfully installed dsp binary to /usr/bin/dsp"
    [I 10.216.30.223] level=info msg="Using dspdir: /home/ec2-user/dsp-1.4.0-linux-amd64"
    [I 10.216.30.223] level=info msg="Copying app bundles" destination=/opt/dsp/bundles/dsp-1.4.0 source=/home/ec2-user/dsp-1.4.0-linux-amd64/bundles/.
    [I 10.216.30.223] Copying airgap packages...
    [I 10.216.30.223] level=info msg="Finished copying app bundles"
    [I 10.216.30.223] level=info msg="Using dspdir: /home/ec2-user/dsp-1.4.0-linux-amd64"
    [I 10.216.30.223] level=info msg="Copied all airgap packages" destination=/var/lib/k0s/images/
    [I 10.216.30.223] level=info msg="Applying k0sctl.yaml"
    [I 10.216.30.223] time="2022-11-04T03:38:41Z" level=trace msg="starting k0sctl upgrade check"
    
  6. (Optional) To add additional nodes as workers to your cluster after the installation is complete, complete the following steps.
    1. To print a join command, run the following command from any of the existing nodes in the cluster and copy the output.
      ./print-join
    2. Run the following command from the same node:
      ./join-additional -n <number of nodes to join>
    3. From the working directory of the extracted installer on each new node, run the copied join command from step 6a to join the node to your cluster.

      While you can add additional controller nodes with this command beyond the requirements of the flavor you selected, adding controller nodes does not improve the high availability guarantees of the cluster, because services are still fixed to the original number of controller nodes.

After these steps, installation continues. Once the installation has finished, the installer outputs the login credentials for the DSP UI as well as information about which services are now available, as shown here:

Finished installing DSP
...

To log into DSP:

Hostname: https://<localhost>
Username: dsp-admin
Password: 65227b8789d57426

NOTE: this is the original password created during cluster bootstrapping,
and will not be updated if dsp-admin's password is changed

The following endpoints are available on the cluster:
ENDPOINTS IP:PORT
DSP UI <localhost>
S2S Forwarder <localhost>:9997

* Please make sure your firewall ports are open for these services *

To see these login instructions again: please run sudo dsp admin print-login                     

Post-Installation

Turn on the Debug Console

The debug console is turned off by default when you install DSP. You can turn the debug console on after installing DSP for troubleshooting purposes. If you want to turn the debug console on, please reach out to the Splunk Support team.

Configure the Splunk Data Stream Processor UI redirect URL

By default, the Splunk Data Stream Processor uses the IPv4 address of eth0 to derive several properties required by the UI to function properly. In the case that the eth0 network is not directly accessible (for example, it exists inside a private AWS VPC) or is otherwise incorrect, use the dsp configure-ui command to manually define the IP address or host name that can be used to access DSP.
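
For example, you can check which IPv4 address DSP would derive from eth0 before deciding whether to override it. This is an illustrative check only; the interface name may differ in your environment:

    # Show the IPv4 address currently assigned to eth0 on the controller node
    ip -4 addr show eth0

If that address is not reachable by your users, set a reachable address or host name with the following steps.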

  1. From the controller node, enter the following:
    dsp configure-ui --dsp-host <ip-address-of-controller-node>
  2. Next, deploy your changes.
    dsp deploy dsp-ui
  3. Navigate to the Splunk Data Stream Processor UI to verify your changes.
    https://<DSP_HOST>
  4. To use your own SSL/TLS certificate to connect to these services, see Secure the DSP cluster with SSL/TLS certificates.

  5. If you are using the Firefox browser, you must trust the API certificate when you navigate to the DSP UI. If you are using the Google Chrome or Microsoft Edge browser and encounter a "net::ERR_CERT_INVALID" error with no Proceed Anyway option when you click Advanced, click anywhere on the background, then type "thisisunsafe" to trust the certificate.
  6. On the login page, enter the following:
    User: dsp-admin
    Password: <the dsp-admin password generated from the installer>
    

    If you need to retrieve the dsp-admin password, enter the following on your controller node: dsp admin print-login

Change default password

Once the DSP installer finishes the installation process, the installer prints the default login credentials. For optimal security and to avoid any potential login issues with your DSP deployment, follow the steps below to change your default password when you log in to the DSP UI for the first time.

  1. Navigate to the Splunk Data Stream Processor UI.
    https://<DSP_HOST>
  2. On the login page, enter the login credentials that the installer printed.
    User: dsp-admin
    Password: <the dsp-admin password generated from the installer>
  3. If you need to retrieve the dsp-admin password, enter the following on your controller node: dsp admin print-login

  4. After logging in, you will be prompted to enter a new password for your DSP deployment. Enter and confirm your new password. You will use this new password for all future logins to your DSP deployment.
  5. Select Update on the modal to confirm your changes and enter the DSP UI.

Check the status of your Splunk Data Stream Processor deployment

To check the status of different components of your DSP deployment, select from the following commands.

  • dsp status: View a summary of the statuses of your DSP applications and resources. Prints a summary of the Kubernetes resource, application, and node checks. An error message appears if any checks return as unhealthy.
  • dsp status applications: View detailed status information about all DSP applications. Prints a detailed summary of the application check.
  • dsp status nodes: View detailed status information about all nodes in your DSP cluster. Prints a detailed summary of the cluster check. If any of these checks fail, the status of a node shows as "unhealthy".
  • dsp status resources: View detailed status information about DSP resources. Prints a detailed summary of all the Kubernetes resources running on the cluster and their condition.
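
If you want to wait until the deployment reports healthy before continuing with other tasks, you can script the summary check. The following is a minimal sketch that assumes dsp status exits with a nonzero status code while any check is unhealthy; confirm that behavior in your environment before relying on it:

    # Re-run the summary status check every 30 seconds until all checks pass
    until sudo dsp status; do
        echo "DSP reported unhealthy checks; retrying in 30 seconds..."
        sleep 30
    done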

Reference

Install flags

The following table lists the main flags you can use with the install command and a description of how to use them:

Flag Description
--accept-license Automatically accepts the license agreement that prints upon completion of the installation.
--location <path> Changes the location where k0s stores containers and state information. The --location flag mounts persistent volumes in /data/local and stores state information in /var/lib/k0s.

If you use the --location flag to change the location where k0s stores containers and state information, you must use the flag both when installing and joining the nodes.

If you will not have enough disk space in /opt/dsp to support the default 24 hour period of data retention, or if you want to change the default location for containers and state for other reasons, use this flag to override the default path used for storage. This flag puts both the state and persistent volumes in subdirectories under the directory <path>.

Minimum recommended disk space for /var or /opt is 50GB.

--cluster <cluster_name> Gives the DSP cluster a name. If you do not enter a cluster name, DSP automatically generates one for you. This is used by the Splunk App for DSP.
--flavor <flavor type> Specifies the flavor type during install. ha3 and ha5 are the available flavors. See Step 4 in Steps to determine which flavor is best for your production environment.
--force Forces installation even if the number of nodes present does not meet the minimum required for the specified flavor.
-p, --port int Specifies the port on which the SSH server accepts connections.
--skip-copying Skips copying Kubernetes bundles and AirGap files during the k0s install.
--skip-dsp Skips DSP installation and only sets up k0s on the cluster.
--skip-k0s Skips installing k0s and allows you to resume the install from the DSP installation step.
--skip-import Skips importing app bundles into the registry and allows resuming the installation of those app bundles.
-t, --timeout int Specifies how many seconds can pass after the last node join before the install times out.

The timer refreshes every time a new node joins (default: 600 seconds).
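
As an illustration of how these flags combine, the following command would install an ha3 flavor cluster with a custom cluster name, a custom storage location, and a 20-minute join timeout. The cluster name, path, and timeout shown here are example values only:

    sudo ./install --accept-license --flavor=ha3 --cluster my-dsp-cluster --location=/mnt/dsp/data/local --timeout 1200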

Last modified on 26 March, 2024

This documentation applies to the following versions of Splunk® Data Stream Processor: 1.4.4, 1.4.5, 1.4.6

