Splunk® Data Stream Processor

Install and administer the Data Stream Processor

DSP 1.2.0 is affected by the CVE-2021-44228 and CVE-2021-45046 security vulnerabilities in Apache Log4j. To fix these vulnerabilities, you must upgrade to DSP 1.2.4. See Upgrade the Splunk Data Stream Processor to 1.2.4 for upgrade instructions.

On October 30, 2022, all 1.2.x versions of the Splunk Data Stream Processor will reach their end of support date. See the Splunk Software Support Policy for details.

Resizing a cluster by adding or removing nodes

You can scale your Kubernetes cluster up or down depending on your organization's needs. There are two main reasons to add or remove a node. The first is capacity: you might add nodes to increase performance and throughput, or remove nodes if your resources are underutilized. The second is failure recovery: one of your master nodes goes down and cannot be brought back up. If you cannot recover a master node, you must either contact Splunk Support for recovery assistance or reinstall the Data Stream Processor on the irrecoverable node.

The following instructions assume that you have system administrator (root) permissions. If you do not have root permissions, you can use the sudo command.
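For example, if the instructions tell you to run ./leave and you do not have root permissions, run:

    sudo ./leave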

Removing a node from a cluster

A node can leave a cluster at any time. When a node leaves the cluster, pods running on that node are drained and rescheduled elsewhere, if possible.
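If you want to watch pods move off a draining node, you can use kubectl from another node. This is an optional check, not part of the removal procedure; it uses the dsp namespace shown later in this topic, and the -o wide flag shows which node each pod is running on:

    # Watch pods and the nodes they are scheduled on as the drain proceeds
    kubectl -n dsp get pods -o wide --watch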

To remove a node from a cluster, do the following steps.

  1. From the working directory of the node that you want to remove, run the leave command:
    ./leave
  2. (Optional) If the node that you want to remove is in an invalid state, force it to leave the cluster by running the following command from the working directory of a different master node:
    gravity remove --force <ip-of-failed-node>
  3. Monitor the leave process by running gravity status from the working directory of a different node. See the example session after these steps.

You must have at least three nodes in a cluster. If you drop below this minimum, the cluster is degraded until you have at least three nodes again.
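The following example session illustrates the removal workflow. The IP address is a hypothetical placeholder; substitute the address of your failed node:

    # On the node that is leaving the cluster:
    ./leave

    # Or, if that node is in an invalid state, from the working
    # directory of a different master node (the IP is a placeholder):
    gravity remove --force 10.0.1.5

    # Check the progress from the working directory of a different node:
    gravity status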

Adding a node to a cluster

Physical nodes can join a cluster at any time. When a new node joins, its capacity becomes available to the cluster for scheduling new pods.
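To confirm that a newly joined node is registered and to see the capacity it contributes, you can list the cluster nodes with kubectl. This is an optional verification step, not part of the join procedure; the node name in the second command is a placeholder:

    # List all nodes in the cluster and their readiness
    kubectl get nodes

    # Show the allocatable CPU and memory on the new node
    kubectl describe node <new-node-name>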

To add a node to the cluster, do the following steps:

  1. From the working directory of an existing cluster node, get the join token.
    gravity status --token 
  2. On the node that you want to add to the cluster, extract the DSP tarball and navigate to the working directory.
    tar xf <dsp-version>-linux-amd64.tar
    cd <dsp-version>-linux-amd64
    
  3. Use the join token from step 1 to join this node to the cluster.
    ./join <existing-node-ip> --token=<token> --role=worker
  4. (Optional) After the join completes, the new node is ready to accept newly scheduled pods. If you added the node to replace a failed or irrecoverable node, run the following command to check the health of your pods:
     kubectl -n dsp get pods
    If any pods remain in the Pending state even after you join a new node, contact Splunk Support for assistance. See the diagnostic example after these steps.
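If you want to gather more information about a Pending pod before you contact Support, you can inspect its scheduling events with kubectl. The pod name below is a placeholder; use a name from the kubectl -n dsp get pods output:

    # Show events that explain why the pod has not been scheduled
    kubectl -n dsp describe pod <pod-name>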


