All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator that has been announced as end-of-life. We replaced Gravity with an alternative component in DSP 1.4.0. Therefore, we will no longer support versions of DSP prior to 1.4.0 after July 1, 2023. We advise all customers to upgrade to DSP 1.4.0 to continue receiving full product support from Splunk.
Resizing a cluster by adding or removing nodes
You can scale your Kubernetes cluster up or down depending on your organization's needs. There are two main reasons to add or remove a node:
- You want to increase your cluster's capacity for performance and throughput, or decrease its capacity if your resources are under-utilized.
- One of your controller nodes goes down and cannot be brought back up. If you are unable to recover a controller node, you must either contact Splunk Support for recovery assistance or reinstall the Data Stream Processor on the irrecoverable node.
The following instructions assume that you have system administrator (root) permissions. If you do not have root permissions, you can use the sudo command.
Removing a node from a cluster
A node can leave a cluster at any time. Any time a node leaves the cluster, pods running on that node are drained and scheduled elsewhere, if possible.
To remove a node from a cluster, do the following steps.
- From the working directory of the node that you want to leave the cluster, run:
dsp leave
- (Optional) If the node that you want to remove is in an invalid state, you can force the node to leave your cluster by running the following command from the working directory of a different controller node:
dsp remove --force <ip-of-failed-node>
- This initiates the leaving process. You can check the progress by running the following command from a different node's working directory:
dsp status cluster
You must have at least three nodes in a cluster. If you do not meet this minimum, the cluster is degraded until you have at least three nodes again.
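After removing a node, you can confirm that the cluster still meets the three-node minimum. The sketch below counts nodes reported Ready; the helper function and sample output are illustrative only and assume the default `kubectl get nodes` column layout (NAME, STATUS, ROLES, AGE, VERSION).

```shell
# Count nodes whose STATUS column reads "Ready" and warn when the
# cluster falls below the three-node minimum.
count_ready_nodes() {
    awk '$2 == "Ready" { n++ } END { print n+0 }'
}

# Hypothetical sample output; in a live cluster you would pipe in
# `kubectl get nodes --no-headers` instead.
sample_output="node-1   Ready      control-plane   42d   v1.24.0
node-2   Ready      control-plane   42d   v1.24.0
node-3   NotReady   control-plane   42d   v1.24.0"

ready=$(printf '%s\n' "$sample_output" | count_ready_nodes)
if [ "$ready" -lt 3 ]; then
    echo "WARNING: only $ready Ready node(s); cluster is degraded"
fi
```

With the sample input above, only two nodes are Ready, so the warning fires; against live `kubectl` output the same check tells you whether it is safe to remove another node.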
Adding a node to a cluster
Physical nodes can join a cluster at any time. Any time a new node joins, new capacity becomes available to the cluster for scheduling new pods.
To add a node to the cluster, do the following steps:
- From the working directory of an existing cluster node, get the join token.
dsp print-join
- From the same node, run the following command, specifying the number of additional nodes that will join.
dsp join-additional -n <number of nodes to join>
- On the node that you want to add to the cluster, extract the DSP tarball and navigate to the working directory.
tar xf <dsp-version>-linux-amd64.tar
cd <dsp-version>-linux-amd64
- Use the join token from step 1 to join this node to the cluster.
dsp join <existing-node-ip> --port 2222 --public <cert> --private <cert>
- (Optional) After the join completes, the new node is ready to accept newly scheduled pods. If you are adding a new node to replace a failed or irrecoverable node, run the following command to check the health of your pods:
kubectl -n dsp get pods
If any pods are still in a PENDING state even after you join a new node, contact Splunk Support for assistance.
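To spot stuck pods quickly, you can filter the `kubectl -n dsp get pods` output down to those in a Pending state. This is a minimal sketch; the pod names and sample output are hypothetical, and the filter assumes the default column layout (NAME, READY, STATUS, RESTARTS, AGE).

```shell
# Hypothetical sample output; in a live cluster you would run:
#   kubectl -n dsp get pods --no-headers | awk '$3 == "Pending" { print $1 }'
sample_pods="pipeline-abc   1/1   Running   0   3h
flink-xyz      0/1   Pending   0   3h"

# Print the names of pods whose STATUS column reads "Pending".
pending=$(printf '%s\n' "$sample_pods" | awk '$3 == "Pending" { print $1 }')
echo "Pending pods: $pending"
```

An empty result after joining the replacement node means scheduling has caught up; any names that remain Pending are the ones to report to Splunk Support.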
This documentation applies to the following versions of Splunk® Data Stream Processor: 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6