Splunk® SOAR (On-premises)

Administer Splunk SOAR (On-premises)

The classic playbook editor will be deprecated in early 2025. Convert your classic playbooks to modern mode.
After the classic playbook editor is removed, your existing classic playbooks will continue to run. However, you will no longer be able to visualize or modify them.

Add or remove a cluster node from Splunk SOAR (On-premises)

A Splunk SOAR (On-premises) cluster can have nodes added or removed after the cluster has been created.

Splunk SOAR (On-premises) cannot automatically scale, and cannot automatically add or remove cluster nodes through external systems such as Kubernetes, AWS, or Azure.

Adding cluster nodes

Adding a node to a Splunk SOAR (On-premises) cluster involves building an instance of Splunk SOAR (On-premises) and using the make_cluster_node command on that instance to add it to the cluster.
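
For example, once the new instance is installed, you might run the following command on that instance. This is a minimal sketch: the command collects details about your existing cluster, and the exact prompts and prerequisites vary by deployment, so follow Install and Upgrade Splunk SOAR (On-premises) for the authoritative steps.

    phenv make_cluster_node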

For more information, see these topics in Install and Upgrade Splunk SOAR (On-premises).

Removing Splunk SOAR (On-premises) cluster nodes

You may want to remove a node from a Splunk SOAR (On-premises) cluster for several reasons: to reduce your cluster size, to decommission or replace hardware, or as part of disaster recovery.

Splunk SOAR (On-premises) releases 6.1.0 and higher have a management command for removing a node from your cluster.

Prerequisites for removing a cluster node

You must meet the following requirements before removing a cluster node from your Splunk SOAR (On-premises) cluster.

  • The node to be removed has already been removed from your load balancer configuration.
  • The node to be removed is still listed in the cluster_node table of the Splunk SOAR PostgreSQL database, as shown in the example check after this list.
  • The node to be removed has either:
    • had all Splunk SOAR services permanently stopped, or
    • been destroyed
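
To confirm that a node still appears in the cluster_node table, you can query the database from any remaining cluster node. The following check is a sketch only: it assumes that the bundled psql client is available through phenv and that the default connection settings reach the Splunk SOAR database, so verify both against your deployment.

    # Illustrative only; phenv psql availability and connection defaults are assumptions
    phenv psql -c "SELECT * FROM cluster_node;"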

Splunk SOAR (On-premises) automatically attempts to rebalance both the Consul and RabbitMQ clusters when a node is removed. If you are removing multiple nodes from your cluster, remove the nodes listed as Consul clients before removing the nodes listed as Consul servers.

To learn which cluster members are clients or servers, use a phenv command:

  1. Using SSH, connect to any node in your Splunk SOAR (On-premises) cluster.
  2. Run the following command:
    phenv consul members
    The output lists the Consul cluster's members and shows which are clients and which are servers, as in the illustrative example after these steps.
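
The following sample output is illustrative only: node names, addresses, and version numbers will differ on your deployment. The Type column identifies each member as a server or a client.

    Node        Address          Status  Type    Build   Protocol  DC
    soar-node1  10.1.23.4:8301   alive   server  1.10.4  2         dc1
    soar-node2  10.1.23.5:8301   alive   server  1.10.4  2         dc1
    soar-node3  10.1.23.6:8301   alive   client  1.10.4  2         dc1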

Procedure for removing a Splunk SOAR (On-premises) node

To remove a cluster node, follow these steps.

  1. Obtain the IP address or the GUID of the cluster node you want to remove from your Splunk SOAR (On-premises) cluster.
  2. Prevent the cluster from routing ingestion and automation actions to the cluster node you want to remove. If the cluster node has already been destroyed, skip this step.
    1. Log in to the Splunk SOAR (On-premises) web-based user interface as a user with the administrator role.
    2. From the Home menu, select Administration, then Product Settings, then Clustering.
    3. Locate the cluster node you want to remove in the list of nodes. Set the Enabled toggle switch for that node from On to Off. If the cluster node already displays Offline or is already set to Off, skip this step.
  3. Using SSH, connect to the cluster node you want to remove. If the cluster node has already been destroyed, skip this step.
  4. From the command line, stop SOAR services on the cluster node. If the cluster node has already been destroyed, skip this step.
     <$PHANTOM_HOME>/bin/stop_phantom.sh 
  5. Remove the Splunk SOAR (On-premises) node you want to remove from your cluster from your load balancer's configuration. For steps on removing a server from your load balancer's configuration, see the documentation for your load balancer.
  6. SSH to a Splunk SOAR (On-premises) cluster node that will remain in your cluster.
  7. Run the command to remove the cluster node.
    phenv remove_cluster_node <ip_or_guid>
  8. Verify cluster membership is as expected.
    phenv cluster_management --status

    The management command phenv cluster_management --status continues to show Consul-related information for recently removed cluster nodes for up to 72 hours after their removal; Consul purges references to those nodes after 72 hours. This is normal and expected.
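
For example, to remove a node whose IP address is 10.1.23.6 (an illustrative address), run the following from a remaining cluster node and then confirm the cluster's membership:

    phenv remove_cluster_node 10.1.23.6
    phenv cluster_management --status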

Last modified on 11 July, 2023

This documentation applies to the following versions of Splunk® SOAR (On-premises): 6.1.0, 6.1.1, 6.2.0, 6.2.1, 6.2.2, 6.3.0, 6.3.1

