How to restart your Splunk SOAR (On-premises) cluster
You might need to restart an entire cluster or an individual cluster node.
Restart using a rolling restart
In most cases, Splunk SOAR (On-premises) clusters should be restarted a node at a time, in what is called a "rolling restart." A rolling restart makes it possible to restart the cluster's nodes without overall cluster downtime, and minimizes impact on ingestion of events and disruption of automation.
Follow these steps to perform a rolling restart of your Splunk SOAR (On-premises) cluster (a command-line sketch of the restart commands follows the steps):
- Prevent the cluster from routing ingestion and automation actions to the cluster node you want to restart.
- Log in to the Splunk SOAR (On-premises) web-based user interface as a user with the administrator role.
- From the Home menu, select Administration then Product Settings, then Clustering.
- Locate the cluster node you want to restart in the list of nodes. Set the Enabled toggle switch for that node from On to Off.
- Using SSH, connect to the cluster node you want to restart.
- From the command line, stop SOAR services on the cluster node.
<$PHANTOM_HOME>/bin/stop_phantom.sh
- From the command line, start SOAR services on the cluster node.
<$PHANTOM_HOME>/bin/start_phantom.sh
- In the web-based user interface, refresh the Clustering page: from the Home menu, select Administration then Product Settings, then Clustering. When the cluster node you just restarted shows as Online, proceed to the next step.
- For the cluster node you just restarted, set the Enabled toggle switch for that node from Off to On. Your cluster can now route ingestion and automation actions to this node.
- Repeat these steps for each node in your cluster.
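The stop and start commands from these steps can also be run over SSH from an administrative host. The following is a minimal shell sketch for a single node, assuming a hypothetical hostname (soar-node-1) and that PHANTOM_HOME points to your SOAR installation directory; disabling the node before the restart and reenabling it afterward must still be done on the Clustering page as described above.
#!/bin/bash
# Minimal sketch: restart SOAR services on one cluster node over SSH.
# The hostname and installation path are placeholders for your environment.
NODE="soar-node-1"
PHANTOM_HOME="/opt/phantom"

# Stop SOAR services on the node.
ssh "$NODE" "$PHANTOM_HOME/bin/stop_phantom.sh"

# Start SOAR services on the node.
ssh "$NODE" "$PHANTOM_HOME/bin/start_phantom.sh"

# When the node shows as Online on the Clustering page, set its Enabled
# toggle back to On before moving to the next node.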
Restart a cluster all at once
You may need to restart a cluster all at once. Cluster nodes should be restarted in the reverse order that they were shut down. If you shut down cluster nodes in the order 1, 2, 3, then you should restart those nodes in the order 3, 2, 1.
If you need to restart a cluster all at once, follow these steps (a command-line sketch of the restart order follows the steps):
- Shut down all cluster nodes in order.
- Log in to the Splunk SOAR (On-premises) web-based user interface as a user with the administrator role.
- From the Home menu, select Administration then Product Settings, then Clustering.
- Note the order in which you disable the nodes. For each node, set the Enabled toggle switch from On to Off.
- Log out of the Splunk SOAR (On-premises) web-based user interface.
- Using SSH, connect to the first cluster node, then from the command line, stop SOAR services on the cluster node.
<$PHANTOM_HOME>/bin/stop_phantom.sh
- Repeat for each cluster node.
- Using SSH, connect to the last cluster node you stopped, then start SOAR services on that node. Repeat this step working your way backward through the list of cluster nodes.
<$PHANTOM_HOME>/bin/start_phantom.sh
- (Conditional) If you do not know the order in which the nodes were shut down, reset RabbitMQ to force a fresh start by running this command on the first node you restart.
<$PHANTOM_HOME>/bin/phenv rabbitmqctl force_boot
- Reenable all the cluster nodes in the web-based user interface.
- Log in to the Splunk SOAR (On-premises) web-based user interface as a user with the administrator role.
- From the Home menu, select Administration then Product Settings, then Clustering.
- Reenable each node in the reverse order in which they were shut down. For each node, set the Enabled toggle switch from Off to On.
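The shutdown and startup order described in these steps can be expressed as a short shell sketch run from an administrative host. The hostnames in NODES are hypothetical placeholders listed in the order the nodes are shut down, and PHANTOM_HOME must match your installation directory; the nodes must already be disabled on the Clustering page, and reenabled afterward, as described above.
#!/bin/bash
# Minimal sketch: stop all cluster nodes in order, then start them in
# reverse order (last stopped, first started).
NODES=(soar-node-1 soar-node-2 soar-node-3)
PHANTOM_HOME="/opt/phantom"

# Stop SOAR services on each node, in order.
for node in "${NODES[@]}"; do
    ssh "$node" "$PHANTOM_HOME/bin/stop_phantom.sh"
done

# Start SOAR services in reverse order.
for (( i=${#NODES[@]}-1; i>=0; i-- )); do
    ssh "${NODES[$i]}" "$PHANTOM_HOME/bin/start_phantom.sh"
done

# If the shutdown order is unknown, run this on the first node you start,
# as in the conditional step above:
#   <$PHANTOM_HOME>/bin/phenv rabbitmqctl force_boot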