
Remove a cluster member
To remove a member from a cluster, run the splunk remove shcluster-member command on any cluster member.
Important: You must use the procedure documented here to remove a member from the cluster. Do not just stop the member.
To disable a member so that you can then re-use the instance, you must also run the splunk disable shcluster-config command.
To rejoin the member to the cluster later, see "Add a member that was previously removed from the cluster." The exact procedure depends on whether you merely removed the member from the cluster or both removed and disabled the member.
Remove the member
Caution: Do not stop the member before removing it from the cluster.
1. Remove the member.
To run the splunk remove command on the member that you are removing, use this version:
splunk remove shcluster-member
To run the splunk remove command from another member, use this version:
splunk remove shcluster-member -mgmt_uri <URI>:<management_port>
Note the following:
mgmt_uri is the management URI of the member being removed from the cluster.
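For example, to remove a member by running the command from one of the other members, you might run something like the following, where the hostname and port are hypothetical placeholders for the removed member's actual management URI:
splunk remove shcluster-member -mgmt_uri https://sh2.example.com:8089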
2. Stop the member.
After removing the member, wait about two minutes for configurations to be updated across the cluster, and then stop the instance:
splunk stop
By stopping the instance, you prevent error messages about the removed member from appearing on the captain.
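If you want to confirm that the member is gone, you can check the cluster status from any remaining member:
splunk show shcluster-status
As one of the comments below notes, the removed member might continue to appear in this output until you restart Splunk on the captain.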
Remove and disable the member
If you intend to keep the instance alive for use in some other capacity, you must disable it after you remove it:
Caution: Do not stop the member first.
1. Remove the member:
splunk remove shcluster-member
2. Disable the member:
splunk disable shcluster-config
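For example, using the full path to the splunk binary (the same $SPLUNK_HOME convention as the comments below), the complete sequence on the member being removed looks like this:
$SPLUNK_HOME/bin/splunk remove shcluster-member
$SPLUNK_HOME/bin/splunk disable shcluster-config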
Comments
This doc should include the "recovery" of the kvstore after removing a SH from the SHC. Namely, this had to be done:
$SPLUNK_HOME/bin/splunk stop
$SPLUNK_HOME/bin/splunk clean kvstore --cluster
$SPLUNK_HOME/bin/splunk start
After removing a member, if you don't want it to show up in the splunk show shcluster-status command, you will also need to restart Splunk on the captain.
It was noticed that once a member is removed, searches become orphaned and no longer viewable in the job manager on remaining cluster members (https://answers.splunk.com/answers/425988/what-is-the-best-practice-for-bringing-down-a-sear.html). I think many users would be interested in that caveat/impact - would it be possible to point that out on this page? I think most would assume that those artifacts are otherwise replicated and therefore such admins may not consider this detail.