Replace the manager node on the indexer cluster
You might need to replace the manager node for either of these reasons:
- The node fails.
- You must move the manager to a different machine or site.
The best practice for anticipating failover is to use the cluster manager redundancy feature, described in Implement cluster manager redundancy.
You can also use the more manual method described in this topic, which involves configuring a standby manager that you can immediately bring up if the active manager goes down. You can use the same method to replace the manager intentionally.
This topic describes the key steps in replacing the manager:
- Back up the files that the replacement manager needs. This is a preparatory step. You must do this before the manager fails or otherwise leaves the system.
- Ensure that the peer and search head nodes can find the new manager.
- Replace the manager.
In the case of a multisite cluster, you must also prepare for the possible failure of the site that houses the manager. See Handle manager site failure.
Back up the files that the replacement manager needs
There are several files and directories that you must back up so that you can later copy them to the replacement manager:
- The manager's server.conf file, which is where the manager cluster settings are stored. You must back up this file whenever you change the manager's cluster configuration.
- The manager's $SPLUNK_HOME/etc/manager-apps directory, which is where common peer configurations are stored, as described in Update cluster peer configurations. You must back up this directory whenever you update the set of content that you push to the peer nodes.
- The manager's $SPLUNK_HOME/var/run/splunk/cluster/remote-bundle/ directory, which contains the actual configuration bundles pushed to the peer nodes. You must back up this directory whenever you push new content to the peer nodes.
If the $SPLUNK_HOME/var/run/splunk/cluster/remote-bundle/ directory contains a large number of old bundles, you can optionally back up only the files associated with the active and previously active bundles. Look for the two files ending with .bundle_active and .bundle_previousActive. Each of those files has an associated directory and a file that are each identified by the bundle id. You must back up all six files/directories in total.
For example, if the directory contains the file 42af6d880c6a1d43e935e8d8a0062089-1571637961.bundle_active, it will also contain the file 42af6d880c6a1d43e935e8d8a0062089-1571637961.bundle and the directory 42af6d880c6a1d43e935e8d8a0062089-1571637961. To back up the active bundle, you must back up the two files and the directory. Similarly, to back up the previously active bundle, you must back up the file that ends with .bundle_previousActive, as well as the directory and other file with the same id.
In addition to the above files and directories, back up any other configuration files that you have customized on the manager, such as inputs.conf, web.conf, and so on.
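A minimal backup sketch follows, assuming a *nix manager with $SPLUNK_HOME set to /opt/splunk and an illustrative destination directory of /backup/cluster-manager; both paths are placeholders, so adjust them for your environment:

    # Illustrative backup of the files a replacement manager needs.
    SPLUNK_HOME=/opt/splunk
    BACKUP_DIR=/backup/cluster-manager
    mkdir -p "$BACKUP_DIR"
    # Manager cluster settings
    cp "$SPLUNK_HOME/etc/system/local/server.conf" "$BACKUP_DIR/"
    # Common peer configurations
    cp -rp "$SPLUNK_HOME/etc/manager-apps" "$BACKUP_DIR/"
    # Configuration bundles pushed to the peer nodes
    cp -rp "$SPLUNK_HOME/var/run/splunk/cluster/remote-bundle" "$BACKUP_DIR/"

Repeat the copy whenever you change the manager's cluster configuration or push new content to the peer nodes, so that the backup stays current.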
In preparing a replacement manager, you must copy over only these files and directories. You do not copy or otherwise deal with the dynamic state of the cluster. The cluster peers as a group hold all information about the dynamic state of a cluster, such as the status of all bucket copies. They communicate this information to the manager node as necessary, for example, when a downed manager returns to the cluster or when a standby manager replaces a downed manager. The manager then uses that information to rebuild its map of the cluster's dynamic state.
Ensure that the peer and search head nodes can find the new manager
You can choose between two approaches for ensuring that the peer nodes and search heads can locate the replacement instance and recognize it as the manager:
- The replacement uses the same IP address and management port as the primary manager. To ensure that the replacement uses the same IP address, you must employ DNS-based failover, a load balancer, or some other technique. The management port is set during installation, but you can change it by editing web.conf.
- The replacement does not use the same IP address or management port as the primary manager. In this case, after you bring up the new manager, you must update the manager_uri setting on all the peers and search heads to point to the new manager's IP address and management port. A sketch of this change appears below.
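If you take the second approach, a hedged sketch of the change on each peer node and search head looks like the following. The address 10.0.1.25:8089 is a placeholder for the new manager's IP address and management port:

    # On each peer node and search head, in $SPLUNK_HOME/etc/system/local/server.conf,
    # point the clustering stanza at the replacement manager:
    [clustering]
    manager_uri = https://10.0.1.25:8089

    # Then restart the instance so the change takes effect:
    splunk restart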
Replace the manager
Prerequisite
You must have up-to-date backups of the set of files and directories described in Back up the files that the replacement manager needs.
Steps
If you want to skip steps 3 and 5, you can replace the [general] and [clustering] stanzas on the replacement manager in step 4, instead of copying the entire server.conf file.
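As a sketch of what that alternative involves, the stanzas you would carry over from the old manager's server.conf typically look something like this; the values are placeholders, and your stanzas may contain additional settings:

    [general]
    serverName = <old manager's server name>
    pass4SymmKey = <plain-text key, if one is set in this stanza>

    [clustering]
    mode = manager
    pass4SymmKey = <plain-text cluster security key>
    # ...plus any other clustering settings from the old manager,
    # such as replication_factor and search_factor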
- Stop the old manager, if this is a planned replacement. If the replacement is due to a failed manager, then this step has already been accomplished for you.
- Install, start, and stop a new Splunk Enterprise instance. Alternatively, you can reuse an existing instance that is not needed for another purpose. This will be the replacement manager.
- Copy the sslPassword setting from the replacement manager's server.conf file to a temporary location. In release 6.5, the sslKeysfilePassword attribute was deprecated and replaced by the sslPassword attribute. If the server.conf file is using sslKeysfilePassword, then copy that setting instead.
- Copy the backup of the old manager's server.conf file to the replacement manager.
- Delete the sslPassword setting in the copied server.conf, and replace it with the version of the setting that you saved in step 3.
- Delete the encrypted value for pass4SymmKey in the copied server.conf, and replace it with the plain text value. See Configure the security key. A sketch of the resulting server.conf edits appears after these steps.
- Copy the backup of the old manager's $SPLUNK_HOME/etc/manager-apps directory to the replacement manager.
- Copy the backup of the old manager's $SPLUNK_HOME/var/run/splunk/cluster/remote-bundle/ directory to the new manager.
- Start the replacement manager.
- Make sure that the peer and search head nodes are pointing to the new manager through one of the methods described in Ensure that the peer and search head nodes can find the new manager.
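As a rough illustration of the server.conf edits made in steps 3 through 6, the relevant stanzas in the copied file on the replacement manager might end up looking like this; all values are placeholders, and the exact stanza contents depend on your configuration:

    # In the copied server.conf on the replacement manager:
    [sslConfig]
    # Replaced with the value saved from the replacement manager in step 3
    sslPassword = <sslPassword saved in step 3>

    [clustering]
    mode = manager
    # Encrypted value from the backup replaced with the plain-text security key
    pass4SymmKey = <plain-text security key>

Splunk Enterprise re-encrypts these clear-text values the next time the instance starts.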
For information on the consequences of a manager failing, see What happens when the manager node goes down.