Replace the master node on the indexer cluster
You might need to replace the master node for either of these reasons:
- The node fails.
- You must move the master to a different machine or site.
Although there is currently no master failover capability, you can prepare the indexer cluster for master failure by configuring a stand-by master that you can immediately bring up if the primary master goes down. You can use the same method to replace the master intentionally.
This topic describes the key steps in replacing the master:
- Back up the files that the replacement master needs
- Ensure that the peer and search head nodes can find the new master
- Replace the master
Caution: The first two steps are preparatory. You must complete them before the master fails or otherwise leaves the system.
In the case of a multisite cluster, you must also prepare for the possible failure of the site that houses the master. See Handle master site failure.
Back up the files that the replacement master needs
In preparing a replacement master, you must copy over only the master's static state.
Note: You do not copy or otherwise deal with the dynamic state of the cluster. The cluster peers as a group hold all information about the dynamic state of a cluster, such as the status of all bucket copies. They communicate this information to the master node as necessary, for example, when a downed master returns to the cluster or when a stand-by master replaces a downed master. The master then uses that information to rebuild its map of the cluster's dynamic state.
There are two static configurations on the master that you must back up so that you can later copy them to the replacement master:
- The master's server.conf file, which is where the master cluster settings are stored. You must back up this file whenever you change the master's cluster configuration.
- The master's $SPLUNK_HOME/etc/master-apps directory, which is where common peer configurations are stored, as described in Update cluster peer configurations. You must back up this directory whenever you update the set of content that you push to the peer nodes.
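The two backups above can be scripted. The sketch below is illustrative, not an official procedure: it creates its own scratch directories so the commands are self-contained, and it assumes the master's cluster settings live in $SPLUNK_HOME/etc/system/local/server.conf (the usual location). On a real master you would point the paths at your actual installation, typically /opt/splunk.

```shell
#!/bin/sh
# Illustrative backup sketch. SCRATCH stands in for a real deployment;
# on an actual master, set SPLUNK_HOME to your installation directory.
SCRATCH=$(mktemp -d)
SPLUNK_HOME="$SCRATCH/splunk"
BACKUP_DIR="$SCRATCH/backup"

# Demo scaffolding so the sketch runs as-is; a real master already
# has these files.
mkdir -p "$SPLUNK_HOME/etc/system/local" \
         "$SPLUNK_HOME/etc/master-apps/_cluster/local" \
         "$BACKUP_DIR"
printf '[clustering]\nmode = master\n' > "$SPLUNK_HOME/etc/system/local/server.conf"

# Back up the two pieces of static state:
cp "$SPLUNK_HOME/etc/system/local/server.conf" "$BACKUP_DIR/server.conf"
tar -C "$SPLUNK_HOME/etc" -czf "$BACKUP_DIR/master-apps.tar.gz" master-apps

ls "$BACKUP_DIR"
```

Rerun the copy whenever you change the master's cluster configuration, and rerun the tar whenever you update the content you push to the peers.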
Ensure that the peer and search head nodes can find the new master
You can choose between two approaches for ensuring that the peer nodes and search heads can locate the replacement instance and recognize it as the master:
- The replacement uses the same IP address and management port as the primary master. To ensure that the replacement uses the same IP address, you must employ DNS-based failover, a load balancer, or some other technique. The management port is set during installation, but you can change it by editing
- The replacement does not use the same IP address or management port as the primary master. In this case, after you bring up the new master, you must update the master_uri setting on all the peers and search heads to point to the new master's IP address and management port.
Neither approach requires a restart of the peer or search head nodes.
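For the second approach, the master_uri setting lives in the [clustering] stanza of each peer's and search head's server.conf. A sketch of a peer-side stanza follows; the address 10.10.10.10:8089 is a placeholder for your new master's IP address and management port, and the mode value shown is for a peer (a search head uses mode = searchhead):

```
[clustering]
mode = slave
master_uri = https://10.10.10.10:8089
```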
Replace the master
You must have up-to-date backups of the two sets of static configuration files, as described in Back up the files that the replacement master needs.
Note: If you want to skip steps 3 and 5, you can simply replace the [clustering] stanzas on the replacement master in step 4, instead of copying the entire server.conf file.
1. Stop the old master, if this is a planned replacement. If the replacement is due to a failed master, then this step has already been accomplished for you.
2. Install, start, and stop a new Splunk Enterprise instance. Alternatively, you can reuse an existing instance that is not needed for another purpose. This will be the replacement master.
3. Copy the sslKeysfilePassword setting from the replacement master's server.conf file to a temporary location.
4. Copy the backup of the old master's server.conf and $SPLUNK_HOME/etc/master-apps files to the replacement master.
5. Delete the sslKeysfilePassword setting in the copied server.conf, and replace it with the version of the setting that you saved in step 3.
6. Start the replacement master.
7. Make sure that the peer and search head nodes are pointing to the new master through one of the methods described in Ensure that the peer and search head nodes can find the new master.
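The server.conf portion of steps 3 through 5 can be sketched as below. This is illustrative only: it runs against demo files with made-up hash values so the commands are self-contained, and it does not cover the master-apps copy from step 4. The replacement master's own sslKeysfilePassword must be kept because the encrypted value is tied to that instance's own splunk.secret and would not decrypt if taken from the old master.

```shell
#!/bin/sh
# Sketch of steps 3-5 on demo files; paths and hash values are illustrative.
WORK=$(mktemp -d)
NEW_CONF="$WORK/server.conf"          # replacement master's server.conf
OLD_BACKUP="$WORK/old-server.conf"    # backup of the old master's server.conf

# Demo contents; on a real system these files already exist.
printf '[sslConfig]\nsslKeysfilePassword = $1$replacement-hash\n' > "$NEW_CONF"
printf '[clustering]\nmode = master\n[sslConfig]\nsslKeysfilePassword = $1$old-hash\n' > "$OLD_BACKUP"

# Step 3: save the replacement master's sslKeysfilePassword line.
SAVED=$(grep '^sslKeysfilePassword' "$NEW_CONF")

# Steps 4 and 5: install the old master's server.conf, swapping the
# old password line for the saved one.
sed "s|^sslKeysfilePassword.*|$SAVED|" "$OLD_BACKUP" > "$NEW_CONF"

cat "$NEW_CONF"
```

The result keeps the old master's cluster settings but the replacement instance's own key-file password.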
For information on the consequences of a master failing, see What happens when the master node goes down.
This documentation applies to the following versions of Splunk® Enterprise: 6.1, 6.1.1, 6.1.2, 6.1.3, 6.1.4, 6.1.5, 6.1.6, 6.1.7, 6.1.8, 6.1.9, 6.1.10, 6.1.11, 6.1.12, 6.1.13, 6.1.14, 6.2.0, 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.2.5, 6.2.6, 6.2.7, 6.2.8, 6.2.9, 6.2.10, 6.2.11, 6.2.12, 6.2.13, 6.2.14, 6.2.15, 6.3.0, 6.3.1, 6.3.2, 6.3.3, 6.3.4, 6.3.5, 6.3.6, 6.3.7, 6.3.8, 6.3.9, 6.3.10, 6.3.11, 6.3.12, 6.3.13, 6.3.14, 6.4.0, 6.4.1, 6.4.2, 6.4.3, 6.4.4, 6.4.5, 6.4.6, 6.4.7, 6.4.8, 6.4.9, 6.4.10, 6.4.11