Configure the master
You configure the master at the time you enable it, as described in "Enable the master node". This is usually all the configuration the master needs.
Perform post-deployment configuration
If you do need to perform further configuration, you have these choices:
- You can return to the Enable clustering page in Manager and make your changes there. You reach that page from the Master node dashboard, as described in "View the master dashboard".
- You can directly edit the [clustering] stanza in the master's underlying server.conf file. See "Configure cluster components with server.conf" for details. Some advanced settings can only be configured by directly editing this file.
- You can use the CLI edit cluster-config command. See "Configure the cluster with the CLI" for details.
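For illustration, a master's [clustering] stanza in server.conf might look like the following sketch. The values shown are examples only, not recommendations, and the file location assumes the usual $SPLUNK_HOME/etc/system/local/ directory:

```ini
# Example [clustering] stanza in $SPLUNK_HOME/etc/system/local/server.conf
# (values are illustrative; pass4SymmKey is the shared cluster secret)
[clustering]
mode = master
replication_factor = 3
search_factor = 2
pass4SymmKey = changeme
```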
Warning: Although it is possible to change the settings for the replication factor and search factor, it is inadvisable to increase either of them once your cluster contains significant amounts of data. Doing so will kick off a great deal of bucket activity, which will have an adverse effect on the cluster's performance while bucket copies are being created and/or made searchable.
You should not change the heartbeat_timeout setting from its default value of 60 (seconds) unless instructed to do so by Splunk Support. In particular, do not decrease it, as that could lead to problems if your peers become overloaded.
After you change the master configuration, you need to restart the master for the changes to take effect.
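As a sketch, a configuration change through the CLI followed by the required restart might look like this, run on the master. The replication factor value is only an example; heed the warning above before raising it on a cluster that already holds data:

```shell
# Run from $SPLUNK_HOME/bin on the master node (value is an example)
./splunk edit cluster-config -replication_factor 3
# Restart the master so the change takes effect
./splunk restart
```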
Important: The master has the sole function of managing the other cluster components. You cannot use it to index external data or to search the cluster.
Configure a stand-by master
Although there is currently no master failover capability, you can prepare for master failures by configuring a stand-by master that you can immediately bring up if the primary master goes down for any reason.
Back up the files that the stand-by master needs
In preparing a stand-by master, you only need to copy over the master's static state. The cluster peers as a group hold all information about the dynamic state of a cluster, such as the status of all cluster copies. They will communicate this information to the master node as necessary; for example, when a downed master returns to the cluster or - barring that - when a stand-by master replaces a downed master. The master will then use that information to rebuild its map of the cluster's state.
There are two separate static configurations on the master that you need to back up so that you can later copy them to the stand-by master:
- The master's server.conf file, which is where the master cluster settings are stored. You need to back up this file whenever you change the master's cluster configuration.
- The master's $SPLUNK_HOME/etc/master-apps directory, which is where common peer configurations are stored, as described in "Update common peer configurations". You need to back up this directory whenever you update the set of content you're pushing to the peer nodes.
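The backup of these two items can be sketched as a small shell helper. The function name and paths are assumptions for illustration; server.conf is assumed to live in the usual $SPLUNK_HOME/etc/system/local/ location:

```shell
#!/bin/sh
# Hypothetical helper that backs up the master's static state.
# Paths follow the defaults described above; adjust for your deployment.
backup_master_state() {
    splunk_home="$1"   # e.g. /opt/splunk
    backup_dir="$2"    # where the copies should go
    mkdir -p "$backup_dir"
    # server.conf holds the master's cluster settings
    cp "$splunk_home/etc/system/local/server.conf" "$backup_dir/server.conf"
    # master-apps holds the common peer configurations
    cp -R "$splunk_home/etc/master-apps" "$backup_dir/master-apps"
}
```

Run this (or an equivalent scheduled job) whenever either the cluster configuration or the master-apps content changes.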
Ensure that the peer and search head nodes can find the new master
You can choose between two approaches for ensuring that the peer nodes and search head can locate the stand-by instance and recognize it as the master:
- The stand-by uses the same IP address and management port as the primary master. The management port is set during installation but can be changed later by editing web.conf. To ensure that the stand-by uses the same IP address, you need to employ DNS-based failover, a load balancer, or some other technique.
- The stand-by does not use the same IP address or management port as the primary master. In this case, after you bring up the new master, you must update the master_uri setting on all the peers and the search head to point to the new master's IP address and management port.
Neither approach requires a restart of the peer or search head nodes.
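With the second approach, the repointing on each peer and on the search head can be sketched with the CLI. The host and port below are placeholders for the new master's address:

```shell
# Run from $SPLUNK_HOME/bin on each peer and on the search head
# (host and port are placeholders for the new master's address)
./splunk edit cluster-config -master_uri https://newmaster.example.com:8089
```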
Replace the master
Assuming that you have backed up the two sets of static configuration files, it's a simple process to replace a downed master with the stand-by:
1. Install, start, and stop a new Splunk Enterprise instance. This will be the stand-by master.
2. Copy the sslKeysfilePassword setting from the stand-by master's server.conf file to a temporary location.
3. Copy the backup of the old master's server.conf file and $SPLUNK_HOME/etc/master-apps directory to the stand-by master.
4. Delete the sslKeysfilePassword setting in the copied server.conf and replace it with the version of the setting saved in step 2.
5. Start the stand-by master.
6. Make sure that the peer and search head nodes are pointing to the new master through one of the methods described above in "Ensure that the peer and search head nodes can find the new master".
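Assuming the backup lives in /var/backups/splunk-master and Splunk Enterprise is installed in /opt/splunk (both placeholders), steps 1 through 5 might be sketched as:

```shell
# 1. Install the stand-by instance, then start and stop it once
/opt/splunk/bin/splunk start --accept-license
/opt/splunk/bin/splunk stop
# 2. Save the stand-by's own sslKeysfilePassword setting
grep sslKeysfilePassword /opt/splunk/etc/system/local/server.conf > /tmp/sslpass.txt
# 3. Copy the backed-up server.conf and master-apps from the old master
cp /var/backups/splunk-master/server.conf /opt/splunk/etc/system/local/server.conf
cp -R /var/backups/splunk-master/master-apps/. /opt/splunk/etc/master-apps/
# 4. In the copied server.conf, replace sslKeysfilePassword with the value
#    saved in /tmp/sslpass.txt (edit the file by hand)
# 5. Start the stand-by master
/opt/splunk/bin/splunk start
```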
Note: If you want to skip steps 2 and 4, you can simply replace the [clustering] stanza on the stand-by master instead of copying the entire server.conf file.
For information on the consequences of a master failing, see "What happens when a master node goes down".
This documentation applies to the following versions of Splunk® Enterprise: 5.0.2, 5.0.3, 5.0.4, 5.0.5, 5.0.6, 5.0.7, 5.0.8, 5.0.9, 5.0.10, 5.0.11, 5.0.12, 5.0.13, 5.0.14, 5.0.15, 5.0.16, 5.0.17, 5.0.18