Configure the master
You configure the master at the time you enable it, as described in "Enable the master node". This is usually all the configuration the master needs.
Perform post-deployment configuration
If you do need to perform further configuration, you have these choices:
- You can return to the Enable clustering page in Manager and make your changes there. You reach that page from the Master node dashboard, as described in "View the master dashboard".
- You can directly edit the [clustering] stanza in the master's underlying server.conf file. See "Configure cluster components with server.conf" for details. Some advanced settings can be configured only by directly editing this file.
- You can use the CLI edit cluster-config command. See "Configure the cluster with the CLI" for details. A brief example of both methods follows this list.
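For instance, here is a minimal sketch of what the master's [clustering] stanza in server.conf might contain. The factor and key values are placeholder examples, not recommendations:

    [clustering]
    mode = master
    replication_factor = 3
    search_factor = 2
    pass4SymmKey = yoursecretkey

The equivalent change made with the CLI, again with placeholder values:

    splunk edit cluster-config -mode master -replication_factor 3 -search_factor 2 -secret yoursecretkey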
Warning: Although it is possible to change the settings for the replication factor and search factor, it is inadvisable to increase either of them once your cluster contains significant amounts of data. Doing so will kick off a great deal of bucket activity, which will have an adverse effect on the cluster's performance while bucket copies are being created and/or made searchable.
You should not change the heartbeat_timeout setting from its default value of 60 (seconds) unless instructed to do so by Splunk Support. In particular, do not decrease it, as that could lead to problems if your peers become overloaded.
After you change the master configuration, you need to restart the master for the changes to take effect.
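For example, from the master's $SPLUNK_HOME/bin directory:

    splunk restart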
Important: The master has the sole function of managing the other cluster components. You cannot use it to index external data or to search the cluster.
Configure a stand-by master
Although there is currently no master failover capability, you can prepare for master failures by configuring a stand-by master that you can immediately bring up if the primary master goes down for any reason.
In preparing a stand-by master, you only need to back up the master's static state. The cluster peers as a group hold all information about the dynamic state of the cluster, such as the status of all bucket copies. They communicate this information to the master node as necessary; for example, when a downed master returns to the cluster or, barring that, when a stand-by master replaces a downed master. The master then uses that information to rebuild its map of the cluster's state.
There are two separate static configurations that you do need to back up:
- The master's server.conf file, which is where the master's cluster settings are stored. You need to copy this file to the stand-by master whenever you change the master's cluster configuration.
- The master's $SPLUNK_HOME/etc/master-apps directory, which is where common peer configurations are stored, as described in "Update common peer configurations". You need to copy this directory to the stand-by master whenever you update the set of content you're pushing to the peer nodes. A sketch of copying both sets of files follows this list.
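As a rough sketch of that backup step, assuming the master's cluster settings live in $SPLUNK_HOME/etc/system/local/server.conf and the stand-by is reachable at a hypothetical host named standby-master:

    # Copy the master's cluster settings to the stand-by (hypothetical host name).
    scp $SPLUNK_HOME/etc/system/local/server.conf standby-master:$SPLUNK_HOME/etc/system/local/
    # Copy the common peer configurations that get pushed to the peer nodes.
    scp -r $SPLUNK_HOME/etc/master-apps standby-master:$SPLUNK_HOME/etc/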
So that the peer nodes and search head can locate the stand-by instance and recognize it as the master, the stand-by must use the same IP address and management port as the primary master. The management port is set during installation but can be changed later by editing web.conf. To ensure that the stand-by uses the same IP address, you need to employ DNS-based failover, a load balancer, or some other technique.
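For reference, the management port lives in the mgmtHostPort setting of web.conf. A stand-by matching a primary that listens on the default port 8089 might contain (the IP address shown is an example):

    [settings]
    mgmtHostPort = 10.0.0.5:8089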
Assuming you have backed up the two sets of static configuration files and properly configured the IP address and management port, replacing a downed primary master is a simple process: just bring up the stand-by master with the CLI splunk start command.