Peer node configuration overview
Configuration of the peer nodes falls into two categories:
- Configuration of the basic indexer cluster settings, such as the manager URI and the replication port.
- Configuration of input, indexing, and related settings. This includes the deployment of apps to the peer nodes.
Initial configuration
Most peer cluster configuration happens during initial deployment:
1. When you enable the peer, you specify its cluster settings, such as its manager node and the port on which it receives replicated data. See Enable the peer nodes.
2. After you enable the set of peers, you configure their indexes, if necessary. See Configure the peer indexes in an indexer cluster.
3. Finally, you configure their inputs, usually by means of forwarders. See Use forwarders to get data into the indexer cluster.
These are the key steps in configuring a peer. You might also need to update the configurations later, as with any indexer.
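As a sketch of step 1, enabling a peer from the CLI typically looks like the following. The host, ports, and secret are placeholders, and the exact flag names (for example, -manager_uri versus the older -master_uri) depend on your Splunk Enterprise version, so confirm them against the CLI documentation for your release:

```shell
# On the peer, point it at the manager node and open a replication port.
# All values shown are placeholders.
splunk edit cluster-config -mode peer \
    -manager_uri https://10.0.0.1:8089 \
    -replication_port 9887 \
    -secret your_cluster_key

# The change takes effect after a restart.
splunk restart
```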
Change the cluster configuration
There are two main reasons to change the cluster node configuration:
- Redirect the peer to another manager. This can be useful if the manager node fails and you have a standby manager ready to take over. For information on standby managers, see Replace the manager node on the indexer cluster.
- Change the peer's security key for the cluster. Only change the key if you are also changing it for all other nodes in the cluster. The key must be the same across all instances in a cluster.
To edit the cluster configuration, change each peer node individually, using one of these methods:
- Edit the configuration from the peer node dashboard in Splunk Web. See Configure peer nodes with the dashboard.
- Edit the peer's server.conf file. See Configure peer nodes with server.conf for details.
- Use the CLI. See Configure peer nodes with the CLI for details.
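For orientation, the cluster settings these methods edit live in server.conf on the peer. A minimal sketch of the relevant stanzas follows; the URI, port, and key are placeholders, and you should confirm attribute names against the server.conf reference for your version:

```ini
# server.conf on a peer node (placeholder values)
[clustering]
mode = peer
manager_uri = https://10.0.0.1:8089
# The cluster security key; must be identical on every node in the cluster.
pass4SymmKey = your_cluster_key

# The port on which this peer receives replicated data.
[replication_port://9887]
```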
For additions and differences when configuring multisite peer nodes, see Configure multisite indexer clusters with server.conf.
The set of index stanzas in indexes.conf must be identical across all peers, aside from very limited exceptions described in Manage configurations on a peer-by-peer basis. It is also important that index-time processing be the same across the peers. For the cluster to properly replicate data and handle node failover, peers must share the same indexing functionality, and they cannot do this if certain key files vary from peer to peer.
As a best practice, treat your peers as interchangeable: maintain identical versions of configuration files and apps across all peers. At a minimum, the following files must be identical:
indexes.conf
props.conf
transforms.conf
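For example, a custom index defined identically on every peer might look like the following indexes.conf stanza. The index name and paths are placeholders; repFactor = auto is the attribute that marks an index's data for replication across the cluster:

```ini
# indexes.conf -- identical on every peer (placeholder index name)
[my_custom_index]
homePath   = $SPLUNK_DB/my_custom_index/db
coldPath   = $SPLUNK_DB/my_custom_index/colddb
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb
# Replicate this index's buckets to other peers.
repFactor  = auto
```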
To ensure that the peers share a common set of configuration files and apps, place the files and apps on the manager node and then use the configuration bundle method to distribute them, in a single operation, to the set of peers.
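As a hedged sketch of that workflow: on the manager node, you stage the shared files under the configuration bundle location and then push them to all peers in one operation. The bundle directory is manager-apps in recent versions (master-apps in older ones), and the file placement shown is illustrative:

```shell
# On the manager node: stage shared configuration in the bundle location
# (manager-apps in recent versions; master-apps in older ones).
cp indexes.conf $SPLUNK_HOME/etc/manager-apps/_cluster/local/

# Check the bundle for errors, then distribute it to all peers.
splunk validate cluster-bundle
splunk apply cluster-bundle
```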
These topics describe how to maintain identical configurations across the set of peers:
- Manage common configurations across all peers
- Manage app deployment across all peers
- Configure the peer indexes in an indexer cluster
- Update common peer configurations and apps
Manage single-peer configurations
You might occasionally need to handle some configurations on a peer-by-peer basis, for testing or other purposes. As a general rule, however, try to use the same configurations across all peers, so that the peers are interchangeable.
For information on single-peer configuration, see Manage configurations on a peer-by-peer basis.
This documentation applies to the following versions of Splunk® Enterprise: 8.1.0, 8.1.1, 8.1.2, 8.1.3, 8.1.4, 8.1.5, 8.1.6, 8.1.7, 8.1.8, 8.1.9, 8.1.10, 8.1.11, 8.1.12, 8.1.13, 8.1.14, 8.2.0, 8.2.1, 8.2.2, 8.2.3, 8.2.4, 8.2.5, 8.2.6, 8.2.7, 8.2.8, 8.2.9, 8.2.10, 8.2.11, 8.2.12, 9.0.0, 9.0.1, 9.0.2, 9.0.3, 9.0.4, 9.0.5, 9.0.6, 9.0.7, 9.0.8, 9.0.9, 9.0.10, 9.1.0, 9.1.1, 9.1.2, 9.1.3, 9.1.4, 9.1.5, 9.1.6, 9.1.7, 9.2.0, 9.2.1, 9.2.2, 9.2.3, 9.2.4, 9.3.0, 9.3.1, 9.3.2