Configure the peer nodes
Important: Before reading this topic, you should understand how configuration files work in Splunk Enterprise. Read "About configuration files" and the topics that follow it in the Admin Manual.
Most peer configuration happens during initial deployment of the cluster:
1. When you enable the peer, you specify its master, as well as the port on which it receives replicated data. See "Enable the peer nodes".
2. After you enable the set of peers, you configure their indexes, if necessary. See "Configure the peer indexes".
3. Finally, you configure their inputs, usually by means of forwarders. See "Use forwarders to get your data".
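As a sketch of step 1, you can enable a peer from the CLI with the edit cluster-config command. The master URI, replication port, and secret below are placeholder values; see "Enable the peer nodes" for the authoritative procedure.

```shell
# Run on the indexer you want to enable as a peer (values are examples only).
splunk edit cluster-config -mode slave \
    -master_uri https://master.example.com:8089 \
    -replication_port 9100 \
    -secret yoursecretkey

# The peer must be restarted for the change to take effect.
splunk restart
```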
These are the key steps in configuring a peer. However, you might also need to update the initial peer configurations later, as with any indexer. For example, to configure event processing, you might need to edit
transforms.conf. You also might need to distribute apps to the peers. This topic tells you how to update a peer's configuration post-deployment.
As a best practice, you should treat your peers as interchangeable, and therefore you should maintain identical versions of most configuration files across all peers. For the cluster to properly replicate data and handle node failover, peers must share the same indexing functionality, and they cannot do this if certain key files vary from peer to peer. In particular, the set of index stanzas in
indexes.conf should ordinarily be identical across all peers, aside from very limited exceptions described later. In addition, it's important that index-time processing be the same across the peers.
This topic describes how to maintain identical configurations across the set of peers. It also describes how to set configurations on a peer-by-peer basis when necessary.
Manage common configurations across all peers
As far as possible, you should use a common set of configuration files across all peers in a cluster. In particular, all
indexes.conf files should ordinarily be identical across all the peer nodes, because the peers must share the same set of clustered indexes. It is also a good idea to maintain the same props.conf and transforms.conf files across all peers, so that index-time processing is the same. Beyond these three key files, you can greatly simplify cluster management by maintaining identical versions of most other configuration files across all peers.
Because apps often contain versions of those same configuration files, for most purposes you should also distribute apps to all peers in the cluster, rather than installing them individually on single peers. For details on this, see "How to distribute apps to all the peers" later in this topic.
Configuration management for peers compared to standalone indexers
Important: There are a few crucial differences in how you manage common peer configuration files compared to configurations for standalone indexers:
- Do not make configuration changes on individual peers that will modify configurations you need to maintain on a cluster-wide basis. For example, do not use Splunk Web or the CLI to configure index settings on peers.
- Do not edit cluster-wide configuration files, like
indexes.conf, directly on the peers. Instead, edit the files on the master and distribute them via the configuration bundle method discussed later in this topic and described in detail in "Update common peer configurations". This is the only way to ensure that all peers use the same versions of these files.
- Do not use deployment server to manage common configuration files across peer nodes. Instead, use the configuration bundle method. See the note below for more information.
Note: Neither the deployment server nor any third party deployment tool (such as Puppet or CFEngine, among others) is supported as a means to distribute configurations or apps to cluster peers. To distribute configurations across the set of cluster peers, use the configuration bundle method instead. This method involves placing peer apps and configurations on the master node, and then using the master to distribute those apps to the peer nodes in a coordinated fashion. If desired, you can use deployment server or third-party tools to first place the peer apps on the master node. See "Update common peer configurations" for more information.
Configure indexes.conf for all peers
For the purposes of replicating data, it's critical that the set of clustered indexes defined in the
indexes.conf files be identical across all peers in a cluster. For information on how to configure this file, read the topic "Configure the peer indexes".
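For illustration, a clustered index stanza in the common indexes.conf might look like the following. The index name and paths here are hypothetical; the key point is that repFactor = auto marks the index for replication, and the stanza is identical on every peer.

```ini
# Identical on every peer; distributed via the configuration bundle.
[firewall_data]
homePath   = $SPLUNK_DB/firewall_data/db
coldPath   = $SPLUNK_DB/firewall_data/colddb
thawedPath = $SPLUNK_DB/firewall_data/thaweddb
repFactor  = auto
```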
Note: Under limited circumstances (for example, to perform local testing or monitoring), you might want to add an index to one peer but not the others. You can do this by creating a single-peer
indexes.conf, so long as you're careful about how you configure the index and are clear about the ramifications. The data in such an index will not get replicated. The single-peer
indexes.conf supplements, but does not replace, the common version of the file that all peers get. You can similarly maintain single-peer apps, if necessary. See "Add an index to a single peer", later in this topic, for details.
How to distribute updated configurations to all the peers
To distribute new or edited configuration files across all the peers, you first place them in a special directory on the master,
$SPLUNK_HOME/etc/master-apps. You then tell the master to distribute the files in that directory to the peer nodes. The set of configuration files common to all peers, which is managed from the master and distributed to the peers in a single operation, is known as the configuration bundle.
Your use of this distribution method is not limited to files like indexes.conf and transforms.conf. You can use the same method to distribute any identical configuration files to all peers. For example, if all your peers are able to share a single set of inputs, you can use this method to distribute a common
inputs.conf file to all peers.
For detailed information on using the configuration bundle method to distribute configurations across all peers, read the topic "Update common peer configurations".
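In outline, and assuming the standard bundle layout (confirm the details in "Update common peer configurations"), distribution looks like this:

```shell
# On the master: place standalone common .conf files under the special
# _cluster/local directory within the configuration bundle location.
cp indexes.conf $SPLUNK_HOME/etc/master-apps/_cluster/local/

# Then push the entire bundle to all peers in one coordinated operation.
splunk apply cluster-bundle
```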
Note: Although it's possible to distribute updates through some other distribution method, doing so is not recommended. When you distribute updates via the configuration bundle method, the master orchestrates the distribution to ensure that all peers use the same set of configurations, including the same set of clustered indexes. If you use another distribution method, you must take particular care to ensure that settings for any new clustered indexes are successfully distributed to all peers, and that all the peers have been reloaded, before you start sending data to the new indexes.
How to distribute apps to all the peers
To distribute apps across all the peers, you first place them in a special directory on the master,
$SPLUNK_HOME/etc/master-apps, the same as when you distribute individual configuration files. The apps become part of the configuration bundle. When you're ready, you tell the master to distribute the entire configuration bundle to the peer nodes.
Important: Before distributing the apps, inspect them for
indexes.conf files. For each index defined in an app-specific
indexes.conf file, you must explicitly set
repFactor=auto, so that the index gets replicated across the cluster peers. See "The indexes.conf repFactor attribute" for more information.
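For example, an index stanza inside an app's indexes.conf (the index name here is hypothetical) should set repFactor explicitly:

```ini
# In the app's default/indexes.conf or local/indexes.conf
[app_index]
homePath   = $SPLUNK_DB/app_index/db
coldPath   = $SPLUNK_DB/app_index/colddb
thawedPath = $SPLUNK_DB/app_index/thaweddb
# Required for replication; the default of 0 would leave the
# app's data unreplicated.
repFactor  = auto
```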
For details of how to prepare and distribute apps to all the peers, read the topic "Update common peer configurations".
Once an app has been distributed to the set of peers, you launch it on each peer in the usual manner, with Splunk Web. See the chapter "Meet Splunk apps" in the Admin Manual.
Important: When it comes time to access an app, you do so from the search head, not from an individual peer. Therefore, you must also install the app on the search head. On the search head, put the app in the conventional location for apps; that is, under the $SPLUNK_HOME/etc/apps directory.
Manage configurations on a peer-by-peer basis
You might need to handle some configurations on a peer-by-peer basis. For most purposes, however, it's better to use the same configurations across all peers, so that the peers are interchangeable.
Configure data inputs
It's recommended that you use forwarders to handle data inputs to peers. For information on configuring this process, read "Use forwarders to get your data".
If you want to input data directly to a peer, without a forwarder, you can configure your inputs on the peer in the same way as for any indexer. For more information, read "Configure your inputs" in the Getting Data In Manual.
Important: Although you can configure inputs on a peer-by-peer basis, consider whether your needs allow you to use a single set of inputs across all peers. This should be possible if all data is channeled through forwarders, and the receiving ports on all peers are the same. If so, you can use the master node to manage a common
inputs.conf file, as described in "Update common peer configurations".
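Assuming all peers listen for forwarder traffic on the same receiving port (9997 below is a conventional example, not a requirement), the common inputs.conf can be as simple as:

```ini
# Common inputs.conf distributed from the master: every peer
# listens for forwarder data on the same port.
[splunktcp://9997]
disabled = 0
```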
Add an index to a single peer
If you need to add an index to a single peer, you can do so by creating a separate
indexes.conf file on the peer. However, the data in the new index will remain only on that peer and will not get replicated. The main use case for this is to perform some sort of local testing or monitoring, possibly involving an app that you download to only that one peer. The peer-specific
indexes.conf supplements, but does not replace, the common version of the file that all peers get.
If you create a version of
indexes.conf for a single peer, you can put it in any of the acceptable locations for an indexer, as discussed in "About configuration files" and "Configuration file directories" in the Admin Manual. The one place where you cannot put the file is under
$SPLUNK_HOME/etc/slave-apps, which is where the configuration bundle resides on the peer. If you put it there, it will get overwritten the next time the peer downloads a configuration bundle.
Important: If you add a local index, leave its
repFactor attribute set to the default value of 0. Do not set it to
auto. If you set it to
auto, the peer will attempt to replicate the index's data to other peers in the cluster. Since the other peers won't be configured for the new index, there will be nowhere on those peers to store the replicated data, resulting in various, potentially serious, problems. In addition, when the master next attempts to push a configuration bundle to the peers, the peer with the incorrectly configured index will return a bundle validation error to the master, preventing the master from successfully applying the bundle to the peers.
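A single-peer index stanza might look like the following (the index name and paths are hypothetical). Note that repFactor is simply left at its default of 0, and the file lives in a peer-local location such as $SPLUNK_HOME/etc/system/local, never under $SPLUNK_HOME/etc/slave-apps:

```ini
# Local, unreplicated index on one peer only
# (e.g. in $SPLUNK_HOME/etc/system/local/indexes.conf).
[local_test]
homePath   = $SPLUNK_DB/local_test/db
coldPath   = $SPLUNK_DB/local_test/colddb
thawedPath = $SPLUNK_DB/local_test/thaweddb
# Leave repFactor at its default of 0; do NOT set it to auto.
```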
Make other configuration changes
If you need to make some other configuration changes specific to an individual peer, you can configure the peer in the usual way for any Splunk Enterprise instance, clustered or not. You can use Splunk Web or the CLI, or you can directly edit configuration files.
Restart the peer
As with any indexer, you frequently need to restart a peer after you change its configuration. Unlike with a non-clustered indexer, however, you should not use the CLI
splunk restart command to restart the peer. Instead, use the restart capability in Splunk Web. For detailed information on how to restart a cluster peer, read "Restart a cluster peer".
Some configuration changes do not require a restart. For information on configuration changes that only require a reload, see "Restart or reload?".
This documentation applies to the following versions of Splunk® Enterprise: 6.0, 6.0.1, 6.0.2, 6.0.3, 6.0.4, 6.0.5, 6.0.6, 6.0.7, 6.0.8, 6.0.9, 6.0.10, 6.0.11, 6.0.12, 6.0.13, 6.0.14, 6.0.15