Splunk® Enterprise

Managing Indexers and Clusters of Indexers


Configure the peer nodes

Important: Before reading this topic, you should understand how configuration files work in Splunk. Read "About configuration files" and the topics that follow it in the Admin Manual.

Most peer configuration happens during initial deployment of the cluster:

1. When you enable the peer, you specify its master, as well as the port on which it receives replicated data. See "Enable the peer nodes". (A command-line sketch of this step appears after this list.)

2. After you enable the set of peers, you configure their indexes. See "Configure the peer indexes".

3. Finally, you configure their inputs, usually by means of forwarders. See "Use forwarders to get your data".
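For reference, step 1 usually comes down to a single CLI command run on each peer, followed by a restart. This is only a sketch; the master URI, replication port, and secret key below are placeholders, and the full procedure is in "Enable the peer nodes":

    splunk edit cluster-config -mode slave -master_uri https://<master-host>:8089 -replication_port 9887 -secret <your_key>
    splunk restart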

These are the key steps in configuring a peer. However, you might also need to update the initial peer configurations later, as with any indexer. For example, to configure event processing, you might need to edit props.conf and transforms.conf. This topic tells you how to update a peer's configuration post-deployment.

As a best practice, you should treat your peers as interchangeable, and therefore you should maintain identical versions of most configuration files across all peers. For the cluster to properly replicate data and handle node failover, peers must share the same indexing functionality, and they cannot do this if certain key files vary from peer to peer. In particular, the set of index stanzas in indexes.conf should ordinarily be identical across all peers, aside from very limited exceptions described later. In addition, it's important that index-time processing be the same across the peers.

This topic describes how to maintain identical configurations across the set of peers. It also describes how to set configurations on a peer-by-peer basis when necessary.

Manage common configurations across all peers

As far as possible, you should use a common set of configuration files across all peers in a cluster. In particular, the indexes.conf file should ordinarily be identical across all the peer nodes, because all peers must share the same set of clustered indexes. It is also a good idea to maintain the same props.conf and transforms.conf files across all peers. Beyond these three key files, you can greatly simplify cluster management by maintaining identical versions of most other configuration files, such as inputs.conf, across all peers.
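To illustrate the kind of settings involved, here is a minimal index-time configuration of the sort typically kept identical across all peers. The sourcetype, regex, and transform name are placeholders, not settings required by clustering; they simply show props.conf and transforms.conf working together at index time:

    # props.conf
    [syslog]
    TRANSFORMS-null = setnull

    # transforms.conf
    [setnull]
    REGEX = ^\s*DEBUG
    DEST_KEY = queue
    FORMAT = nullQueue

If one peer were missing this pair of stanzas, it would index the same incoming data differently from the other peers, which is exactly the inconsistency that a common set of configuration files prevents.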

Important: There are a few crucial differences in how you manage common peer configuration files compared to configurations for standalone indexers:

  • Do not make configuration changes on individual peers that will modify configurations you need to maintain on a cluster-wide basis. For example, do not use Manager or the CLI to configure index settings, and do not edit those configuration files directly on the peers. Instead, edit the configuration files in a central location from which they can be distributed to all peers, using the configuration bundle method described later in this topic. This is the only way to ensure that all peers use the same versions of these files.
  • Do not use deployment server to manage common configuration files, like indexes.conf, across peer nodes. Instead, use the configuration bundle method introduced in this topic and described in detail in "Update common peer configurations". This update method ensures that all the peers share the same versions of those files.
  • On a standalone Splunk instance, there can be multiple versions of most configuration files, residing in numerous locations, as described in "About configuration files" in the Admin Manual. Splunk layers the versions according to the precedence rules discussed in the topic "Configuration file precedence". For configurations that you want to be identical across all peers, however, you must consolidate all non-default layers; otherwise, multiple file versions can undermine your attempt to create a single cross-peer configuration. When updating a particular configuration file across the set of peers, first consolidate all non-default versions of that file into a single file that you can then distribute to all the peers.
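For example, before distributing a file such as props.conf, you might find its settings spread across several locations on an existing indexer. The consolidation might look something like the following sketch; the app name is a placeholder, and the exact location of the configuration bundle on the master is described in "Update common peer configurations":

    Scattered versions on a standalone indexer:
        $SPLUNK_HOME/etc/system/local/props.conf
        $SPLUNK_HOME/etc/apps/<app-name>/local/props.conf

    Consolidated into a single file on the master, ready for distribution:
        $SPLUNK_HOME/etc/master-apps/<app-name>/local/props.conf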

Configure indexes.conf for all peers

For the purposes of replicating data, it's critical that the set of clustered indexes in the indexes.conf file be identical across all peers in a cluster. For information on how to configure this file, read the topic "Configure the peer indexes".
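As a sketch, a clustered index stanza in the common indexes.conf typically sets repFactor to auto so that its data is replicated; the index name and paths below are placeholders:

    [my_clustered_index]
    homePath   = $SPLUNK_DB/my_clustered_index/db
    coldPath   = $SPLUNK_DB/my_clustered_index/colddb
    thawedPath = $SPLUNK_DB/my_clustered_index/thaweddb
    repFactor  = auto

Because every peer needs a stanza for every clustered index, you distribute this file through the configuration bundle rather than editing it on individual peers.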

Note: Under limited circumstances (for example, to perform local testing or monitoring), you might want to add an index (or an app with a new index) to one peer but not the others. You can do this by creating a peer-specific indexes.conf, so long as you're careful about how you configure the index and are clear about the ramifications. The data in such an index will not get replicated. The peer-specific indexes.conf supplements, but does not replace, the common version of the file that all peers get. See "Add an index to a single peer", later in this topic, for details.

How to distribute updated configurations to all the peers

To distribute new or edited configuration files across all the peers, you first place them in a special location on the master. You then tell the master to distribute the files to the peer nodes. The set of configuration files common to all peers, which is managed from the master and distributed to the peers in a single operation, is known as the configuration bundle.
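In outline, an update therefore looks something like the following sketch, run on the master. The app name is a placeholder, and the exact bundle layout and any command options are covered in "Update common peer configurations":

    # Place the edited file into the configuration bundle on the master
    cp indexes.conf $SPLUNK_HOME/etc/master-apps/<app-name>/local/

    # Tell the master to distribute the bundle to all peer nodes
    splunk apply cluster-bundle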

Your use of this distribution method is not limited to indexes.conf, props.conf, and transforms.conf. You can use the same method to distribute any identical configuration files to all peers. For example, if all your peers are able to share a single set of inputs, you can use this method to distribute a common inputs.conf file to all peers.

For details of how to edit and update configurations across all the peers, read the topic "Update common peer configurations".

Note: Although it's possible to distribute common peer configuration files through some other distribution method, doing so is not recommended. When you distribute via the master node, as described in "Update common peer configurations", the master orchestrates the distribution to ensure that all peers use the same set of configuration files, including the same set of clustered indexes. If you use another distribution method, you must take particular care to ensure that the settings for any new clustered indexes reach all peers, and that all the peers have been restarted, before you start sending data to the new indexes.

Manage configurations on a peer-by-peer basis

You might need to handle some configurations on a peer-by-peer basis. For most purposes, however, it's better to use the same configurations across all peers, so that the peers are interchangeable.

Configure data inputs

It's recommended that you use forwarders to handle data inputs to peers. For information on configuring this process, read "Use forwarders to get your data".

If you want to input data directly to a peer, without a forwarder, you can configure your inputs on the peer in the same way as for any indexer. For more information, read "Configure your inputs" in the Getting Data In Manual.

No matter what method you use to configure your inputs, all configurations get written to an inputs.conf file.

Important: Although you can configure inputs on a peer-by-peer basis, consider whether your needs would allow you to use a single set of inputs across all peers. This should be possible if all data is being channeled through forwarders, and the receiving ports on all peers can be the same. If so, you can use the master node to manage a common inputs.conf file, as described in "Update common peer configurations".
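For example, if every peer listens for forwarded data on the same receiving port, the shared inputs.conf in the configuration bundle might need nothing more than a single stanza like the following (the port number is a placeholder; 9997 is simply the conventional choice):

    [splunktcp://9997]
    disabled = 0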

Add an index to a single peer

If you need to add an index to a single peer, you can do so by creating a separate indexes.conf file on the peer. However, the data in the new index will remain only on that peer and will not get replicated. The main use case for this is to perform some sort of local testing or monitoring, possibly involving an app that you download to only that one peer. The peer-specific indexes.conf supplements, but does not replace, the common version of the file that all peers get.

If you create a new version of indexes.conf for a single peer, you can put it in any of the acceptable locations for an indexer, as outlined in "About configuration files" in the Admin Manual. The one place where you cannot put the file is under $SPLUNK_HOME/etc/slave-apps, which is where the configuration bundle resides on the peer. If you put it there, it will get overwritten the next time the peer downloads a configuration bundle.

If an indexes.conf file gets added as part of an app that you've downloaded to a single peer, it will go into the usual location for downloaded apps, under $SPLUNK_HOME/etc/apps/.

Important: If you add a local index, leave its repFactor attribute set to the default value of 0. Do not set it to auto. If you set it to auto, the index's data will be replicated to other peers in the cluster. Since the other peers will not have a configuration stanza for the new index, there will be nowhere on those peers to store the replicated data, resulting in various, potentially serious, problems.
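A peer-specific indexes.conf for such a local index might therefore look like the following sketch, with the index name and paths as placeholders:

    [local_test_index]
    homePath   = $SPLUNK_DB/local_test_index/db
    coldPath   = $SPLUNK_DB/local_test_index/colddb
    thawedPath = $SPLUNK_DB/local_test_index/thaweddb
    # repFactor defaults to 0; do not set it to auto for a peer-only index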

Make other configuration changes

If you need to make some configuration changes specific to individual peers, you can configure those peers in the same way as any other Splunk instance. You can use Manager or the CLI, or you can directly edit the configuration files.

Restart the peer

As with any indexer, you usually need to restart a peer after you change its configuration. Unlike with a non-clustered indexer, however, you should not use the CLI splunk restart command to restart the peer. Instead, use the restart capability in Splunk Manager. For detailed information on how to restart a cluster peer, read "Restart a cluster peer".
