Splunk® Enterprise

Managing Indexers and Clusters of Indexers


Splunk Enterprise version 5.0 reached its End of Life on December 1, 2017. Please see the migration information.

Update common peer configurations

The peer update process described in this topic ensures that all peer nodes share a common set of key configuration files. You must use this method to distribute and update common files to the peer nodes.

Note: For information on peer configuration issues, see "Configure the peer nodes". That topic details exactly which files should be identical across all peers. In brief, the key configuration files that should be identical in most circumstances are indexes.conf, props.conf, and transforms.conf. Other configuration files can also be identical, depending on the needs of your system.
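For example, a minimal indexes.conf stanza of the kind that must be identical on every peer. The index name and paths here are purely illustrative, and repFactor = auto is the clustering setting that marks an index for replication:

```ini
# Hypothetical index definition -- every peer must see the same stanza.
[firewall_logs]
homePath   = $SPLUNK_DB/firewall_logs/db
coldPath   = $SPLUNK_DB/firewall_logs/colddb
thawedPath = $SPLUNK_DB/firewall_logs/thaweddb
repFactor  = auto
```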

To distribute new or edited configuration files across all the peers, you first place them in a special location on the master. You then tell the master to distribute the files in that location to the peer nodes. The set of configuration files common to all peers, which is managed from the master and distributed to the peers as a single operation, is known as the configuration bundle.

Structure of the configuration bundle

You put new or modified configuration bundle files in a special location on the master node. You then tell the master to distribute the configuration bundle to a parallel location on each of the peer nodes.

On the master

On the master, the configuration bundle resides under the $SPLUNK_HOME/etc/master-apps directory. The set of files under that directory constitutes the configuration bundle. They are always distributed as a group to all the peers. The directory has this structure:

    $SPLUNK_HOME/etc/master-apps/
        cluster/
            default/
            local/

Note the following:

  • The /cluster directory is a special location for configuration files that need to be distributed across all peers:
    • The /cluster/default subdirectory contains a default version of indexes.conf. Do not add any files to this directory and do not change any files in it.
    • The /cluster/local subdirectory is where you put new or edited configuration files that you want to distribute to the peers.
  • The master distributes only the /cluster/default and /cluster/local directories to the peers. Do not put any other directories under /master-apps, because the master will not distribute them.
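The staging step can be sketched in shell. For illustration, SPLUNK_HOME here points at a scratch directory so the sketch is safe to run anywhere; on a real master it is the Splunk installation root, and the index stanza is hypothetical:

```shell
# Sketch: staging an edited indexes.conf for distribution to the peers.
# SPLUNK_HOME is redirected to a scratch directory for this illustration.
SPLUNK_HOME="$(mktemp -d)"

# New or edited files go under master-apps/cluster/local:
mkdir -p "$SPLUNK_HOME/etc/master-apps/cluster/local"
printf '[firewall_logs]\nhomePath = $SPLUNK_DB/firewall_logs/db\n' \
  > "$SPLUNK_HOME/etc/master-apps/cluster/local/indexes.conf"

# The file is now in place for the master to distribute:
ls "$SPLUNK_HOME/etc/master-apps/cluster/local"
```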

You tell the master explicitly when you want it to distribute the latest configuration bundle to the peers. In addition, when a peer registers with the master (for example, upon its initial start-up), the master distributes the latest configuration bundle to it.

Note: The master-apps location is only for files that you'll be distributing to the peer nodes. The master does not use the files in that directory for its own configuration needs.

On the peers

On the peers, the distributed configuration bundle resides under $SPLUNK_HOME/etc/slave-apps. This directory is created soon after a peer is enabled, when the peer initially gets the latest bundle from the master. Except for the different name for the top-level directory, the structure and contents of the configuration bundle are the same as on the master:

    $SPLUNK_HOME/etc/slave-apps/
        cluster/
            default/
            local/

Leave the downloaded files in this location. If you later redistribute an updated version of a configuration file, it overwrites any earlier version in $SPLUNK_HOME/etc/slave-apps. This is the desired behavior, because all peers in the cluster must use the same versions of the files in that directory.

When Splunk evaluates configuration files, the files in $SPLUNK_HOME/etc/slave-apps have the highest precedence. For information on configuration file precedence, see "Configuration file precedence" in the Admin manual.

Distribute the configuration bundle

To edit and distribute files across all peers, you:

1. Edit the files and test them, preferably on a standalone instance of Splunk.

2. Move the files to the master node.

3. Tell the master to apply the bundle to the peers.

The master pushes the entire bundle to the peers. This overwrites the contents of the peers' current bundle.

1. Edit the files in the configuration bundle

First, edit copies of the files you want to distribute to the peers. Edit and test the files on a standalone test instance of Splunk to confirm that they work correctly before distributing them to the set of peers. To minimize peer node downtime, complete all edits for all files before proceeding to the next step.

Read the topics "Configure the peer nodes" and "Configure the peer indexes" for information on how to correctly configure the files.
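As an illustration of files that must stay in sync across peers, here is a hypothetical props.conf/transforms.conf pair that routes comment lines to the null queue at index time. The sourcetype and stanza names are invented for this example; both files would go in cluster/local so that every peer applies the same transform:

```ini
# props.conf -- hypothetical sourcetype whose comment lines are discarded
[my_sourcetype]
TRANSFORMS-null = setnull

# transforms.conf -- the transform referenced above
[setnull]
REGEX = ^\s*#
DEST_KEY = queue
FORMAT = nullQueue
```

If only one file of such a pair were updated on the peers, events could be transformed inconsistently across the cluster, which is why the bundle is always distributed as a group.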

2. Move the files to the master node

When you're ready to distribute the files, copy them to this directory on the master: $SPLUNK_HOME/etc/master-apps/cluster/local.

3. Apply the bundle to the peers

When you're ready to apply the updated configuration file(s) to the peers, run this CLI command on the master:

splunk apply cluster-bundle

It responds with this warning message:

Warning: This command will automatically restart all peers. Do you wish to continue? [y/n]:

To proceed, you need to respond with y. You can avoid this message entirely by appending the flag --answer-yes to the command:

splunk apply cluster-bundle --answer-yes

The apply cluster-bundle command causes the master to distribute the new configuration bundle to the peers, which then individually validate the bundle. If all peers successfully validate the bundle, the master then coordinates a rolling restart of all the peer nodes, so that the validated bundle takes effect.

The download and validation process usually takes just a few seconds. If any peer is unable to validate the bundle, it sends a message to the master, and the master displays the error on its Clustering page in Manager. The command does not continue to the next phase (restarting the peers) unless all peers successfully validate the bundle.

Once validation is complete, the master initiates a rolling restart of all the peers. The master issues a restart message to approximately 10% of the peer nodes at a time (or one peer at a time, if you have fewer than 10 peers in your cluster). Once those peers restart and contact the master, the master issues a restart message to another 10% of the peers, and so on, until all the peers have restarted. This method helps ensure that load-balanced forwarders sending data to the cluster always have a peer available to receive the data.
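The batch size works out to roughly one tenth of the cluster, with a floor of one peer. A quick sketch of one plausible reading of that arithmetic (the peer count is an invented example, and the exact rounding the master uses is an assumption here):

```shell
# Hypothetical cluster of 42 peers; about 10% restart per batch.
peers=42
batch=$(( peers / 10 ))
if [ "$batch" -lt 1 ]; then
  batch=1   # fewer than 10 peers: restart one at a time
fi
echo "$batch"   # prints 4
```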

When the peers restart, they will be using the new set of configurations, which will be located in their local $SPLUNK_HOME/etc/slave-apps.

Important: Leave the files in $SPLUNK_HOME/etc/slave-apps.

View the status of the bundle update process

To see how the cluster bundle update is proceeding, run this command from the master:

splunk show cluster-bundle-status

This command tells you whether bundle validation succeeded or failed. It also indicates the restart status of each peer.

Distribution of the bundle when a peer starts up

When a peer initially connects with the master, it downloads the current configuration bundle and validates it locally. The peer will only join the cluster if bundle validation succeeds. This process also occurs when an offline peer rejoins the cluster.

If validation fails, the user needs to fix the errors and run splunk apply cluster-bundle from the master.


This documentation applies to the following versions of Splunk® Enterprise: 5.0, 5.0.1
