Update common peer configurations and apps
The peer update process described in this topic ensures that all peer nodes share a common set of key configuration files. You must use this method to distribute and update common files, including apps, to the peer nodes.
Note: For information on peer configuration issues, see "Configure the peer nodes". That topic details exactly which files should be identical across all peers. In brief, the key configuration files that should be identical in most circumstances include transforms.conf. Other configuration files can also be identical, depending on the needs of your system. Because apps usually contain versions of those key files, you should also generally maintain a common set of apps across all peers.
To distribute new or edited configuration files or apps across all the peers, you first place them in a special location on the master. You then tell the master to distribute the files in that location to the peer nodes. The set of configuration files and apps common to all peers, which is managed from the master and distributed to the peers as a single operation, is known as the configuration bundle.
Structure of the configuration bundle
You put new or modified configuration bundle files in a special location on the master node. You then tell the master to distribute the configuration bundle to a parallel location on each of the peer nodes.
On the master
On the master, the configuration bundle resides under the
$SPLUNK_HOME/etc/master-apps directory. The set of files under that directory constitutes the configuration bundle. The files are always distributed as a group to all the peers. The directory has this structure:
$SPLUNK_HOME/etc/master-apps/
    _cluster/
        default/
        local/
    <app-name>/
    <app-name>/
    ...
Note the following:
- The /_cluster directory is a special location for configuration files that need to be distributed across all peers:
- The /_cluster/default subdirectory contains a default version of indexes.conf. Do not add any files to this directory and do not change any files in it.
- The /_cluster/local subdirectory is where you put new or edited configuration files that you want to distribute to the peers.
- Note: In Splunk versions 5.0 and 5.0.1, the /_cluster directory was named /cluster (no underscore). When you upgrade a 5.0/5.0.1 master node to 5.0.2 or later, its /cluster directory is automatically renamed to /_cluster. When you then restart the master after the upgrade completes, it performs a rolling restart of its peer nodes and pushes the new bundle, with the renamed /_cluster directory, to the peers. The slave-apps directory on all the peer nodes (including any 5.0/5.0.1 peers) will then contain the renamed directory.
- The /<app-name> subdirectories are optional. They provide a way to distribute any app to the peer nodes. Create and populate them as needed. For example, to distribute "appBestEver" to the peer nodes, place a copy of that app in its own subdirectory: $SPLUNK_HOME/etc/master-apps/appBestEver.
- The master only pushes the contents of subdirectories under master-apps. It does not push any standalone files located directly under master-apps. For example, it will not push the standalone file /master-apps/file1. Therefore, be sure to place any standalone configuration files in the /_cluster/local subdirectory, not directly under master-apps.
You tell the master explicitly when you want it to distribute the latest configuration bundle to the peers. In addition, when a peer registers with the master (for example, upon its initial start-up), the master distributes the latest configuration bundle to it.
Important: When the master distributes the bundle to the peers, it distributes the entire bundle, overwriting the entire contents of any configuration bundle previously distributed to the peers.
The master-apps location is only for files that you will be distributing to the peer nodes. The master does not use the files in that directory for its own configuration needs.
On the peers
On the peers, the distributed configuration bundle resides under
$SPLUNK_HOME/etc/slave-apps. This directory is created soon after a peer is enabled, when the peer initially gets the latest bundle from the master.
Except for the different name for the top-level directory, the structure and contents of the configuration bundle are the same as on the master:
$SPLUNK_HOME/etc/slave-apps/
    _cluster/
        default/
        local/
    <app-name>/
    <app-name>/
    ...
Important: Leave the downloaded files in this location and do not edit them. If you later redistribute an updated version of a configuration file or app, it will overwrite any earlier version in
$SPLUNK_HOME/etc/slave-apps. You want this to occur, because all peers in the cluster must be using the same versions of the files in that directory.
For the same reason, do not add any files or subdirectories directly to
$SPLUNK_HOME/etc/slave-apps. The directory gets overwritten each time the master redistributes the configuration bundle.
When Splunk evaluates configuration files, the files in the
$SPLUNK_HOME/etc/slave-apps/[_cluster|<app-name>]/local subdirectories have the highest precedence. For information on configuration file precedence, see "Configuration file precedence" in the Admin manual.
Limitations on cluster apps
Apps distributed to peers as part of the configuration bundle are subject to certain limitations.
Cluster apps support configurations only for certain phases of the data pipeline. They do not support these configurations and components:
- Search phase configurations
- Splunk Web components (You cannot, for example, access these apps through Splunk Web.)
Note: The master distributes the entire contents of the app directories, including any search configurations and Splunk Web components. The peer nodes are just not able to access those configurations and components.
For detailed information on the phases of the data pipeline and the configuration files and attributes pertinent to each phase, read "Configuration parameters and the data pipeline".
Distribute the configuration bundle
To distribute new or changed files and apps across all peers, do the following:
1. Prepare the files and apps and test them.
2. Move the files and apps into the configuration bundle on the master node.
3. Tell the master to apply the bundle to the peers.
The master pushes the entire bundle to the peers. This overwrites the contents of the peers' current bundle.
1. Prepare the files and apps for the configuration bundle
Make the necessary edits to the files you want to distribute to the peers. You should then test the files, along with any apps, on a standalone test instance of Splunk to confirm that they work correctly before distributing them to the peers. To minimize peer node downtime, try to complete work on all files before proceeding to the next step.
Important: If the configuration bundle subdirectories contain any indexes.conf files that define new indexes, you must explicitly set each index's repFactor attribute to auto. This is necessary both for indexes.conf files that reside in app subdirectories and for any indexes.conf in the _cluster subdirectory. See "The indexes.conf repFactor attribute" for details.
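For example, a minimal indexes.conf stanza for a new index might look like this. The index name and paths here are illustrative stand-ins, not values from this manual:

```ini
# Hypothetical new index; only repFactor = auto is mandated by the text above.
[my_new_index]
homePath   = $SPLUNK_DB/my_new_index/db
coldPath   = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb
repFactor  = auto
```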
2. Move the files to the master node
When you're ready to distribute the files and apps, copy them to
$SPLUNK_HOME/etc/master-apps/ on the master:
- Put apps directly under that directory, each in its own subdirectory. For example: $SPLUNK_HOME/etc/master-apps/<app-name>.
- Put standalone configuration files in $SPLUNK_HOME/etc/master-apps/_cluster/local.
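The copy step can be sketched as follows. This is an illustration against scratch directories, not a real installation: the SPLUNK_HOME value, the staging directory, and the staged file names are stand-ins.

```shell
# Illustration only: scratch directories stand in for a real master node.
SPLUNK_HOME=$(mktemp -d)
STAGING=$(mktemp -d)

# Recreate the master-apps skeleton, a staged app, and a standalone file.
mkdir -p "$SPLUNK_HOME/etc/master-apps/_cluster/local"
mkdir -p "$STAGING/appBestEver/local"
touch "$STAGING/transforms.conf"

# Apps go directly under master-apps, each in its own subdirectory:
cp -r "$STAGING/appBestEver" "$SPLUNK_HOME/etc/master-apps/"

# Standalone configuration files go in _cluster/local:
cp "$STAGING/transforms.conf" "$SPLUNK_HOME/etc/master-apps/_cluster/local/"

ls "$SPLUNK_HOME/etc/master-apps"
```

On a real master, you would copy into the actual $SPLUNK_HOME/etc/master-apps tree instead of a scratch directory.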
3. Apply the bundle to the peers
To apply the configuration bundle to the peers, run this CLI command on the master:
splunk apply cluster-bundle
It responds with this warning message:
Warning: This command will automatically restart all peers. Do you wish to continue? [y/n]:
To proceed, respond with y. You can avoid this prompt entirely by appending the --answer-yes flag to the command:
splunk apply cluster-bundle --answer-yes
The apply cluster-bundle command causes the master to distribute the new configuration bundle to the peers, which then individually validate the bundle. "Bundle validation" means that each peer validates the settings of all indexes.conf files in the bundle. If all peers successfully validate the bundle, the master then coordinates a rolling restart of all the peer nodes, so that the validated bundle takes effect.
The download and validation process usually takes just a few seconds. If any peer is unable to validate the bundle, it sends a message to the master, and the master displays the error on its Clustering page in Manager. The command does not continue to the next phase (restarting the peers) unless all peers successfully validate the bundle.
Once validation is complete, the master initiates a rolling restart of all the peers. The master issues a restart message to approximately 10% of the peer nodes at a time (or one peer at a time, if you have fewer than 10 peers in your cluster). Once those peers restart and contact the master, the master issues a restart message to the next 10% of the peers, and so on, until all the peers have restarted. This method helps ensure that load-balanced forwarders sending data to the cluster always have a peer available to receive the data.
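The batch sizing described above can be sketched as follows. This is only an illustration of the roughly-10%, minimum-one rule, not the master's actual implementation:

```shell
# Illustration only: approximate rolling-restart batch size
# (about 10% of peers, but always at least one peer at a time).
batch_size() {
  peers=$1
  batch=$(( peers / 10 ))
  if [ "$batch" -lt 1 ]; then batch=1; fi
  echo "$batch"
}

batch_size 5    # prints 1: fewer than 10 peers, so one at a time
batch_size 40   # prints 4: about 10% per batch
```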
When the peers restart, they use the new set of configurations, located in their local $SPLUNK_HOME/etc/slave-apps directory.
Important: Leave the files in $SPLUNK_HOME/etc/slave-apps and do not edit them; the master overwrites that directory each time it redistributes the bundle.
View the status of the bundle update process
To see how the cluster bundle update is proceeding, run this command from the master:
splunk show cluster-bundle-status
This command will tell you whether bundle validation succeeded or failed. It will also indicate the restart status of each peer.
Distribution of the bundle when a peer starts up
After you initially configure a Splunk instance as a peer node, you need to restart it manually in order for it to join the cluster, as described in "Enable the peer nodes". During this restart, the peer connects with the master, downloads the current configuration bundle, validates the bundle locally, and then restarts again. The peer will only join the cluster if bundle validation succeeds. This same process also occurs when an offline peer comes back online.
If validation fails, you must fix the errors and then rerun splunk apply cluster-bundle from the master.
This documentation applies to the following versions of Splunk® Enterprise: 5.0.2, 5.0.3, 5.0.4