
Update common peer configurations and apps
The peer update process described in this topic ensures that all peer nodes share a common set of key configuration files. You must use this method to distribute and update common files, including apps, to the peer nodes.
To distribute new or edited configuration files or apps across all the peers, you first place them in a special location on the master. You then tell the master to distribute the files in that location to the peer nodes. The set of configuration files and apps common to all peers, which is managed from the master and distributed to the peers as a single operation, is known as the configuration bundle.
Important: For information on peer configuration issues, see "Configure the peer nodes". That topic details exactly which files should be identical across all peers. In brief, the key configuration files that should be identical in most circumstances are indexes.conf, props.conf, and transforms.conf. Other configuration files can also be identical, depending on the needs of your system. Since apps usually contain versions of those key files, you should also generally maintain a common set of apps across all peers.
Note: Although you cannot use deployment server to directly distribute apps to the peers, you can use it to distribute apps to the master node's configuration bundle location. Once the apps are in that location, the master node can then distribute them to the peer nodes via the configuration bundle method described in this topic. See "Use deployment server to distribute the apps to the master node" in this topic.
Structure of the configuration bundle
The configuration bundle contains the set of files and apps that you want to distribute to all the peer nodes.
On the master
On the master, the configuration bundle resides under the $SPLUNK_HOME/etc/master-apps directory. The set of files under that directory constitutes the configuration bundle. They are always distributed as a group to all the peers. The directory has this structure:
$SPLUNK_HOME/etc/master-apps
     /_cluster
          /default
          /local
     /<app-name>
     /<app-name>
     ...
Note the following:
- The /_cluster directory is a special location for configuration files that need to be distributed across all peers:
  - The /_cluster/default subdirectory contains a default version of indexes.conf. Do not add any files to this directory and do not change any files in it. (Note: This peer-specific default indexes.conf has a higher precedence than the standard default indexes.conf, located under $SPLUNK_HOME/etc/system/default.)
  - The /_cluster/local subdirectory is where you can put new or edited configuration files that you want to distribute to the peers.
  - Note: In Splunk versions 5.0 and 5.0.1, the /_cluster directory was named /cluster (no underscore). When you upgrade a 5.0/5.0.1 master node to 5.0.2 or later, its /cluster directory is automatically renamed to /_cluster. When you then restart the master following completion of the upgrade, it performs a rolling restart on its peer nodes and pushes the new bundle, with the renamed /_cluster directory, to the peers. The slave-apps directory on all the peer nodes (including any 5.0/5.0.1 peers) will then contain the renamed directory.
- The /<app-name> subdirectories are optional. They provide a way to distribute any app to the peer nodes. Create and populate them as needed. For example, to distribute "appBestEver" to the peer nodes, place a copy of that app in its own subdirectory: $SPLUNK_HOME/etc/master-apps/appBestEver.
- To delete an app that you previously distributed to the peers, remove its directory from the configuration bundle. When you next push the bundle, the app will be deleted from each peer.
- The master pushes only the contents of subdirectories under master-apps. It will not push any standalone files located directly under master-apps. For example, it will not push the standalone file /master-apps/file1. Therefore, be sure to place any standalone configuration files in the /_cluster/local subdirectory.
You tell the master explicitly when you want it to distribute the latest configuration bundle to the peers. In addition, when a peer registers with the master (for example, upon its initial start-up), the master distributes the latest configuration bundle to it.
Important: When the master distributes the bundle to the peers, it distributes the entire bundle, overwriting the entire contents of any configuration bundle previously distributed to the peers.
Note: The master-apps location is only for files that you'll be distributing to the peer nodes. The master does not use the files in that directory for its own configuration needs.
On the peers
On the peers, the distributed configuration bundle resides under $SPLUNK_HOME/etc/slave-apps. This directory is created soon after a peer is enabled, when the peer initially gets the latest bundle from the master.
Except for the different name for the top-level directory, the structure and contents of the configuration bundle are the same as on the master:
$SPLUNK_HOME/etc/slave-apps
     /_cluster
          /default
          /local
     /<app-name>
     /<app-name>
     ...
Important: Leave the downloaded files in this location and do not edit them. If you later redistribute an updated version of a configuration file or app, it will overwrite any earlier version in $SPLUNK_HOME/etc/slave-apps. You want this to occur, because all peers in the cluster must be using the same versions of the files in that directory.
For the same reason, do not add any files or subdirectories directly to $SPLUNK_HOME/etc/slave-apps. The directory gets overwritten each time the master redistributes the configuration bundle.
When Splunk evaluates configuration files, the files in the $SPLUNK_HOME/etc/slave-apps/[_cluster|<app-name>]/local subdirectories have the highest precedence. For information on configuration file precedence, see "Configuration file precedence" in the Admin Manual.
Distribute the configuration bundle
To distribute new or changed files and apps across all peers, do the following:
1. Prepare the files and apps and test them.
2. Move the files and apps into the configuration bundle on the master node.
3. Tell the master to apply the bundle to the peers.
The master pushes the entire bundle to the peers. This overwrites the contents of the peers' current bundle.
1. Prepare the files and apps for the configuration bundle
Make the necessary edits to the files you want to distribute to the peers. Before distributing them to the peers, test the files, along with any apps, on a standalone Splunk instance to confirm that they work correctly. To minimize peer node downtime, complete work on all files before proceeding to the next step.
Read the topics "Configure the peer nodes" and "Configure the peer indexes" for information on how to correctly configure the files.
Important: If the configuration bundle subdirectories contain any indexes.conf files that define new indexes, you must explicitly set each index's repFactor attribute to auto. This is necessary for indexes.conf files that reside in app subdirectories, as well as any indexes.conf in the _cluster subdirectory. See "The indexes.conf repFactor attribute" for details.
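For example, a new index defined in a bundled indexes.conf (the index name here is a placeholder) must carry the attribute explicitly:

```ini
[my_clustered_index]
repFactor = auto
```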
2. Move the files to the master node
When you're ready to distribute the files and apps, copy them to $SPLUNK_HOME/etc/master-apps/ on the master:
- Put apps directly under that directory. For example, $SPLUNK_HOME/etc/master-apps/<app-name>.
- Put standalone files in $SPLUNK_HOME/etc/master-apps/_cluster/local.
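The staging step can be sketched as follows. This is illustrative only: the app name "appBestEver" and the index name "myindex" are placeholders, and SPLUNK_HOME defaults to a scratch directory for demonstration rather than a real installation.

```shell
# Illustrative only: stage files into the configuration bundle on the master.
# SPLUNK_HOME defaults to a scratch directory here; point it at your install.
SPLUNK_HOME="${SPLUNK_HOME:-/tmp/splunk-demo}"

# An app (placeholder name) goes in its own subdirectory under master-apps:
mkdir -p "$SPLUNK_HOME/etc/master-apps/appBestEver/local"

# Standalone configuration files go under _cluster/local:
mkdir -p "$SPLUNK_HOME/etc/master-apps/_cluster/local"
cat > "$SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf" <<'EOF'
[myindex]
repFactor = auto
EOF
```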
3. Apply the bundle to the peers
To apply the configuration bundle to the peers, you can use Splunk Web or the CLI.
Use Splunk Web to apply the bundle
To apply the configuration bundle to the peers, go to the master node dashboard:
1. Click Settings in the upper right corner of Splunk Web on the master.
2. In the Distributed environment group, click Clustering.
3. Click the Edit button on the upper right corner of the dashboard and then select the Distribute Configuration Bundle option.
4. Click the Distribute Configuration Bundle button. A pop-up window warns you that the distribution might, under certain circumstances, initiate a restart of all the peer nodes. For information on which configuration changes cause a peer restart, see "Restart or reload?".
5. Click Push Changes to continue.
6. The screen offers information on the distribution progress. Once the distribution completes or aborts, the screen indicates the result. In the case of an aborted distribution, it indicates which peers could not receive the distribution. Each peer must successfully receive and apply the distribution. If any peer is unsuccessful, none of the peers will implement the bundle.
Once the process completes successfully, the peers will be using the new set of configurations, which will be located in their local $SPLUNK_HOME/etc/slave-apps.
Important: Leave the files in $SPLUNK_HOME/etc/slave-apps.
For more details on the internals of the distribution process, read the next section on applying the bundle through the CLI.
Use the CLI to apply the bundle
To apply the configuration bundle to the peers, run this CLI command on the master:
splunk apply cluster-bundle
It responds with this warning message:
Warning: Under some circumstances, this command will initiate a rolling restart of all peers. This depends on the contents of the configuration bundle. For details, refer to the documentation. Do you wish to continue? [y/n]:
For information on which configuration changes cause a rolling restart, see "Restart or reload?".
To proceed, respond to the message with y. You can avoid this message entirely by appending the --answer-yes flag to the command:
splunk apply cluster-bundle --answer-yes
The apply cluster-bundle command causes the master to distribute the new configuration bundle to the peers, which then individually validate the bundle. "Bundle validation" means that the peer validates the settings for all indexes.conf files in the bundle. After all peers successfully validate the bundle, the master coordinates a rolling restart of all the peer nodes, if necessary.
The download and validation process usually takes just a few seconds to complete. If any peer is unable to validate the bundle, it sends a message to the master, and the master displays the error on its Clustering page in Splunk Web. The command will not continue to the next phase - reloading or restarting the peers - unless all peers successfully validate the bundle.
Once validation is complete, the master tells the peers to reload or, if necessary, it initiates a rolling restart of all the peers. For details on how a rolling restart works, see "The rolling-restart command".
When the process is complete, the peers will be using the new set of configurations, which will be located in their local $SPLUNK_HOME/etc/slave-apps.
Important: Leave the files in $SPLUNK_HOME/etc/slave-apps.
Once an app has been distributed to the set of peers, you launch and manage it on each peer in the usual manner, with Splunk Web. See "Managing app configurations and properties" in the Admin Manual.
Note: The apply cluster-bundle command takes an optional flag, --skip-validation, for use in cases where a problem exists in the validation process. You should only use this flag under the direction of Splunk Support and after ascertaining that the bundle is valid. Do not use this flag to circumvent the validation process unless you know what you're doing.
Use the CLI to view the status of the bundle update process
To see how the cluster bundle update is proceeding, run this command from the master:
splunk show cluster-bundle-status
This command will tell you whether bundle validation succeeded or failed. It will also indicate the restart status of each peer.
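For scripted use, the apply-then-check sequence can be sketched as below. The splunk() function here is a stub standing in for $SPLUNK_HOME/bin/splunk so the control flow is runnable on its own; on a real master, delete the stub and call the actual binary, whose output comes from the cluster.

```shell
# Stub standing in for $SPLUNK_HOME/bin/splunk; replace with the real binary.
splunk() {
  case "$1" in
    apply) echo "bundle pushed (stubbed)" ;;
    show)  echo "bundle validation: success (stubbed)" ;;
  esac
}

# --answer-yes suppresses the rolling-restart confirmation prompt,
# which is useful when scripting the push.
splunk apply cluster-bundle --answer-yes

# Then check whether validation succeeded and how any restart is going:
status="$(splunk show cluster-bundle-status)"
echo "$status"
```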
Distribution of the bundle when a peer starts up
After you initially configure a Splunk instance as a peer node, you need to restart it manually in order for it to join the cluster, as described in "Enable the peer nodes". During this restart, the peer connects with the master, downloads the current configuration bundle, validates the bundle locally, and then restarts again. The peer will only join the cluster if bundle validation succeeds. This same process also occurs when an offline peer comes back online.
If validation fails, the user needs to fix the errors and run splunk apply cluster-bundle from the master.
Restart or reload after configuration bundle changes?
Some changes to files in the configuration bundle require that the peers restart. In other cases, the peers can just reload, avoiding any interruption to indexing or searching. The bundle validation process determines which action is needed and directs the master to initiate a rolling restart of the peers only if necessary.
Restart occurs:
- If the configuration bundle contains changes to any configuration files besides indexes.conf.
- If you make any of these indexes.conf changes:
  - Changing any of these attributes: rawChunkSizeBytes, rawCompressionLevel, minRawFileSyncSecs, syncMeta, maxConcurrentOptimizes, coldToFrozenDir, frozenTimePeriodInSecs
  - Changing any of these attributes for existing indexes: repFactor, homePath, coldPath, thawedPath, bloomHomePath, summaryHomePath, tstatsHomePath
  - Adding or removing a volume
  - Enabling or disabling an index with data
  - Removing an index with data
Reload occurs if you only make these indexes.conf changes:
- Adding new index stanzas
- Changing any attributes not listed as requiring restart
- Enabling or disabling an index with no data
- Removing an index with no data
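To illustrate the distinction with a hypothetical indexes.conf edit (both index names are placeholders): adding a brand-new stanza triggers only a reload, while changing the path of an existing index forces a rolling restart.

```ini
# Reload only: a brand-new index stanza
[newindex]
repFactor = auto

# Rolling restart: homePath changed on an existing index
[existingindex]
homePath = $SPLUNK_DB/existingindex/db_new
```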
Use deployment server to distribute the apps to the master
Although you cannot use deployment server to directly distribute apps to the peers, you can use it to distribute apps to the master node's configuration bundle location. Once the apps are in that location, the master can distribute them to the peer nodes, using the configuration bundle method described in this topic.
In addition to the deployment server, you can also use third party distributed configuration management software, such as Puppet or Chef, to distribute apps to the master.
To use the deployment server to distribute files to the configuration bundle on the master:
1. Configure the master as a client of the deployment server, as described in "Configure deployment clients" in Updating Splunk Enterprise Instances.
2. On the master, edit deploymentclient.conf and set the repositoryLocation attribute to the master-apps location:
[deployment-client]
serverRepositoryLocationPolicy = rejectAlways
repositoryLocation = $SPLUNK_HOME/etc/master-apps
3. On the deployment server, create and populate one or more deployment apps for download to the master's configuration bundle. Make sure that the apps follow the structural requirements for the configuration bundle, as outlined earlier in the current topic. See "Create deployment apps" in Updating Splunk Enterprise Instances for information on creating deployment apps.
4. Create one or more server classes that map the master to the deployment apps in the usual way.
5. Each server class must include the stateOnClient = noop setting:
[serverClass:<serverClassName>]
stateOnClient = noop
Note: Do not override this setting at the app stanza level.
6. Download the apps to the master node.
Once the master receives the new or updated deployment apps in the configuration bundle, you can distribute the bundle to the peers, using the method described in the current topic.
Important: Take steps to ensure that the master does not restart automatically after receiving the deployment apps. Specifically, when defining deployment app behavior, do not change the value of the restartSplunkd setting from its default of false in serverclass.conf. If you are using forwarder management to define your server classes, make sure that the Restart splunkd field on the Edit App screen is not checked.
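Putting the two requirements together, a server class for the master might look like this in serverclass.conf (the class name, whitelist entry, and app name are placeholders):

```ini
[serverClass:master_bundle]
stateOnClient = noop
restartSplunkd = false
whitelist.0 = <master-hostname>

[serverClass:master_bundle:app:appBestEver]
# Do not override stateOnClient or restartSplunkd at the app stanza level.
```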
For detailed information on the deployment server and how to perform the various operations necessary, read the Updating Splunk Enterprise Instances manual.
This documentation applies to the following versions of Splunk® Enterprise: 6.0, 6.0.1, 6.0.2, 6.0.3, 6.0.4, 6.0.5