Splunk® Enterprise

Managing Indexers and Clusters of Indexers


Update common peer configurations and apps

The peer update process described in this topic ensures that all peer nodes share a common set of key configuration files. You must manually invoke this process to distribute and update common files, including apps, to the peer nodes. The process also runs automatically when a peer joins the cluster.

For information on peer configuration files, see Manage common configurations across all peers. That topic details exactly which files must be identical across all peers. In brief, the configuration files that must be identical in most circumstances are indexes.conf, props.conf, and transforms.conf. Other configuration files can also be identical, depending on the needs of your system. Since apps usually include versions of those key files, you should also maintain a common set of apps across all peers.

The set of configuration files and apps common to all peers, which is managed from the master and distributed to the peers in a single operation, is called the configuration bundle. The process used to distribute the configuration bundle is known as the configuration bundle method.

To distribute new or edited configuration files or apps across all the peers, you add the files to the configuration bundle on the master and tell the master to distribute the files to the peers.

Structure of the configuration bundle

The configuration bundle consists of the set of files and apps common to all peer nodes.

On the master

On the master, the configuration bundle resides under the $SPLUNK_HOME/etc/master-apps directory. The set of files under that directory constitute the configuration bundle. They are always distributed as a group to all the peers. The directory has this structure:

$SPLUNK_HOME/etc/master-apps/
     _cluster/
          default/
          local/
     <app-name>/
     <app-name>/
     ...

Note the following:

  • The /_cluster directory is a special location for configuration files that need to be distributed across all peers:
    • The /_cluster/default subdirectory contains a default version of indexes.conf. Do not add any files to this directory and do not change any files in it. This peer-specific default indexes.conf has a higher precedence than the standard default indexes.conf, located under $SPLUNK_HOME/etc/system/default.
    • The /_cluster/local subdirectory is where you can put new or edited configuration files that you want to distribute to the peers.
    • For 5.0/5.0.1 upgrades: In Splunk versions 5.0 and 5.0.1, the /_cluster directory was named /cluster (no underscore). When you upgrade a 5.0/5.0.1 master node to 5.0.2 or later, its /cluster directory is automatically renamed to /_cluster. When you restart the master after the upgrade completes, it performs a rolling restart on its peer nodes and pushes the new bundle, with the renamed /_cluster directory, to the peers. The slave-apps directory on all the peer nodes (including any 5.0/5.0.1 peers) then contains the renamed directory.
  • The /<app-name> subdirectories are optional. They provide a way to distribute any app to the peer nodes. Create and populate them as needed. For example, to distribute "appBestEver" to the peer nodes, place a copy of that app in its own subdirectory: $SPLUNK_HOME/etc/master-apps/appBestEver.
  • To delete an app that you previously distributed to the peers, remove its directory from the configuration bundle. When you next push the bundle, the app will be deleted from each peer. See the example following this list.
  • The master only pushes the contents of subdirectories under master-apps. It will not push any standalone files directly under master-apps. For example, it will not push the standalone file /master-apps/file1. Therefore, be sure to place any standalone configuration files in the /_cluster/local subdirectory.
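
For example, to stop distributing the "appBestEver" app from the earlier example, you remove its directory on the master and then push the bundle, as described later in this topic. This is a minimal sketch that assumes a *nix master and the default $SPLUNK_HOME location:

# Remove the app from the configuration bundle on the master
rm -r $SPLUNK_HOME/etc/master-apps/appBestEver

# Push the updated bundle; the app is then deleted from each peer
splunk apply cluster-bundle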

You explicitly tell the master when you want it to distribute the latest configuration bundle to the peers. In addition, when a peer registers with the master (for example, when the peer joins the cluster), the master distributes the current configuration bundle to it.

Caution: When the master distributes the bundle to the peers, it distributes the entire bundle, overwriting the entire contents of any configuration bundle previously distributed to the peers.

The master-apps location is only for peer node files. The master does not use the files in that directory for its own configuration needs.

On the peers

On the peers, the distributed configuration bundle resides under $SPLUNK_HOME/etc/slave-apps. This directory is created soon after a peer is enabled, when the peer initially gets the latest bundle from the master. Except for the different name for the top-level directory, the structure and contents of the configuration bundle are the same as on the master:

$SPLUNK_HOME/etc/slave-apps/
     _cluster/
          default/
          local/
     <app-name>/
     <app-name>/
     ...

Important: Leave the downloaded files in this location and do not edit them. If you later distribute an updated version of a configuration file or app to the peers, it will overwrite any earlier version in $SPLUNK_HOME/etc/slave-apps. You want this to occur, because all peers in the cluster must be using the same versions of the files in that directory.

For the same reason, do not add any files or subdirectories directly to $SPLUNK_HOME/etc/slave-apps. The directory gets overwritten each time the master redistributes the configuration bundle.

When Splunk evaluates configuration files, the files in the $SPLUNK_HOME/etc/slave-apps/[_cluster|<app-name>]/local subdirectories have the highest precedence. For information on configuration file precedence, see Configuration file precedence in the Admin Manual.
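
For example, if the same attribute is set both in a peer's own system-level configuration and in a file distributed through the bundle, the copy in the slave-apps local subdirectory wins. A hypothetical illustration, using the maxTotalDataSizeMB attribute with placeholder values:

# In $SPLUNK_HOME/etc/system/local/indexes.conf on the peer:
[main]
maxTotalDataSizeMB = 500000

# In $SPLUNK_HOME/etc/slave-apps/_cluster/local/indexes.conf, distributed by the master:
[main]
maxTotalDataSizeMB = 750000

# The peer uses 750000, because the slave-apps local directory has the highest precedence.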

Settings that you should not distribute through the configuration bundle

The $SPLUNK_HOME/etc/slave-apps directory on the peers is read-only. This behavior is necessary and beneficial: because the directory gets overwritten in its entirety each time you distribute a new bundle, any changes made directly to settings in that directory would otherwise be lost. Also, the cluster relies on the settings in that directory being identical across all peers.

Therefore, if you use the configuration bundle method to distribute a setting that the peer needs to update automatically in some way, the peer makes the update by creating a new version of the app under $SPLUNK_HOME/etc/apps. Since you cannot have two apps with the same name, this generates "unexpected duplicate app" errors in splunkd.log.

A common cause of this behavior is distributing SSL passwords through the configuration bundle. Splunk Enterprise overwrites the password with an encrypted version upon restart. But if you distribute the setting through the configuration bundle, the peers cannot overwrite the unencrypted password in its bundle location under $SPLUNK_HOME/etc/slave-apps. Therefore, upon restart after the bundle push, each peer instead writes the encrypted version to $SPLUNK_HOME/etc/apps, in an app directory with the same name as the app's directory under $SPLUNK_HOME/etc/slave-apps.

For example, do not push the following setting in inputs.conf:

[SSL]
password = <your_password>

If the setting is in an app directory called "newapp" in the configuration bundle, upon restart the peer will create a "newapp" directory under $SPLUNK_HOME/etc/apps and put the setting there. This results in duplicate "newapp" apps.

Distribute the configuration bundle

To distribute new or changed files and apps across all peers, do the following:

1. Prepare the files and apps and test them.

2. Move the files and apps into the configuration bundle on the master node.

3. (Optional.) Validate the bundle.

4. Tell the master to apply the bundle to the peers.

The master pushes the entire bundle to the peers. This overwrites the contents of the peers' current bundle.

1. Prepare the files and apps for the configuration bundle

Make the necessary edits to the files that you want to distribute to the peers. Before distributing the files and any apps to the peers, test them on a standalone test indexer to confirm that they work correctly. Try to combine all updates into a single bundle, to reduce the impact on the peer nodes.

Read the topics Manage common configurations across all peers and Configure the peer indexes in an indexer cluster for information on how to configure the files.

Important: If the configuration bundle subdirectories contain any indexes.conf files that define new indexes, you must explicitly set each index's repFactor attribute to auto. This is necessary for indexes.conf files that reside in app subdirectories, as well as any indexes.conf file in the _cluster subdirectory. See The indexes.conf repFactor attribute for details.
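
For example, a new index defined in an indexes.conf file within the bundle might look like the following minimal sketch. The index name and paths are illustrative:

[cluster_test_index]
homePath   = $SPLUNK_DB/cluster_test_index/db
coldPath   = $SPLUNK_DB/cluster_test_index/colddb
thawedPath = $SPLUNK_DB/cluster_test_index/thawedb
repFactor  = auto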

2. Move the files to the master node

When you are ready to distribute the files and apps, copy them to $SPLUNK_HOME/etc/master-apps/ on the master:

  • Put apps directly under that directory. For example, $SPLUNK_HOME/etc/master-apps/<app-name>.
  • Put standalone files in the $SPLUNK_HOME/etc/master-apps/_cluster/local subdirectory.
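
For example, assuming you staged a tested app and an edited standalone props.conf under a hypothetical /tmp/staging directory on a *nix master, you might copy them into place like this:

# Copy an app into the configuration bundle
cp -r /tmp/staging/appBestEver $SPLUNK_HOME/etc/master-apps/

# Copy a standalone configuration file into the _cluster/local subdirectory
cp /tmp/staging/props.conf $SPLUNK_HOME/etc/master-apps/_cluster/local/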

3. Validate the bundle

This step is optional.

As part of the next step, Apply the bundle to the peers, the peers individually validate the bundle. Specifically, each peer validates the settings for all indexes.conf files in the bundle. If all peers successfully validate the bundle, the cluster completes the process of applying the bundle.

In the current step, you can optionally validate the bundle without applying it. Once you confirm that the bundle is valid across all peer nodes, you can then apply it as a separate step.

Validation can be useful, for example, if you want to ensure that the bundle will apply across all peer nodes without problems. The validation process provides information useful for debugging invalid bundles.

You can also check whether applying the bundle will require a restart of the peer nodes.

To validate the bundle only, run splunk validate cluster-bundle:

splunk validate cluster-bundle

This command returns a message confirming that bundle validation has started. In certain failure conditions, it also indicates the cause of failure. Finally, it suggests that you run the splunk show cluster-bundle-status command for the status of bundle validation:

splunk show cluster-bundle-status

This command indicates validation success. In the case of validation failure, it provides insight into the cause of failure.

To validate the bundle and check whether a restart is necessary, include the --check-restart parameter:

splunk validate cluster-bundle --check-restart

This version of the command first validates the bundle. If validation succeeds, it then checks whether a peer restart is necessary.

Caution: If you validate the bundle without applying it, the contents of the $SPLUNK_HOME/etc/master-apps directory on the master will differ from the contents of the $SPLUNK_HOME/etc/slave-apps directory on the peer nodes until you do apply the bundle. This has no effect on the operation of the cluster, but it is important to remain aware that the difference exists.

4. Apply the bundle to the peers

To apply the configuration bundle to the peers, you can use Splunk Web or the CLI.

Use Splunk Web to apply the bundle

To apply the configuration bundle to the peers, go to the master node dashboard:

1. Click Settings on the upper right side of Splunk Web.

2. In the Distributed Environment group, click Indexer clustering.

3. Click the Edit button on the upper right corner of the dashboard and then select the Distribute Configuration Bundle option.

A dashboard appears with information on the last successful push. It also contains a button, Distribute Configuration Bundle.

4. Click the Distribute Configuration Bundle button.

A pop-up window warns you that the distribution might, under certain circumstances, initiate a restart of all the peer nodes. For information on which configuration changes cause a peer restart, see Restart or reload after configuration bundle changes?.

5. Click Push Changes to continue.

The screen provides information on the distribution progress. Once the distribution completes or aborts, the screen indicates the result. In the case of an aborted distribution, it indicates which peers could not receive the distribution. Each peer must successfully receive and apply the distribution. If any peer is unsuccessful, none of the peers will apply the bundle.

Once the process completes successfully, the peers will be using the new set of configurations, now located in their local $SPLUNK_HOME/etc/slave-apps.

Important: Leave the files in $SPLUNK_HOME/etc/slave-apps.

For more details on the internals of the distribution process, read the next section on applying the bundle through the CLI.

Use the CLI to apply the bundle

1. To apply the configuration bundle to the peers, run this CLI command on the master:

splunk apply cluster-bundle

It responds with this warning message:

Warning: Under some circumstances, this command will initiate a rolling restart 
of all peers. This depends on the contents of the configuration bundle. For 
details, refer to the documentation. Do you wish to continue? [y/n]:

For information on which configuration changes cause a rolling restart, see Restart or reload after configuration bundle changes?.

2. To proceed, you need to respond to the message with y. You can avoid this message entirely by appending the flag --answer-yes to the command:

splunk apply cluster-bundle --answer-yes

The splunk apply cluster-bundle command causes the master to distribute the new configuration bundle to the peers, which then individually validate the bundle. During this process, each peer validates the settings for all indexes.conf files in the bundle. After all peers successfully validate the bundle, the master coordinates a rolling restart of all the peer nodes, if necessary.

The download and validation process usually takes just a few seconds to complete. If any peer is unable to validate the bundle, it sends a message to the master, and the master displays the error on its dashboard in Splunk Web. The process will not continue to the next phase - reloading or restarting the peers - unless all peers successfully validate the bundle.

If validation is not successful, you must fix any problems noted by the master and rerun splunk apply cluster-bundle.

Once validation is complete, the master tells the peers to reload or, if necessary, it initiates a rolling restart of all the peers. For details on how a rolling restart works, see Use rolling restart.

When the process is complete, the peers will be using the new set of configurations, which will be located in their local $SPLUNK_HOME/etc/slave-apps.

Important: Leave the files in $SPLUNK_HOME/etc/slave-apps.

Once an app has been distributed to the set of peers, you launch and manage it on each peer in the usual manner, with Splunk Web. See Managing app configurations and properties in the Admin Manual.

Note: The apply cluster-bundle command takes an optional flag, --skip-validation, for use in cases where a problem exists in the validation process. You should only use this flag under the direction of Splunk Support and after ascertaining that the bundle is valid. Do not use this flag to circumvent the validation process unless you know what you are doing.

You can also validate the bundle without applying it. This is useful if you are debugging some validation issues. See Validate the configuration bundle.

Use the CLI to view the status of the bundle update process

To see how the cluster bundle update is proceeding, run this command from the master:

splunk show cluster-bundle-status

This command tells you whether bundle validation succeeded or failed. It also indicates the restart status of each peer.

Restart or reload after configuration bundle changes?

Some changes to files in the configuration bundle require that the peers restart. In other cases, the peers can just reload, avoiding any interruption to indexing or searching. The bundle reload phase on the peers determines whether a restart is required and directs the master to initiate a rolling restart of the peers only if necessary.

Reload occurs when:

  • You make changes or additions to transforms.conf or props.conf.
  • You make indexes.conf changes of the types that do not require a restart. See Determine which indexes.conf changes require restart.

Restart occurs when:

  • The configuration bundle contains changes to any configuration files besides indexes.conf, props.conf, or transforms.conf.
  • You make any of the indexes.conf changes described in Determine which indexes.conf changes require restart.
  • You delete an existing app from the configuration bundle.

Roll back the configuration bundle

You can roll back the configuration bundle to the previous version. This action allows you to recover from a misconfigured bundle.

The rollback action swaps the most recently applied configuration bundle on the peers with the previously applied bundle. You cannot roll back beyond the previous bundle.

For example, say that the peers have an active configuration bundle "A" and you apply a configuration bundle "B", which then becomes the new active bundle. If you discover problems with B, you can roll back to bundle A, and the peers will then use A as their active bundle. If you roll back a second time, the peers will return to bundle B. If you roll back a third time, they will return to bundle A, and so on. The rollback action always toggles the two most recent bundles.

To roll back the configuration bundle, run this command from the master node:

splunk rollback cluster-bundle

As with splunk apply cluster-bundle, this command initiates a rolling restart of the peer nodes, when necessary.

You can use the splunk show cluster-bundle-status command to determine the current active bundle. You can use the cluster/master/info endpoint to get information about the current active and previous active bundles.
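
For example, you might query that endpoint from the command line with curl. This is a sketch; substitute your own master host, management port, and credentials:

curl -k -u admin:changeme https://master.example.com:8089/services/cluster/master/info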

If the master-apps folder gets corrupted, resulting in rollback failure, a message specifying the failure and the workaround appears on the master node dashboard, as well as in splunkd.log. To remediate, follow the instructions in the message. This includes removing the $SPLUNK_HOME/etc/master-apps.dirty marker file, which indicates failure, and manually copying over the active bundle, as specified in the message.

Caution: On Windows, the rollback operation will fail if there are open file handles to $SPLUNK_HOME/etc/master-apps and its contents.

Distribution of the bundle when a peer starts up

After you initially configure a Splunk instance as a peer node, you need to restart it manually in order for it to join the cluster, as described in Enable the peer nodes. During this restart, the peer connects with the master, downloads the current configuration bundle, validates the bundle locally, and then restarts again. The peer will only join the cluster if bundle validation succeeds. This same process also occurs when an offline peer comes back online.

If validation fails, you must fix the errors and run splunk apply cluster-bundle from the master.

Use deployment server to distribute the apps to the master

Although you cannot use deployment server to directly distribute apps to the peers, you can use it to distribute apps to the master node's configuration bundle location. Once the apps are in that location, the master can distribute them to the peer nodes, using the configuration bundle method described in this topic.

In addition to the deployment server, you can also use third-party configuration management software, such as Puppet or Chef, to distribute apps to the master.

To use the deployment server to distribute files to the configuration bundle on the master:

1. Configure the master as a client of the deployment server, as described in Configure deployment clients in Updating Splunk Enterprise Instances.

2. On the master, edit deploymentclient.conf and set the repositoryLocation attribute to the master-apps location:

[deployment-client]
serverRepositoryLocationPolicy = rejectAlways
repositoryLocation = $SPLUNK_HOME/etc/master-apps

3. On the deployment server, create and populate one or more deployment apps for download to the master's configuration bundle. Make sure that the apps follow the structural requirements for the configuration bundle, as outlined earlier in the current topic. See Create deployment apps in Updating Splunk Enterprise Instances for information on creating deployment apps.

4. Create one or more server classes that map the master to the deployment apps in the usual way.

5. Each server class must include the stateOnClient = noop setting:

[serverClass:<serverClassName>]
stateOnClient = noop

Note: Do not override this setting at the app stanza level.

6. Download the apps to the master node.

Once the master receives the new or updated deployment apps in the configuration bundle, you can distribute the bundle to the peers, using the method described in the current topic.

Important: Take steps to ensure that the master does not restart automatically after receiving the deployment apps. Specifically, when defining deployment app behavior, do not change the value of the restartSplunkd setting from its default of false in serverclass.conf. If you are using forwarder management to define your server classes, make sure that the field Restart splunkd on the Edit App screen is not checked.
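
For example, a server class definition in serverclass.conf on the deployment server might look like the following sketch. The server class name, whitelist entry, and app name are illustrative:

[serverClass:cluster_master]
whitelist.0 = master.example.com
stateOnClient = noop
restartSplunkd = false

[serverClass:cluster_master:app:appBestEver]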

For detailed information on the deployment server and how to perform the various operations necessary, read the Updating Splunk Enterprise Instances manual.


This documentation applies to the following versions of Splunk® Enterprise: 6.6.0, 6.6.1, 6.6.2

