Splunk® Enterprise

Distributed Search

Use the deployer to distribute apps and configuration updates

The deployer is a Splunk Enterprise instance that you use to distribute apps and certain other configuration updates to search head cluster members. The set of updates that the deployer distributes is called the configuration bundle.

The deployer distributes the configuration bundle in response to your command. The deployer also distributes any updates whenever a member joins or rejoins the cluster.

Caution: You must use the deployer, not the deployment server, to distribute apps to cluster members. Use of the deployer eliminates the possibility of conflict with the run-time updates that the cluster replicates automatically by means of the mechanism described in "Configuration updates that the cluster replicates."

What configurations does the deployer manage?

You use the deployer primarily to distribute non-runtime configuration changes.

You do not use the deployer to distribute runtime search-related configuration changes. Instead, the cluster automatically replicates such changes to all cluster members. For example, if a user creates a saved search on one member, the cluster replicates it automatically to all other members. See "Configuration updates that the cluster replicates." To distribute other updates, you need the deployer.

The types of updates that the deployer handles

These are the types of updates that require the deployer:

  • New or upgraded apps.
  • Configuration files that you edit directly.
  • All non-search-related updates, even those that can be configured through the CLI or Splunk Web, such as updates to indexes.conf or inputs.conf.

Note: You use the deployer to deploy configuration updates only. You cannot use it for initial configuration of the search head cluster or for version upgrades to the Splunk Enterprise instances that the members run on.

App upgrades and runtime changes

Because of how configuration file precedence works, changes that users make to apps at runtime persist in the app through subsequent upgrades.

Say, for example, that you deploy the 1.0 version of some app, and then a user modifies the app's dashboards. When you later deploy the 1.1 version of the app, the user modifications will persist in the 1.1 version of the app.

As explained in "Configuration updates that the cluster replicates," the cluster replicates any runtime changes to all members. Those runtime changes do not get subsequently uploaded to the deployer, but because of the way configuration layering works, those changes have precedence over the configurations in the unmodified apps distributed by the deployer. To understand this issue in detail, read the rest of this topic, as well as the topic "Configuration file precedence" in the Admin Manual.

When does the deployer distribute configurations to the members?

The deployer distributes the configuration bundle to the cluster members under these circumstances:

  • When you invoke the splunk apply shcluster-bundle command, the deployer pushes any new or changed configurations to the members. See "Deploy a configuration bundle."
  • When a member joins or rejoins the cluster, it checks the deployer for updates. A member also checks for updates whenever it restarts. If any updates are available, it pulls them from the deployer.

Configure the deployer

Note: The actions in this subsection are integrated into the procedure for deploying the search head cluster, described in the topic "Deploy a search head cluster." If you set up the deployer during initial deployment of the search head cluster, you can skip this section.

Choose an instance to be the deployer

Each search head cluster needs one deployer. The deployer must run on a Splunk Enterprise instance outside the search head cluster.

Depending on the specific components of your Splunk Enterprise environment, the deployer might be able to run on an existing Splunk Enterprise instance with other responsibilities, such as a deployment server or the master node of an indexer cluster. Otherwise, you can run it on a dedicated instance. See "Deployer requirements".

Deploy to multiple clusters

The deployer sends the same configuration bundle to all cluster members that it services. Therefore, if you have multiple search head clusters, you can use the same deployer for all the clusters only if the clusters employ exactly the same configurations, apps, and so on.

If you anticipate that your clusters might need different configurations over time, set up a separate deployer for each cluster.

Set a security key on the deployer

If the search head cluster members are using a security key, you must also set the same key on the deployer. The deployer uses this key to authenticate communication with the cluster members. To set the key, specify the pass4SymmKey attribute in either the [general] or the [shclustering] stanza of the deployer's server.conf file. For example:

[shclustering]
pass4SymmKey = yoursecuritykey

The key must be the same for all cluster members and the deployer. You can set the key on the cluster members during initialization.

You must restart the instance for the key to take effect.
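
For example, you can restart the deployer from the command line. This assumes a default installation; adjust $SPLUNK_HOME for your environment:

# Restart the deployer so that the new pass4SymmKey value takes effect.
$SPLUNK_HOME/bin/splunk restart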

Note: If there is a mismatch between the value of pass4SymmKey on the cluster members and on the deployer (for example, you set it on the members but neglect to set it on the deployer), you will get an error message when the deployer attempts to push the configuration bundle. The message will resemble this:

Error while deploying apps to first member: ConfDeploymentException: Error while fetching apps baseline on target=https://testitls1l:8089: Non-200/201 status_code=401; {"messages":[{"type":"WARN","text":"call not properly authenticated"}]}

Point the cluster members to the deployer

Each cluster member needs to know the location of the deployer. Splunk recommends that you specify the deployer location during member initialization. See "Deploy a search head cluster."

If you do not set the deployer location at initialization time, you must add the location to each member's server.conf file before using the deployer:

[shclustering]
conf_deploy_fetch_url = <URL>:<management_port> 

The conf_deploy_fetch_url attribute specifies the URL and management port for the deployer instance.

If you later add a new member to the cluster, you must set conf_deploy_fetch_url on the member before adding it to the cluster, so it can immediately contact the deployer for the current configuration bundle, if any.
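
For example, a member's server.conf entry for a hypothetical deployer running on the host deployer1.example.com with the default management port might look like this (the host name is illustrative only):

[shclustering]
conf_deploy_fetch_url = https://deployer1.example.com:8089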

What the configuration bundle contains

The configuration bundle is the set of files that the deployer distributes to the cluster members. It can contain apps or other groups of configuration files. You determine what it contains. You place the files for the configuration bundle in a designated location on the deployer.

The deployer pushes the configuration bundle to the members as a set of tarballs, one for each app.

Caution: If you attempt to push a very large tarball (>200 MB), the operation might fail due to various timeouts. Delete some of the contents from the tarball's app, if possible, and try again.

Location on the deployer

On the deployer, the configuration bundle resides under the $SPLUNK_HOME/etc/shcluster directory. The set of files under that directory constitutes the configuration bundle.

The directory has this structure:

$SPLUNK_HOME/etc/shcluster/
     apps/
          <app-name>/
          <app-name>/
          ...
     users/

Note the following:

  • Put each app in its own subdirectory under /apps. You must untar the app.
  • To push files to users, put the files under the /users subdirectories where you want them to reside on the members.
  • The deployer will push the content under /shcluster/users only if the content includes at least one configuration file. For example, if you place a private lookup table or view under some user subdirectory, the deployer will push it only if there is also at least one configuration file somewhere under /shcluster/users.
  • All files placed under both default and local subdirectories get merged into default subdirectories on the members, post-deployment. This holds true for both app and user subdirectories. See "Location on the cluster members."
  • The configuration bundle must contain all previously pushed apps, as well as any new ones. If you delete an app from the bundle, the next time you push the bundle, the app will get deleted from the cluster members.
  • To update an app on the cluster members, put the updated version in the configuration bundle.
  • To delete an app that you previously pushed, remove it from the configuration bundle. When you next push the bundle, each member will delete it from its own file system.
  • The deployer pushes only the contents of subdirectories under shcluster. It does not push any standalone files located directly under shcluster. For example, it will not push the file /shcluster/file1. To deploy standalone files, create a new app directory under /apps and put the files in that app's local subdirectory. For example, put file1 under $SPLUNK_HOME/etc/shcluster/apps/newapp/local, as shown in the example layout after this list.
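
To illustrate these rules, here is a hypothetical configuration bundle layout on the deployer. The app names (org_all_indexes, newapp) and the user content shown are placeholders only:

$SPLUNK_HOME/etc/shcluster/
     apps/
          org_all_indexes/
               default/
                    indexes.conf
          newapp/
               local/
                    file1
     users/
          admin/
               search/
                    local/
                         savedsearches.conf

When pushed, each app arrives on the members as its own tarball, and the settings under local are merged into default during staging, as described later in this topic.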

When the deployer pushes the bundle, it pushes the full contents of all apps that have changed since the last push. Even if the only change to an app is a single file, it pushes the entire app. If an app has not changed, the deployer does not push it again.

Caution: If an app in the configuration bundle has the same name as a default app on the cluster members, it will overwrite that app. For example, if you create an app called "search" in the configuration bundle, it will overwrite the default search app that ships with Splunk Enterprise. It is highly unlikely that you want this to happen.

Note: The shcluster location is only for files that you want to distribute to cluster members. The deployer does not use the files in that directory for its own configuration needs.

Location on the cluster members

On the cluster members, the deployed apps and files reside under $SPLUNK_HOME/etc/apps and $SPLUNK_HOME/etc/users.

Important: The deployer never deploys files to the members' local app directories, $SPLUNK_HOME/etc/apps/<app_name>/local. Instead, it deploys both local and default settings in the configuration bundle to the members' default app directories, $SPLUNK_HOME/etc/apps/<app_name>/default. This ensures that deployed settings never overwrite local or replicated runtime settings on the members. Otherwise, for example, app upgrades would wipe out runtime changes.

Similarly, the deployer deploys user files to members' default user directories, not to their local user directories. For example, if you place a user file such as $SPLUNK_HOME/etc/shcluster/users/admin/search/local/savedsearches.conf on the deployer and then deploy it to the members, it resides in $SPLUNK_HOME/etc/users/admin/search/default/savedsearches.conf on each member.

During the staging process that occurs prior to pushing the configuration bundle, the deployer copies the configuration bundle to a staging area, where it merges all settings from files in /shcluster/apps/<appname>/local into corresponding files in /shcluster/apps/<appname>/default. Settings from the local directory take precedence over any corresponding default settings. For example, if you have a /newapp/local/inputs.conf file, the deployer takes the settings from that file and merges them with any settings in /newapp/default/inputs.conf. If a particular attribute is defined in both places, the merged file retains the definition from the local directory. The deployer then pushes only the merged default file.
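
As a hypothetical illustration of this merge (the app name and settings are placeholders), suppose the bundle on the deployer contains these two files:

# $SPLUNK_HOME/etc/shcluster/apps/newapp/default/inputs.conf
[monitor:///var/log/messages]
disabled = 1
sourcetype = syslog

# $SPLUNK_HOME/etc/shcluster/apps/newapp/local/inputs.conf
[monitor:///var/log/messages]
disabled = 0

After staging, the members receive a single newapp/default/inputs.conf in which disabled = 0 (the local value takes precedence) and sourcetype = syslog (retained from default).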

What exactly does the deployer send to the members?

The deployer pushes the configuration bundle to the members as a set of tarballs, one for each app, plus one for the entire $SPLUNK_HOME/etc/shcluster/users directory.

On the initial push to a set of new members, the deployer distributes the entire configuration bundle to each member. On subsequent pushes, it distributes only new apps and any apps that have changed since the last push. If even a single file has changed in an app, the deployer redistributes the entire app. It does not redistribute unchanged apps.

For the purposes of determining what to push, the deployer treats the $SPLUNK_HOME/etc/shcluster/users directory like a single app. So if you change a single file within a user directory on the deployer, the deployer redeploys the entire users directory. This behavior is acceptable because the users directory is typically modified and redeployed only during upgrade or migration, unlike the apps directory, which might see regular updates during the lifetime of the cluster.

Deploy a configuration bundle

To deploy a configuration bundle, you push the bundle from the deployer to the cluster members.

Push the configuration bundle

To push the configuration bundle to the cluster members:

1. Put the apps and other configuration changes in subdirectories under shcluster/ on the deployer.

2. Untar any apps. Each app must reside in the bundle as an expanded directory, not a tarball.

3. Run the splunk apply shcluster-bundle command on the deployer:

splunk apply shcluster-bundle -target <URI>:<management_port> -auth <username>:<password>

Note the following:

  • The -target parameter specifies the URI and management port for any member of the cluster, for example, https://10.0.1.14:8089. You specify only one cluster member but the deployer pushes to all members. This parameter is required.
  • The -auth parameter specifies credentials for the deployer instance.
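
For example, a typical invocation might look like this (the member URI and credentials are placeholders):

splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:yourpassword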

In response to splunk apply shcluster-bundle, the deployer displays this message:

Warning: Depending on the configuration changes being pushed, this command
might initiate a rolling-restart of the cluster members. Please refer to the
documentation for the details.  Do you wish to continue? [y/n]:

For information on which configuration changes trigger restart, see $SPLUNK_HOME/etc/system/default/app.conf. It lists the configuration files that do not trigger restart when changed. All other configuration changes trigger restart.

4. To proceed, respond to the message with y.

Note: You can eliminate the message by appending the flag --answer-yes to the splunk apply shcluster-bundle command:

splunk apply shcluster-bundle --answer-yes -target <URI>:<management_port> -auth <username>:<password>

This is useful if you are including the command in a script or otherwise automating the process.

Warning: You must run the splunk apply shcluster-bundle command only on the deployer. If you mistakenly run it on a non-deployer instance, such as a cluster member, it will cause your apps to be deleted.

How the cluster applies the configuration bundle

The deployer and the cluster members execute the command as follows:

1. The deployer stages the configuration bundle in a separate location on its file system ($SPLUNK_HOME/var/run/splunk/deploy) and then pushes it to each cluster member. The configuration bundle typically consists of several tarballs, one for each app.

2. Each cluster member then applies the changes contained in the bundle locally. If a rolling restart is determined to be necessary, approximately 10% of the members restart at a time, until all members have restarted.

During a rolling restart, all members, including the current captain, restart. Restart of the captain triggers the election process, which can result in a new captain. After the final member restarts, the cluster requires approximately 60 seconds to stabilize. During this interval, error messages might appear. You can ignore these messages. They should stop after approximately 60 seconds. For more information on the rolling restart process, see "Restart the search head cluster."

Control the restart process

You should ordinarily let the cluster automatically trigger any rolling restart necessary. However, if you need to maintain control over the restart process, you can run a version of splunk apply shcluster-bundle that stops short of the restart. If you do so, you must later initiate the restart yourself. The configuration changes will not take effect until the members restart.

To run splunk apply shcluster-bundle without triggering a restart, use this version of the command:

splunk apply shcluster-bundle -action stage && splunk apply shcluster-bundle -action send

The members will receive the bundle, but they will not restart. Splunk Web will display the message "Splunk must be restarted for changes to take effect."

To initiate a rolling restart later, invoke the splunk rolling-restart command from the captain:

splunk rolling-restart shcluster-members
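
After the rolling restart completes, you can verify that all members are back up by checking the cluster status from any member, for example (the credentials are placeholders):

splunk show shcluster-status -auth admin:yourpassword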

Allow a user without admin privileges to push the configuration bundle

By default, only admin users (that is, those assigned a role containing the admin_all_objects capability) can push the configuration bundle to the cluster members. Depending on how you manage your deployment, you might want to allow users without full admin privileges to push apps or other configurations to the cluster members. You can do so by overriding the controlling stanza in the default restmap.conf file.

The default restmap.conf file includes a stanza that controls the bundle push process:

[apps-deploy:apps-deploy]
match=/apps/deploy
capability.post=admin_all_objects
authKeyStanza=shclustering

You can specify a different capability in this stanza, either an existing capability or one that you define specifically for the purpose. If you assign that capability to a new role, users given that role can then push the configuration bundle. You can optionally specify both the existing admin_all_objects capability and the new capability, so that existing admin users retain the ability to push the bundle.

To create a new special-purpose capability and then assign that capability to the bundle push process:

  1. On the deployer, create a new authorize.conf file under $SPLUNK_HOME/etc/system/local, or edit the file if it already exists at that location. Add the new capability to that file. For example:
    [capability::conf_bundle_push]
    
  2. In the same authorize.conf file, create a role specific to that capability. For example:
    [role_deployer_push]
    conf_bundle_push=enabled
    
  3. On the deployer, create a new restmap.conf file under $SPLUNK_HOME/etc/system/local, or edit the file if it already exists at that location. Change the value of the capability.post setting to include both the conf_bundle_push capability and the admin_all_objects capability. For example:
    [apps-deploy:apps-deploy]
    match=/apps/deploy
    capability.post=conf_bundle_push OR admin_all_objects
    authKeyStanza=shclustering
    

You can now assign the role_deployer_push role to any non-admin users that need to push the bundle.

You can also set capability.post to an existing capability, instead of creating a new one. In that case, create a role that includes the existing capability and assign the appropriate users to that role.
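
For example, one way to grant the new role is from the CLI of the instance where the user account is defined; for the bundle push, that is the deployer, because the -auth credentials for splunk apply shcluster-bundle are checked there. This is a sketch only; the user name and passwords are placeholders, and you can also manage users and roles through Splunk Web:

# The [role_deployer_push] stanza in authorize.conf defines the role named deployer_push.
splunk add user deploy_admin -password <password> -role deployer_push -auth admin:yourpassword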

For more information on capabilities, see the chapter Users and role-based access control in Securing Splunk Enterprise.

Consequence and remediation of deployer failure

The deployer distributes the configuration bundle to the cluster members under these circumstances:

  • When you invoke the splunk apply shcluster-bundle command, the deployer pushes the configurations to the members.
  • When a member joins or rejoins the cluster, it checks the deployer for updates. A member also checks for updates whenever it restarts. If any updates are available, it pulls them from the deployer.

This means that if the deployer is down:

  • You cannot push new configurations to the members.
  • A member that joins or rejoins the cluster, or restarts, cannot pull the latest configuration bundle.

The implications of the deployer being down depend, therefore, on the state of the cluster members. These are the main cases to consider:

  • The deployer is down but the set of cluster members remains stable.
  • The deployer is down and a member attempts to join or rejoin the cluster.

The deployer is down but the set of cluster members remains stable

If no member joins or rejoins the cluster while the deployer is down, there are no important consequences to the functioning of the cluster. All member configurations remain in sync and the cluster continues to operate normally. The only consequence is the obvious one, that you cannot push new configurations to the members during this time.

The deployer is down and a member attempts to join or rejoin the cluster

In the case of a member attempting to join or rejoin the cluster while the deployer is down, there is the possibility that the configuration on that member will be out-of-sync with the configuration on the other cluster members:

  • A new member will not be able to pull the current configuration bundle.
  • A member that left the cluster before the deployer failed and rejoined the cluster after the deployer failed will not be able to pull any updates made to the bundle during the time that the member was down and the deployer was still running.

In these circumstances, the joining/rejoining member will have a different set of configurations from the other cluster members. Depending on the nature of the bundle changes, this can cause the joining member to behave differently from the other members. It can even lead to failure of the entire cluster. Therefore, you must make sure that this circumstance does not develop.

How to remediate deployer failure

Remediation is two-fold:

1. Prevent any member from joining or rejoining the cluster during deployer failure, unless you can be certain that the set of configurations on the joining member is identical to that on the other members (for example, if the rejoining member went down subsequent to the deployer failure).

2. Bring up a new deployer:

a. Create a new deployer instance.

b. Restore the contents of $SPLUNK_HOME/etc/shcluster to the new instance from backup.

c. If necessary, update the conf_deploy_fetch_url values on all search head cluster members.

d. Push the restored bundle contents to all members by running the splunk apply shcluster-bundle command.
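
As a hypothetical sketch of steps c and d (the host names and credentials are placeholders):

# Step c: on each member, update server.conf to point at the new deployer, then restart the member.
[shclustering]
conf_deploy_fetch_url = https://newdeployer.example.com:8089

# Step d: on the new deployer, push the restored bundle, targeting any one member.
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:yourpassword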


