Use the deployer to distribute apps and configuration updates
The deployer is a Splunk Enterprise instance that you use to distribute apps and certain other configuration updates to search head cluster members. The set of updates that the deployer distributes is called the configuration bundle.
The deployer distributes the configuration bundle in response to your command, according to the deployer push mode that you select. The deployer also distributes the bundle when a member joins or rejoins the cluster.
You must use the deployer, not the deployment server, to distribute apps to cluster members. Use of the deployer eliminates the possibility of conflict with the run-time updates that the cluster replicates automatically.
For details of your cluster's app deployment process, view the Search Head Clustering: App Deployment dashboard in the monitoring console. See Use the monitoring console to view search head cluster status.
Which configurations does the deployer manage?
The deployer has these main roles:
- It handles migration of app and user configurations into the search head cluster from non-cluster instances and search head pools.
- It deploys baseline app configurations to search head cluster members.
- It provides the means to distribute non-replicated, non-runtime configuration updates to all search head cluster members.
You do not use the deployer to distribute search-related runtime configuration changes from one cluster member to the other members. Instead, the cluster automatically replicates such changes to all cluster members. For example, if a user creates a saved search on one member, the cluster automatically replicates the search to all other members. See Configuration updates that the cluster replicates. To distribute all other updates, you need the deployer.
Configurations move in one direction only: from the deployer to the members. The members never upload configurations to the deployer. It is also unlikely that you will ever need to force such behavior by manually copying files from the cluster members to the deployer, because the members continually replicate all runtime configurations among themselves.
Types of updates that the deployer handles
These are the specific types of updates that require the deployer:
- New or upgraded apps.
- Configuration files that you edit directly.
- All non-search-related updates, even those that can be configured through the CLI or Splunk Web, such as updates to indexes.conf or inputs.conf.
- Settings that need to be migrated from a search head pool or a standalone search head. These can be app or user settings.
You use the deployer to deploy configuration updates only. You cannot use it for initial configuration of the search head cluster or for version upgrades to the Splunk Enterprise instances that the members run on.
Types of updates that the deployer does not handle
You do not use the deployer to distribute certain runtime changes from one cluster member to the other members. These changes are handled automatically by configuration replication. See How configuration changes propagate across the search head cluster.
Because the deployer manages only a subset of configurations, note the following:
- The deployer does not represent a "single source of truth" for all configurations in the cluster.
- You cannot use the deployer, by itself, to restore the latest state to cluster members.
App upgrades and runtime changes
Because of how configuration file precedence works, changes that users make to apps at runtime get maintained in the apps through subsequent upgrades.
Say, for example, that you deploy the 1.0 version of some app, and then a user modifies the app's dashboards. When you later deploy the 1.1 version of the app, the user modifications will persist in the 1.1 version of the app.
As explained in Configuration updates that the cluster replicates, the cluster automatically replicates most runtime changes to all members. Those runtime changes do not get subsequently uploaded to the deployer, but because of the way configuration layering works, those changes have precedence over the configurations in the unmodified apps distributed by the deployer. To understand this issue in detail, read the rest of this topic, as well as the topic Configuration file precedence in the Admin Manual.
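The layering behavior described above can be sketched in a few lines. This is a simplified, hypothetical model (dicts standing in for .conf layers), not Splunk's actual precedence engine, which involves more layers and contexts; it only illustrates why a runtime change in local survives an upgrade that replaces default:

```python
# A minimal sketch of configuration-file layering, assuming a simplified model
# in which each layer is a dict of {stanza: {attribute: value}}.

def layer_configs(default_layer, local_layer):
    """Merge two config layers; local settings take precedence over default."""
    merged = {stanza: dict(attrs) for stanza, attrs in default_layer.items()}
    for stanza, attrs in local_layer.items():
        merged.setdefault(stanza, {}).update(attrs)
    return merged

# The 1.1 upgrade replaces the app's default layer...
default_v1_1 = {"dashboard": {"refresh": "30s", "theme": "dark"}}
# ...but the user's runtime edit, stored in local, still layers on top.
runtime_local = {"dashboard": {"refresh": "10s"}}

effective = layer_configs(default_v1_1, runtime_local)
print(effective)  # {'dashboard': {'refresh': '10s', 'theme': 'dark'}}
```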
Custom apps and deleted files
The mechanism for deploying an upgraded version of an app does not recognize any deleted files or directories except for those residing under the default and local subdirectories. Therefore, if your custom app contains an additional directory at the level of default and local, that directory and all its files will persist from upgrade to upgrade, even if some of the files, or the directory itself, are no longer present in an upgraded version of the app.
To delete such files or directories, you must delete them manually, directly on the cluster members.
Once you delete the files or directories from the cluster members, they will not reappear the next time you deploy an upgrade of the app, assuming that they are not present in the upgraded app.
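To find candidates for such manual cleanup, you can compare a member's copy of the app against the upgraded version, skipping the default and local subdirectories that the upgrade mechanism already reconciles. The following sketch uses illustrative paths, not any real app:

```python
# A hedged sketch of finding leftover files after an app upgrade. The deployer
# only reconciles deletions under default/ and local/, so files elsewhere in
# the app can linger on members.
from pathlib import PurePosixPath

def stale_paths(member_files, upgraded_files):
    """Return member paths outside default/ and local/ that the upgraded
    version of the app no longer ships (candidates for manual deletion)."""
    managed = {"default", "local"}
    stale = []
    for path in member_files:
        top = PurePosixPath(path).parts[0]
        if top not in managed and path not in upgraded_files:
            stale.append(path)
    return sorted(stale)

member = {"default/app.conf", "local/savedsearches.conf",
          "bin/old_script.py", "bin/helper.py"}
upgraded = {"default/app.conf", "bin/helper.py"}
print(stale_paths(member, upgraded))  # ['bin/old_script.py']
```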
When does the deployer distribute configurations to the members?
The deployer distributes configurations to the cluster members under these circumstances:
- When you invoke the splunk apply shcluster-bundle command, the deployer pushes any new or changed configurations to the members. See Deploy a configuration bundle.
- When a member joins or rejoins the cluster, it checks the deployer for configuration updates. A member also checks for updates whenever it restarts. If any updates are available, it pulls them from the deployer.
Set up the deployer
The actions in this subsection are integrated into the procedure for deploying the search head cluster, described in the topic Deploy a search head cluster. If you already set up the deployer during initial deployment of the search head cluster, you can skip this section.
Choose an instance to be the deployer
Each search head cluster needs one deployer. The deployer must run on a Splunk Enterprise instance outside the search head cluster.
Depending on the specific components of your Splunk Enterprise environment, the deployer might be able to run on an existing Splunk Enterprise instance with other responsibilities, such as a deployment server or the manager node of an indexer cluster. Otherwise, you can run it on a dedicated instance. See Deployer requirements.
Deploy to multiple clusters
The deployer sends the same configuration bundle to all cluster members that it services. Therefore, if you have multiple search head clusters, you can use the same deployer for all the clusters only if the clusters employ exactly the same configurations, apps, and so on.
If you anticipate that your clusters might need different configurations over time, set up a separate deployer for each cluster.
Set a secret key on the deployer
You must configure the secret key on the deployer and all search head cluster members. The deployer uses this key to authenticate communication with the cluster members. To set the key, specify the pass4SymmKey attribute in either the [general] or the [shclustering] stanza of the deployer's server.conf file. For example:

```
[shclustering]
pass4SymmKey = yoursecretkey
```
The key must be the same for all cluster members and the deployer. You can set the key on the cluster members during initialization.
You must restart the deployer instance for the key to take effect.
If there is a mismatch between the value of pass4SymmKey on the cluster members and on the deployer (for example, you set it on the members but neglect to set it on the deployer), you will get an error message when the deployer attempts to push the configuration bundle. The message will resemble this:

```
Error while deploying apps to first member: ConfDeploymentException: Error while fetching apps baseline on target=https://testitls1l:8089: Non-200/201 status_code=401; {"messages":[{"type":"WARN","text":"call not properly authenticated"}]}
```
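To catch a mismatch before it surfaces as a push failure, you can compare the key across the relevant server.conf files. This is an illustrative sketch using Python's configparser, not a Splunk tool; note that Splunk encrypts pass4SymmKey on restart, so a plain-text comparison like this only works on the values as you originally typed them:

```python
# Minimal sketch: read pass4SymmKey from server.conf text and compare values.
import configparser

def read_symmkey(conf_text):
    """Return pass4SymmKey from [shclustering] or [general], else None."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    for stanza in ("shclustering", "general"):
        if parser.has_option(stanza, "pass4SymmKey"):
            return parser.get(stanza, "pass4SymmKey")
    return None

deployer_conf = "[shclustering]\npass4SymmKey = yoursecretkey\n"
member_conf = "[general]\npass4SymmKey = yoursecretkey\n"
keys = {read_symmkey(deployer_conf), read_symmkey(member_conf)}
print("match" if len(keys) == 1 else "mismatch")  # match
```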
Set the search head cluster label on the deployer
The search head cluster label is useful for identifying the cluster in the monitoring console. This parameter is optional, but if you configure it on one member, you must configure it with the same value on all members, as well as on the deployer.
To set the label, specify the shcluster_label attribute in the [shclustering] stanza of the deployer's server.conf file. For example:

```
[shclustering]
shcluster_label = shcluster1
```
See Set cluster labels in Monitoring Splunk Enterprise.
Point the cluster members to the deployer
Each cluster member needs to know the location of the deployer. Splunk recommends that you specify the deployer location during member initialization. See Deploy a search head cluster.
If you do not set the deployer location at initialization time, you must add the location to each member's server.conf file before using the deployer:

```
[shclustering]
conf_deploy_fetch_url = <URL>:<management_port>
```

The conf_deploy_fetch_url attribute specifies the URL and management port for the deployer instance.
If you later add a new member to the cluster, you must set conf_deploy_fetch_url on the member before adding it to the cluster, so it can immediately contact the deployer for the current configuration bundle, if any.
Choose a deployer push mode
The deployer push mode determines how the deployer distributes the configuration bundle to search head cluster members. Before you push the configuration bundle, choose the push mode that best fits your specific apps and use cases. The default push mode is merge_to_default.
You can set the push mode on the global level or on the app level. See Set the deployer push mode.
The push mode applies to app directories only, not user directories.
The following tables describe how the deployer handles app configurations for each push mode. For more information on the push modes, see the entry for deployer_push_mode in the app.conf spec file.
For details on the way the deployer bundles apps in each mode, see What exactly does the deployer send to the cluster?.
Mode: full
Do not use full mode for built-in apps such as the Search app. Use local_only mode instead.
| Mode: full | Description |
|---|---|
| On the deployer | Bundles all of the app's contents located in the app's /default, /local, and other directories, and pushes the bundle to the cluster. |
| On the members | Copies the non-local and non-user configurations to each member's app folder and overwrites the existing contents. Merges local and user configurations with the corresponding folders on the member, such that the existing configuration on the member takes precedence. |
| Use cases | Use this mode to push app configurations to both /local and /default app directories on the members. For example, if you have a saved search that exists only in /local on the members, pushing the /local and /default app configurations to their respective directories on the members maintains the saved search configuration, and lets you subsequently delete the saved search on the members using Splunk Web. |
| | Use this mode to migrate apps from a single search head to a new search head cluster. This retains the exact /local and /default directory configurations as they appear on the original search head. |
| | Use this mode if you have a configuration on the deployer in the app's /local directory, and you want to push it to the members and then delete it from the deployer. |

If you have unencrypted secrets in the app's /default directory on your deployer, full mode causes the deployer to push those along with the rest of the directory to the members. Other push modes do not cause this behavior.
Mode: local_only
If an app is new, push the /default and /local folders the first time you push the bundle, using the full or merge_to_default mode. On subsequent pushes, you can choose the local_only mode.
| Mode: local_only | Description |
|---|---|
| On the deployer | Bundles the app's /local configuration and its metadata and pushes it to the cluster. |
| On the members | Merges the app's /local configuration from the deployer with the app's /local configuration on the member, such that the member's existing configuration takes precedence. |
| Use cases | Use this mode to modify only those apps that already exist on the members. |
| | Use this mode to modify the /local configuration for a built-in app, such as the Search app. |
When you push a built-in app, the deployer automatically applies the local_only mode. You can override the local_only mode for built-in apps by explicitly setting a different push mode in the app's local/app.conf file.
Mode: default_only
| Mode: default_only | Description |
|---|---|
| On the deployer | Bundles and pushes the app's /default and other non-/local directories to the cluster. The /local directory is not included in the bundle. |
| On the members | Overwrites the app's /default and other non-/local directories on each member. The /local subdirectory is unaffected. |
| Use cases | Use this mode if you want to explicitly abandon changes made in an app's /local directory. For example, if an app on the deployer has pre-existing configurations in the /local directory, and you delete those configurations on the members, using default_only mode prevents those configurations from re-appearing on the next deployer push. |
Mode: merge_to_default
merge_to_default is the default push mode for all apps, except built-in apps, such as the Search app, which default to the local_only mode.
| Mode: merge_to_default | Description |
|---|---|
| On the deployer | Merges all settings from files in the app's /local directory into corresponding files in the app's /default directory, and pushes the merged default files to the app's /default directory on each member. During the merging process, settings from the /local files take precedence over corresponding settings in the /default files. |
| On the members | Overwrites the existing configuration in the members' /default app directories. No files are deployed to the members' /local app directories. This ensures that deployed settings never overwrite local or replicated runtime settings on the members. Otherwise, for example, app upgrades would wipe out runtime changes. |
| Use cases | Use this mode if you have a configuration on the deployer in the app's /local directory, and you want to push it to the members and then delete it from the deployer. |
Caution: Before you choose this mode, read about certain limitations in the management of app-level knowledge object settings deployed with this push mode. See Effect of merge_to_default push mode on management of app-level knowledge objects.
Set the deployer push mode
Before you push the configuration bundle to cluster members, make sure the push mode is properly set on the deployer.
You can set the push mode globally for all apps and locally for specific apps. App-specific deployer push mode settings take precedence over the global deployer push mode policy, so that you can have one global policy but set exceptions for specific apps.
To set a global deployer push mode, and optionally set a local deployer push mode for one app only:
- Set the global deployer push mode in the [shclustering] stanza in $SPLUNK_HOME/etc/system/local/app.conf. For example:

```
[shclustering]
deployer_push_mode = full
```

For global deployer push mode only, you must restart the deployer for the change to take effect.
- (Optional) Set the deployer push mode for one app only in the [shclustering] stanza in $SPLUNK_HOME/etc/shcluster/apps/<app>/local/app.conf for that specific app. You might need to add the [shclustering] stanza to the app.conf file if it is not already present. For example:

```
[shclustering]
deployer_push_mode = local_only
```
- Push the bundle to the search head cluster.
If the deployer_push_mode is not explicitly set in app.conf for a given app, then that app follows the global deployer_push_mode setting.
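The precedence rule above can be sketched as a simple lookup. The function and dict names here are illustrative stand-ins for the [shclustering] stanzas in the system-level and app-level app.conf files:

```python
# A small sketch of how a per-app push mode setting overrides the global one.

GLOBAL_DEFAULT = "merge_to_default"  # Splunk's default for non-built-in apps

def effective_push_mode(global_mode, app_modes, app):
    """App-specific deployer_push_mode wins; otherwise the global one."""
    return app_modes.get(app, global_mode or GLOBAL_DEFAULT)

# e.g. from etc/shcluster/apps/myapp/local/app.conf on the deployer
app_modes = {"myapp": "local_only"}
print(effective_push_mode("full", app_modes, "myapp"))     # local_only
print(effective_push_mode("full", app_modes, "otherapp"))  # full
```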
What the configuration bundle contains
The configuration bundle is the set of files that the deployer distributes to the cluster. It consists of two types of configurations:
- App configurations
- User configurations
You determine the contents of the configuration bundle by copying the apps or user configurations to a special location on the deployer.
You can push arbitrary configuration updates by creating a new app directory and putting those configurations in that directory.
The deployer pushes the configuration bundle to the cluster using different methods depending on the selected push mode and on the type of configuration files that it is pushing. See What exactly does the deployer send to the cluster?
Where to place the configuration bundle on the deployer
On the deployer, the configuration bundle resides under the $SPLUNK_HOME/etc/shcluster directory. The set of files under that directory constitutes the configuration bundle.
The directory has this structure:

```
$SPLUNK_HOME/etc/shcluster/
    apps/
        <app-name>/
        <app-name>/
        ...
    users/
```
Note the following general points:
- The configuration bundle must contain at least one subdirectory under either /apps or /users. The deployer will error out if you attempt to push a configuration bundle that contains no app or user subdirectories.
- The deployer only pushes the contents of subdirectories under /shcluster. It does not push any standalone files directly under /shcluster. For example, it will not push the file /shcluster/file1. To deploy standalone files, create a new app directory under /apps and put the files in its local subdirectory. For example, put file1 under $SPLUNK_HOME/etc/shcluster/apps/newapp/local.
- The /shcluster location is only for files that you want to distribute to cluster members. The deployer does not use the files in that directory for its own configuration needs.
Note the following points regarding apps:
- Put each app in its own subdirectory under /apps. You must untar the app.
- The configuration bundle must contain all previously pushed apps, as well as any new ones. If you delete an app from the bundle, the next time you push the bundle, the app will get deleted from the cluster members.
- To update an app on the cluster members, put the updated version in the configuration bundle. Simply overwrite the existing version of the app.
- To delete an app that you previously pushed, remove it from the configuration bundle. When you next push the bundle, each member will delete it from its own file system. Note: If you need to remove an app, inspect its app.conf file to make sure that state = enabled. If state = disabled, the deployer will not remove the app even if you remove it from the configuration bundle.
- When the deployer pushes the bundle, it pushes the full contents of all apps that have changed since the last push. Even if the only change to an app is a single file, it pushes the entire app. If an app has not changed, the deployer does not push it again.
Note the following points regarding user settings:
- To push user-specific files, put the files under the /users subdirectories where you want them to reside on the members.
- The deployer will push the content under /shcluster/users only if the content includes at least one configuration file. For example, if you place a private lookup table or view under some user subdirectory, the deployer will push it only if there is also at least one configuration file somewhere under /shcluster/users.
. - You cannot subsequently delete user settings by deleting the files from the deployer and then pushing the bundle again. In this respect, user settings behave differently from app settings.
What exactly does the deployer send to the cluster?
The deployer pushes the configuration bundle to the cluster as a set of tarballs.
There are several types of tarballs:
- App tarballs. Each app has one or two associated tarballs, depending on the push mode, which contain the app's configurations:
  - A local tarball, which the deployer pushes to the captain, which then replicates the tarball's contents to the members.
  - A default tarball, which the deployer pushes directly to the members.
- A user tarball, which the deployer pushes to the captain, which then replicates the tarball's contents to the members. There is exactly one user tarball for the entire system. The user tarball contains the contents of the $SPLUNK_HOME/etc/shcluster/users directory on the deployer.
On the initial push to a new cluster, the deployer distributes all tarballs to the cluster. On subsequent pushes, it distributes only tarballs for new apps and for apps that have changed since the last push. If even a single file has changed in an app, the deployer redistributes the entire app. It does not redistribute unchanged apps. It also redistributes the user tarball if the set of user configurations changes.
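One way to picture the "redistribute only changed apps" rule is change detection by content digest. This is an illustration of the idea, not Splunk's actual implementation; the app names and file contents are made up:

```python
# Sketch: hash each app's files and compare against the digest recorded at
# the last push to decide which app tarballs need rebuilding.
import hashlib

def app_digest(files):
    """files: {relative_path: bytes}. Returns a stable digest for the app."""
    h = hashlib.sha256()
    for path in sorted(files):
        h.update(path.encode())
        h.update(files[path])
    return h.hexdigest()

last_push = {"appA": app_digest({"default/app.conf": b"[ui]\n"}),
             "appB": app_digest({"default/app.conf": b"[ui]\n"})}
current = {"appA": {"default/app.conf": b"[ui]\n"},             # unchanged
           "appB": {"default/app.conf": b"[ui]\nlabel = B\n"}}  # one file edited
changed = [app for app, files in current.items()
           if app_digest(files) != last_push.get(app)]
print(changed)  # ['appB']
```

Note that a change to even a single file changes the whole app's digest, which matches the behavior described above: the entire app gets redistributed.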
The tarball creation and distribution process
The deployer stages the configuration bundle in a special location on its file system, $SPLUNK_HOME/var/run/splunk/deploy. The staging area contains the set of tarballs for all current apps, plus the one user tarball. These tarballs are created or updated only when you invoke the splunk apply shcluster-bundle command.
When you invoke the splunk apply shcluster-bundle command, a two-step process occurs:
- The deployer creates tarballs for any apps that are new or have changed since the last time the command was invoked. Note the following:
  - The current push mode determines the contents of the newly created app tarballs, including whether the app will have a local or a default tarball, or both.
  - For new apps, the deployer adds the tarballs to the staging area.
  - For changed apps, the deployer overwrites each existing tarball with a new tarball of the same type. For example, if the deployer is operating in local_only push mode, it generates only a local tarball for a changed app. The deployer then overwrites the app's existing local tarball, if any. If the app also has an existing default tarball, which was created under a different push mode, the deployer ignores it, leaving the tarball in place.
  - The deployer also creates a new user tarball, if the user directory has changed.
- The deployer pushes the new and changed tarballs to the cluster. It sends new or changed default tarballs to all existing cluster members. At the same time, it sends new or changed local or user tarballs to the captain, which replicates their contents to the members.
Tarball distribution to new or rejoining members
If a new member joins the cluster, the deployer sends the member the full set of default tarballs currently in its staging area, ensuring that the member has the same set of default configurations as existing members.
If a down member rejoins the cluster, the deployer sends the member those default tarballs that it created or changed while the member was in a down state.
As always, the captain also replicates its baseline configurations to both new and rejoined members, including any configurations originating from local or user tarballs previously pushed to the captain from the deployer. See Configuration updates that the cluster replicates.
The effect of push mode on app tarball creation
Depending on the push mode, the deployer creates either one or two tarballs per new or changed app. The push mode also affects the contents of the app tarballs.
Deployer push modes do not affect the user tarball in any way.
| Deployer push mode | Content included from app | Configuration bundle contents, per app |
|---|---|---|
| full | all app directories | Two tarballs: a default tarball and a local tarball |
| local_only | /local app directory | One tarball: a local tarball |
| default_only | all app directories except /local | One tarball: a default tarball |
| merge_to_default | all app directories | One tarball: a default tarball, with the /local settings merged into /default |
When constructing the bundle, the deployer treats an app's /metadata/local.meta and /metadata/default.meta files the same way that it treats the /local and /default directories. For example, in the merge_to_default push mode, the deployer merges the app's /metadata/local.meta and /metadata/default.meta files into a single /metadata/default.meta file and includes it in the default tarball.
The effect of push mode change on tarball creation
You can change the push mode globally (deployer-wide) or on a per-app basis.
When you change the push mode globally, there is no immediate effect on the tarballs already in staging. Tarballs get updated only when the apps themselves are updated, through the addition or change of at least one file, followed by an invocation of the splunk apply shcluster-bundle command. A global change to push mode occurs in the deployer's configuration without touching the apps themselves. Therefore, a global push mode change does not itself cause any tarballs to get updated.
When you change the push mode for a single app, you do so by editing the app's app.conf file, thereby updating the app itself. The next time that you invoke the splunk apply shcluster-bundle command, the deployer will create one or two new tarballs (depending on the push mode) for that app in staging. It will then distribute those tarballs to the cluster.
A change in push mode to local_only for an app, whether configured globally or on a per-app basis, means that the deployer will, in the future, create only local tarballs for that app. However, any existing default tarball for that app, created during the prior push mode, persists in staging and will get pushed to new or rejoining members in the normal fashion. By this mechanism, new and rejoining members always have the same default baseline as existing members.
Where deployed configurations live on the cluster members
On cluster members, deployed apps and user configurations reside under $SPLUNK_HOME/etc/apps and $SPLUNK_HOME/etc/users, respectively.
App configurations
When it deploys apps, the deployer bundles each app's configurations into tarballs and pushes the tarballs to the cluster. The deployer push mode determines the contents of the tarballs and how they get pushed. See What exactly does the deployer send to the cluster?.
For each app that it deploys, the deployer creates one or two tarballs, depending on the push mode:
- A default tarball
- A local tarball
The deployer pushes default tarballs directly to the cluster members. The members untar each default tarball and apply its settings locally. The settings in the default tarball overwrite directories already on the members. For example, if the default tarball contains a /default directory for appA, that /default directory from the tarball overwrites any /appA/default directory currently on the members.
The deployer pushes local tarballs to the captain. The captain untars each local tarball and then replicates its settings to all the cluster members through its normal method for replicating configurations, as described in Configuration updates that the cluster replicates. Each member merges the /local settings that it receives from the captain with the app's /local settings already existing on the member. If conflicts occur, values already on the member take precedence. See the section User configurations for an example of how this merging mechanism works.
User configurations
The deployer distributes user configurations to the captain only. The captain then replicates the settings to all the cluster members through its normal method for replicating configurations, as described in Configuration updates that the cluster replicates.
The user configurations reside in the normal user locations on the cluster members. They are not subject to deployer push mode settings. They behave just like any runtime settings created by cluster users through Splunk Web.
The deployment of user configurations is of value mainly for migrating settings from a standalone search head or a search head pool to a search head cluster. See Migrate from a search head pool to a search head cluster.
When you migrate user configurations to an existing search head cluster, the deployer respects the values for attributes that already exist on the cluster. It does not overwrite values for any existing attributes within existing stanzas.
For example, say the cluster members have an existing file $SPLUNK_HOME/etc/users/admin/search/local/savedsearches.conf containing this stanza:

```
[my search]
search = index=_internal | head 1
```

and on the deployer, there's the file $SPLUNK_HOME/etc/shcluster/users/admin/search/local/savedsearches.conf with these stanzas:

```
[my search]
search = index=_internal | head 10
enableSched = 1

[my other search]
search = FOOBAR
```

This will result in a final merged configuration on the members:

```
[my search]
search = index=_internal | head 1
enableSched = 1

[my other search]
search = FOOBAR
```
The [my search] stanza, which already existed on the members, keeps the existing setting for its search attribute, but adds the migrated setting for the enableSched attribute, because that attribute did not already exist in the stanza. The [my other search] stanza, which did not already exist on the members, gets added to the file, along with its search attribute.
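The merge behavior in the example above can be sketched as a stanza-level merge in which attributes already present on the member are never overwritten. Configs here are modeled as nested dicts, a simplification of the actual .conf files:

```python
# Sketch: merge migrated user stanzas into existing ones; existing values win.

def merge_user_config(existing, incoming):
    merged = {stanza: dict(attrs) for stanza, attrs in existing.items()}
    for stanza, attrs in incoming.items():
        target = merged.setdefault(stanza, {})
        for attr, value in attrs.items():
            target.setdefault(attr, value)  # keep the member's existing value
    return merged

on_member = {"my search": {"search": "index=_internal | head 1"}}
from_deployer = {"my search": {"search": "index=_internal | head 10",
                               "enableSched": "1"},
                 "my other search": {"search": "FOOBAR"}}
result = merge_user_config(on_member, from_deployer)
print(result["my search"]["search"])        # index=_internal | head 1
print(result["my search"]["enableSched"])   # 1
print(result["my other search"]["search"])  # FOOBAR
```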
Deploy a configuration bundle
To deploy a configuration bundle, you push the bundle from the deployer to the cluster members using the splunk apply shcluster-bundle command.
Push the configuration bundle
Once you push the bundle, do not begin any operations that change the search head captain until the bundle push operation is complete.
To push the configuration bundle to the cluster members:
- Put new or changed apps and any user configuration changes in subdirectories under shcluster/ on the deployer.
- Untar any app.
- Ensure you have selected the correct deployer push mode.
- Ensure that the search head cluster is in a healthy state and that the captain is online. To check the cluster status, run the following command on any cluster member:

```
splunk show shcluster-status --verbose
```

In the output, verify that the value for the captain's service_ready_flag is 1. For more details on this command and an example of its output, see Initiate a searchable rolling restart from the command line.
- Run the splunk apply shcluster-bundle command on the deployer:

```
splunk apply shcluster-bundle -target <URI>:<management_port> -auth <username>:<password>
```

Note the following:
  - The -target parameter specifies the URI and management port for any member of the cluster, for example, https://10.0.1.14:8089. You specify only one cluster member but the deployer pushes to all members. This parameter is required.
  - The -auth parameter specifies credentials for the deployer instance.

In response to splunk apply shcluster-bundle, the deployer displays this message:

```
Warning: Depending on the configuration changes being pushed, this command might initiate a rolling-restart of the cluster members. Please refer to the documentation for the details. Do you wish to continue? [y/n]:
```

For information on which configuration changes trigger restart, see $SPLUNK_HOME/etc/system/default/app.conf. It lists the configuration files that do not trigger restart when changed. All other configuration changes trigger restart.
- To proceed, respond to the message with y.

You can eliminate the message by appending the flag --answer-yes to the splunk apply shcluster-bundle command. For example:

```
splunk apply shcluster-bundle --answer-yes -target <URI>:<management_port> -auth <username>:<password>
```

This is useful if you are including the command in a script or otherwise automating the process.
If you attempt to push a very large tarball (>200 MB), the operation might fail due to various timeouts. Delete some of the contents from the tarball's app, if possible, and try again.
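If you script the push, you can gate it on the captain's readiness. The sketch below is illustrative, not a supported tool: it parses a sample of the status output for service_ready_flag and builds the push command as a list; adapt the regular expression to the exact output format of your Splunk version, and note that the target URI and credentials shown are placeholders:

```python
# Hedged automation sketch: check captain readiness, then build the push
# command (e.g. for subprocess.run). Nothing here talks to a real cluster.
import re
import shlex

def captain_ready(status_output):
    """True if the parsed service_ready_flag is 1."""
    match = re.search(r"service_ready_flag\s*:\s*(\d)", status_output)
    return bool(match) and match.group(1) == "1"

def build_push_command(target, auth):
    return ["splunk", "apply", "shcluster-bundle", "--answer-yes",
            "-target", target, "-auth", auth]

# Sample fragment of `splunk show shcluster-status --verbose` output.
sample_status = "Captain:\n service_ready_flag : 1\n"
if captain_ready(sample_status):
    cmd = build_push_command("https://10.0.1.14:8089", "admin:changeme")
    print(shlex.join(cmd))
```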
How the cluster applies the configuration bundle
The deployer and the cluster members execute the command as follows:
- The deployer stages the app configuration bundle in a special location on its file system, $SPLUNK_HOME/var/run/splunk/deploy. The bundle consists of apps that are new or have changed since the last push. For each such app, the deployer copies the app's configurations into directories dictated by the push mode. It then packages those directories in tarballs. See What exactly does the deployer send to the cluster?.
- The deployer pushes each app tarball, one-by-one, to either the cluster members or the captain:
  - The deployer pushes default tarballs directly to the members, which apply the tarballs locally.
  - The deployer pushes local tarballs to the captain, which untars them and replicates their settings to the members.
  - The deployer separately creates and pushes the users tarball, if any user configurations have changed since the last push. It pushes the users tarball to the captain, which untars it and replicates its settings to the members.
- At the end of the bundle push, a rolling restart occurs if necessary. During a rolling restart, approximately 10% of the members restart at a time, until all have restarted. See Restart the search head cluster.
Note: During a rolling restart, all members, including the captain, restart. Restart of the captain triggers the election process, which can result in a new captain. After the final member restarts, the cluster requires approximately 60 seconds to stabilize. During this interval, error messages might appear. You can ignore these messages. They should stop after approximately 60 seconds.
Control the restart process
You should usually let the cluster automatically trigger any rolling restart, as necessary. However, if you need to maintain control over the restart process, you can run a version of splunk apply shcluster-bundle that stops short of the restart. If you do so, you must later initiate the restart yourself. The configuration bundle changes do not take effect until the members restart.

To run splunk apply shcluster-bundle without triggering a restart, use this version of the command:
splunk apply shcluster-bundle -target <URI>:<management_port> -action stage && splunk apply shcluster-bundle -target <URI>:<management_port> -action send
The members will receive the bundle, but they will not restart. Splunk Web displays the message "Splunk must be restarted for changes to take effect."

To initiate a rolling restart later, invoke the splunk rolling-restart command from the captain:
splunk rolling-restart shcluster-members
Push an empty bundle
In most circumstances, it is a bad idea to push an empty bundle. By doing so, you cause the cluster members to delete all the apps previously distributed by the deployer. For that reason, if you attempt to push an empty bundle, the deployer assumes that you have made a mistake and it returns an error message, similar to this one:
Error while deploying apps to first member: Found zero deployable apps to send; /opt/splunk/etc/shcluster is likely empty; ensure that the command is being run on the deployer. If intentionally attempting to remove all apps from the search head cluster use the "force" option. WARNING: using this option with an empty shcluster directory will delete all apps previously deployed to the search head cluster; use with extreme caution!
You can override this behavior with the -force true flag:

splunk apply shcluster-bundle --answer-yes -force true -target <URI>:<management_port> -auth <username>:<password>

Each member then deletes all previously deployed apps from its $SPLUNK_HOME/etc/apps directory.
If you need to remove an app, inspect its app.conf file to make sure that state = enabled. If state = disabled, the deployer will not remove the app, even if you remove it from the configuration bundle.
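For reference, the state setting lives in the [install] stanza of the app's app.conf. A minimal sketch of an app that the deployer can manage and remove:

```ini
# app.conf within the app (for example, default/app.conf)
[install]
# enabled allows the deployer to remove the app when it is
# dropped from the configuration bundle
state = enabled
```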
Check the status of the latest deployer bundle
You can check the status of the bundle that the deployer most recently pushed. Use the following command with the URI for each search head cluster member that you want to check:

splunk list shcluster-bundle -member_uri <URI>:<management_port> -auth <username>:<password>

This command returns the deployer_push_mode and deployer_push_status, as well as other useful information about the bundle on that search head.
Allow a user without admin privileges to push the configuration bundle
By default, only admin users (that is, those assigned a role containing the admin_all_objects capability) can push the configuration bundle to the cluster members. Depending on how you manage your deployment, you might want to allow users without full admin privileges to push apps or other configurations to the cluster members. You can do so by overriding the controlling stanza in the default restmap.conf file.

The default restmap.conf file includes a stanza that controls the bundle push process:

[apps-deploy:apps-deploy]
match=/apps/deploy
capability.post=admin_all_objects
authKeyStanza=shclustering

You can specify a different capability in this stanza, either an existing capability or one that you define specifically for the purpose. If you assign that capability to a new role, users given that role can then push the configuration bundle. You can optionally specify both the existing admin_all_objects capability and the new capability, so that existing admin users retain the ability to push the bundle.
To create a new special-purpose capability and then assign that capability to the bundle push process:

1. On the deployer, create a new authorize.conf file under $SPLUNK_HOME/etc/system/local, or edit the file if it already exists at that location. Add the new capability to that file. For example:
   [capability::conf_bundle_push]
2. In the same authorize.conf file, create a role specific to that capability. For example:
   [role_deployer_push]
   conf_bundle_push=enabled
3. On the deployer, create a new restmap.conf file under $SPLUNK_HOME/etc/system/local, or edit the file if it already exists at that location. Change the value of the capability.post setting to include both the conf_bundle_push capability and the admin_all_objects capability. For example:
   [apps-deploy:apps-deploy]
   match=/apps/deploy
   capability.post=conf_bundle_push OR admin_all_objects
   authKeyStanza=shclustering
You can now assign the role_deployer_push role to any non-admin users that need to push the bundle.

You can also assign the capability.post setting to an existing capability, instead of creating a new one. In that case, create a role specific to the existing capability and assign the appropriate users to that role.
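As an illustrative sketch of that alternative, suppose you reuse the existing restart_splunkd capability (an arbitrary choice for illustration; the role name is hypothetical):

```ini
# restmap.conf on the deployer ($SPLUNK_HOME/etc/system/local):
# reference the existing capability in capability.post
[apps-deploy:apps-deploy]
match=/apps/deploy
capability.post=restart_splunkd OR admin_all_objects
authKeyStanza=shclustering

# authorize.conf on the deployer: a role built around that capability
# (role name is hypothetical)
[role_bundle_pusher]
restart_splunkd = enabled
```

Keep in mind that reusing an existing capability also grants its original function (here, the ability to restart splunkd) to anyone holding the role.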
For more information on capabilities, see the chapter Users and role-based access control in Securing Splunk Enterprise.
Preserve lookup files across app upgrades
Any app that uses lookup tables typically ships with stubs for the table files. Once the app is in use on the search head, the tables get populated by runtime processes, such as searches. When you later upgrade the app, you can choose to preserve the populated lookup tables or overwrite them with stubs.
Set the deployer_lookups_push_mode in app.conf to specify how the deployer handles lookup tables when upgrading apps. This setting determines the behavior of the -preserve-lookups option when you push the configuration bundle using the splunk apply shcluster-bundle command.
1. Choose the deployer_lookups_push_mode that best fits your use case for each app:

   Mode | Description
   always_preserve (default) | Always preserves lookup tables, regardless of how you set the -preserve-lookups flag.
   always_overwrite | Always overwrites lookup tables with stub files, regardless of how you set the -preserve-lookups flag.
   overwrite_on_change | Overwrites lookup tables if the app contents have changed, regardless of how you set the -preserve-lookups flag. If the app contents have not changed, the lookup tables are preserved.
   preserve_lookups | Preserves lookup tables only if you run the splunk apply shcluster-bundle command on the deployer with the -preserve-lookups flag set to true. If you do not set the -preserve-lookups flag to true, the populated lookup tables are overwritten.
2. Set your chosen deployer_lookups_push_mode globally in the [shclustering] stanza of the system/local/app.conf file on the deployer. For example:
   [shclustering]
   deployer_lookups_push_mode = preserve_lookups
3. (Optional) If you want to use a different setting for a specific app, set the deployer_lookups_push_mode locally in the [shclustering] stanza of the local/app.conf file under the specific app. If the deployer_lookups_push_mode is not explicitly set in app.conf under the specific app, the app uses the global deployer_lookups_push_mode setting.
4. Push the bundle to the search head cluster. If you are using the preserve_lookups mode, set the -preserve-lookups flag to true to preserve the populated lookup tables for your apps:
   splunk apply shcluster-bundle -target <URI>:<management_port> -preserve-lookups true -auth <username>:<password>
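As an example of a per-app override, a hypothetical app named my_app could carry its own setting in its local app.conf on the deployer (the app name is an assumption):

```ini
# On the deployer: $SPLUNK_HOME/etc/shcluster/apps/my_app/local/app.conf
# Overrides the deployer's global deployer_lookups_push_mode
# for this app only
[shclustering]
deployer_lookups_push_mode = always_preserve
```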
To ensure that a stub persists on members only if there is no existing table file of the same name already on the members, this feature might temporarily rename a table file using a .default extension. For example, lookup1.csv becomes lookup1.csv.default. Therefore, if you have been manually renaming table files with a .default extension, you might run into problems when using this feature. Contact Splunk Support before proceeding.
Effect of merge_to_default push mode on management of app-level knowledge objects
For apps pushed to members through the merge_to_default push mode, be aware of the following restrictions. After you deploy an app to the members, you cannot delete the app's baseline knowledge objects through Splunk Web, the CLI, or the REST API. You also cannot move, share, or unshare those knowledge objects.
This limitation applies only to the app's baseline knowledge objects, that is, those distributed from the deployer to the members. It does not apply to the app's runtime knowledge objects, if any. For example, if you deploy an app and then subsequently use Splunk Web to create a new knowledge object in the app, you can manage that object with Splunk Web or any of the other usual methods.
The limitation on managing baseline knowledge objects applies to lookup tables, dashboards, reports, macros, field extractions, and so on. The only exception to this rule is for app-level lookup table files that do not have a permission stanza in default.meta. Such a lookup file can be deleted through a member's Splunk Web.
To delete an app-level baseline knowledge object, redeploy an updated version of the app that does not include the knowledge object.
This condition does not apply to user-level knowledge objects pushed by the deployer. User-level objects can be managed by all the usual methods.
The limitation on managing baseline knowledge objects occurs only when using the merge_to_default push mode. With this push mode, the deployer moves all local app configurations to the default directories before it pushes the app to the members. Default configurations cannot be moved or otherwise managed. On the other hand, any runtime knowledge objects reside in the app's local directory and therefore can be managed in the normal way. For more information on where deployed configurations reside, see App configurations.

You can work around this limitation by changing the deployer_push_mode to full or local_only. To determine which deployer push mode best fits your use case, see Choose a deployer push mode.
Parallelize app deployment for clusters with many apps
In cases where you regularly push many apps to members, you can accelerate the deployment process by implementing a ParallelPush policy. This policy offers a way to push apps via a separate thread for each member.
In default operation, the deployer uses a single thread to push all app tarballs to the members. By instead using a separate thread for each member, you can speed up that part of the deployment operation.
To turn on the ParallelPush policy, change the deployerPushThreads setting in server.conf on the deployer. By default, this setting is set to 1, which means that a single thread handles app deployment to all members. Change this setting to "auto" to allocate one thread to each member. Restart the deployer for the change to take effect.
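A minimal sketch, assuming the setting resides in the [shclustering] stanza of the deployer's server.conf (verify against the server.conf spec for your version):

```ini
# server.conf on the deployer ($SPLUNK_HOME/etc/system/local)
[shclustering]
# "auto" allocates one push thread per cluster member (default: 1)
deployerPushThreads = auto
```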
With deployerPushThreads set to "auto", the deployer sends the app tarballs to the members in parallel, as fast as each member can apply them. For example, if the configuration bundle includes appA, appB, appC, and so on, and the cluster has five members, the push proceeds as follows: at the start of the push, the deployer starts threads for all five members and sends appA through all five threads. It then sends appB through each thread as soon as the thread's member is ready to receive it, followed by appC, and so on.
It is possible that one member might finish applying all its apps' tarballs while other members are still in the midst of applying tarballs. If a rolling restart is necessary, the restart waits until all members have finished applying their app tarballs.
The deployerPushThreads setting affects only the deployment of default app tarballs to the members. It does not affect deployment of local app tarballs or user tarballs, both of which are deployed to the captain, not directly to the members. For details on the various types of tarballs and what they contain, see What exactly does the deployer send to the cluster?
A member might have out-of-sync configurations when it rejoins the cluster
When a down member rejoins the cluster, it checks the deployer for app updates. The deployer then distributes any new or changed default app tarballs to the rejoining member, ensuring that all members have the same set of default configurations.
However, when a member rejoins the cluster, it must also resync its baseline replicated configurations with the captain's baseline. Since the deployer distributes local app tarballs to the captain, which then replicates them to the members, it is possible for the rejoining member to have out-of-sync local app configurations, particularly if the rejoining member was down for an extended period of time. For details on this situation and how to resync baseline configurations when necessary, see Replication synchronization issues.
Consequence and remediation of deployer failure
The deployer distributes the configuration bundle to the cluster members under these circumstances:
- When you invoke the splunk apply shcluster-bundle command, the deployer pushes the apps and users configurations.
- When a member joins or rejoins the cluster, it checks the deployer for app updates. A member also checks for updates whenever it restarts. If any app updates are available, it pulls them from the deployer.
This means that if the deployer is down:
- You cannot push new configurations to the members.
- A member that joins or rejoins the cluster, or restarts, cannot pull the latest set of app tarballs.
The implications of the deployer being down depend, therefore, on the state of the cluster members. These are the main cases to consider:
- The deployer is down but the set of cluster members remains stable.
- The deployer is down and a member attempts to join or rejoin the cluster.
The deployer is down but the set of cluster members remains stable
If no member joins or rejoins the cluster while the deployer is down, there are no important consequences to the functioning of the cluster. All member configurations remain in sync and the cluster continues to operate normally. The only consequence is the obvious one, that you cannot push new configurations to the members during this time.
The deployer is down and a member attempts to join or rejoin the cluster
In the case of a member attempting to join or rejoin the cluster while the deployer is down, there is the possibility that the apps configuration on that member will be out-of-sync with the apps configuration on the other cluster members:
- A new member cannot pull the current set of app tarballs.
- A member that left the cluster before the deployer failed and rejoined after the failure cannot pull any updates made to the apps portion of the bundle while the member was down and the deployer was still running.
In these circumstances, the joining/rejoining member will have a different set of apps configurations from the other cluster members. Depending on the nature of the bundle changes, this can cause the joining member to behave differently from the other members. It can even lead to failure of the entire cluster. Therefore, you must make sure that this circumstance does not develop.
How to remedy deployer failure
Remediation is two-fold:

1. Prevent any member from joining or rejoining the cluster during deployer failure, unless you can be certain that the set of configurations on the joining member is identical to that on the other members, for example, if the rejoining member went down subsequent to the deployer failure.
2. Bring up a new deployer:
   - Configure a new deployer instance. See Configure the deployer.
   - Restore the contents of $SPLUNK_HOME/etc/shcluster to the new instance from backup.
   - If necessary, update the conf_deploy_fetch_url values on all search head cluster members.
   - Push the restored bundle contents to all members by running the splunk apply shcluster-bundle command.
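If the new deployer has a different URI, each member's conf_deploy_fetch_url needs updating in server.conf. A minimal sketch (the hostname is a placeholder):

```ini
# server.conf on each search head cluster member
# ($SPLUNK_HOME/etc/system/local); the URI below is a placeholder
[shclustering]
conf_deploy_fetch_url = https://new-deployer.example.com:8089
```

Restart each member for the server.conf change to take effect.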
This documentation applies to the following versions of Splunk® Enterprise: 9.0.6, 9.0.7, 9.0.8, 9.0.9, 9.0.10, 9.1.0, 9.1.1, 9.1.2, 9.1.3, 9.1.4, 9.1.5, 9.1.6, 9.1.7, 9.2.0, 9.2.1, 9.2.2, 9.2.3, 9.2.4, 9.3.0, 9.3.1, 9.3.2, 9.4.0