Use the deployer to distribute apps and configuration updates
The deployer is a Splunk Enterprise instance that you use to distribute apps and certain other configuration updates to search head cluster members. The set of updates that the deployer distributes is called the configuration bundle.
The deployer distributes the configuration bundle in response to your command. The deployer also distributes the bundle when a member joins or rejoins the cluster.
Caution: You must use the deployer, not the deployment server, to distribute apps to cluster members. Use of the deployer eliminates the possibility of conflict with the run-time updates that the cluster replicates automatically by means of the mechanism described in Configuration updates that the cluster replicates.
For details of your cluster's app deployment process, view the Search Head Clustering: App Deployment dashboard in the monitoring console. See Use the monitoring console to view search head cluster status.
What configurations does the deployer manage?
The deployer has these main roles:
- It handles migration of app and user configurations into the search head cluster from non-cluster instances and search head pools.
- It deploys baseline app configurations to search head cluster members.
- It provides the means to distribute non-replicated, non-runtime configuration updates to all search head cluster members.
You do not use the deployer to distribute search-related runtime configuration changes from one cluster member to the other members. Instead, the cluster automatically replicates such changes to all cluster members. For example, if a user creates a saved search on one member, the cluster automatically replicates the search to all other members. See Configuration updates that the cluster replicates. To distribute all other updates, you need the deployer.
Configurations move in one direction only: from the deployer to the members. The members never upload configurations to the deployer. It is also unlikely that you will ever need to force such behavior by manually copying files from the cluster members to the deployer, because the members continually replicate all runtime configurations among themselves.
Types of updates that the deployer handles
These are the specific types of updates that require the deployer:
- New or upgraded apps.
- Configuration files that you edit directly.
- All non-search-related updates, even those that can be configured through the CLI or Splunk Web, such as updates to indexes.conf or inputs.conf.
- Settings that need to be migrated from a search head pool or a standalone search head. These can be app or user settings.
Note: You use the deployer to deploy configuration updates only. You cannot use it for initial configuration of the search head cluster or for version upgrades to the Splunk Enterprise instances that the members run on.
Types of updates that the deployer does not handle
You do not use the deployer to distribute certain runtime changes from one cluster member to the other members. These changes are handled automatically by configuration replication. See How configuration changes propagate across the search head cluster.
Because the deployer manages only a subset of configurations, note the following:
- The deployer does not represent a "single source of truth" for all configurations in the cluster.
- You cannot use the deployer, by itself, to restore the latest state to cluster members.
App upgrades and runtime changes
Because of how configuration file precedence works, changes that users make to apps at runtime persist in the apps through subsequent upgrades.
Say, for example, that you deploy the 1.0 version of some app, and then a user modifies the app's dashboards. When you later deploy the 1.1 version of the app, the user modifications will persist in the 1.1 version of the app.
As explained in Configuration updates that the cluster replicates, the cluster automatically replicates most runtime changes to all members. Those runtime changes do not get subsequently uploaded to the deployer, but because of the way configuration layering works, those changes have precedence over the configurations in the unmodified apps distributed by the deployer. To understand this issue in detail, read the rest of this topic, as well as the topic Configuration file precedence in the Admin Manual.
Custom apps and deleted files
The mechanism for deploying an upgraded version of an app does not recognize any deleted files or directories except for those residing under the default and local subdirectories. Therefore, if your custom app contains an additional directory at the level of default and local, that directory and all its files will persist from upgrade to upgrade, even if some of the files, or the directory itself, are no longer present in an upgraded version of the app.
To delete such files or directories, you must delete them manually, directly on the cluster members.
Once you delete the files or directories from the cluster members, they will not reappear the next time you deploy an upgrade of the app, assuming that they are not present in the upgraded app.
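For example, a minimal sketch of the manual cleanup, assuming a hypothetical app named myapp that shipped an extra scripts directory which the upgraded version no longer includes:

# Run on each cluster member:
rm -r $SPLUNK_HOME/etc/apps/myapp/scripts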
When does the deployer distribute configurations to the members?
The deployer distributes app configurations to the cluster members under these circumstances:
- When you invoke the splunk apply shcluster-bundle command, the deployer pushes any new or changed configurations to the members. See Deploy a configuration bundle.
- When a member joins or rejoins the cluster, it checks the deployer for app updates. A member also checks for updates whenever it restarts. If any updates are available, it pulls them from the deployer.
When you make a change to the set of apps on the deployer and invoke the splunk apply shcluster-bundle command, the deployer creates new tarballs for each changed app and then pushes those tarballs to the current members. When a new member joins or rejoins the cluster, it receives the current set of tarballs. This method ensures that all members, whether new or current, maintain identical sets of configurations. For example, if you change an app but do not run splunk apply shcluster-bundle to push the change to the current set of members, any joining member also does not receive that change.
For more information on how the deployer creates the app tarballs, see What exactly does the deployer send to the cluster?
The deployer distributes user configurations to the captain only when you invoke the splunk apply shcluster-bundle command. The captain then replicates those configurations to the members.
Configure the deployer
Note: The actions in this subsection are integrated into the procedure for deploying the search head cluster, described in the topic Deploy a search head cluster. If you already set up the deployer during initial deployment of the search head cluster, you can skip this section.
Choose an instance to be the deployer
Each search head cluster needs one deployer. The deployer must run on a Splunk Enterprise instance outside the search head cluster.
Depending on the specific components of your Splunk Enterprise environment, the deployer might be able to run on an existing Splunk Enterprise instance with other responsibilities, such as a deployment server or the master node of an indexer cluster. Otherwise, you can run it on a dedicated instance. See Deployer requirements.
Deploy to multiple clusters
The deployer sends the same configuration bundle to all cluster members that it services. Therefore, if you have multiple search head clusters, you can use the same deployer for all the clusters only if the clusters employ exactly the same configurations, apps, and so on.
If you anticipate that your clusters might need different configurations over time, set up a separate deployer for each cluster.
Set a secret key on the deployer
You must configure the secret key on the deployer and all search head cluster members. The deployer uses this key to authenticate communication with the cluster members. To set the key, specify the pass4SymmKey attribute in either the [general] or the [shclustering] stanza of the deployer's server.conf file. For example:

[shclustering]
pass4SymmKey = yoursecretkey
The key must be the same for all cluster members and the deployer. You can set the key on the cluster members during initialization.
You must restart the deployer instance for the key to take effect.
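For example, from the deployer's $SPLUNK_HOME/bin directory:

splunk restart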
Note: If there is a mismatch between the value of pass4SymmKey on the cluster members and on the deployer (for example, you set it on the members but neglect to set it on the deployer), you will get an error message when the deployer attempts to push the configuration bundle. The message will resemble this:
Error while deploying apps to first member: ConfDeploymentException: Error while fetching apps baseline on target=https://testitls1l:8089: Non-200/201 status_code=401; {"messages":[{"type":"WARN","text":"call not properly authenticated"}]}
Set the search head cluster label on the deployer
The search head cluster label is useful for identifying the cluster in the monitoring console. This parameter is optional, but if you configure it on one member, you must configure it with the same value on all members, as well as on the deployer.
To set the label, specify the shcluster_label attribute in the [shclustering] stanza of the deployer's server.conf file. For example:

[shclustering]
shcluster_label = shcluster1
See Set cluster labels in Monitoring Splunk Enterprise.
Point the cluster members to the deployer
Each cluster member needs to know the location of the deployer. Splunk recommends that you specify the deployer location during member initialization. See Deploy a search head cluster.
If you do not set the deployer location at initialization time, you must add the location to each member's server.conf file before using the deployer:

[shclustering]
conf_deploy_fetch_url = <URL>:<management_port>
The conf_deploy_fetch_url attribute specifies the URL and management port for the deployer instance.
If you later add a new member to the cluster, you must set conf_deploy_fetch_url on the member before adding it to the cluster, so it can immediately contact the deployer for the current configuration bundle, if any.
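For example, a filled-in version of the stanza, assuming a hypothetical deployer host deployer.example.com that listens on the default management port:

[shclustering]
conf_deploy_fetch_url = https://deployer.example.com:8089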
What the configuration bundle contains
The configuration bundle is the set of files that the deployer distributes to the cluster. It consists of two types of configurations:
- App configurations.
- User configurations.
You determine the contents of the configuration bundle by copying the apps or other configurations to a location on the deployer.
The deployer pushes the configuration bundle to the cluster, using a different method depending on whether the configurations are for apps or for users. On the cluster members, the app configurations obey different rules from the user configurations. See Where deployed configurations live on the cluster members.
The deployer pushes the configuration bundle to the cluster as a set of tarballs, one for each app, and one for the entire user directory.
Where to place the configuration bundle on the deployer
On the deployer, the configuration bundle resides under the $SPLUNK_HOME/etc/shcluster directory. The set of files under that directory constitutes the configuration bundle.

The directory has this structure:

$SPLUNK_HOME/etc/shcluster/
    apps/
        <app-name>/
        <app-name>/
        ...
    users/
Note the following general points:
- The configuration bundle must contain at least one subdirectory under either /apps or /users. The deployer will error out if you attempt to push a configuration bundle that contains no app or user subdirectories.
- The deployer only pushes the contents of subdirectories under shcluster. It does not push any standalone files directly under shcluster. For example, it will not push the file /shcluster/file1. To deploy standalone files, create a new app directory under /apps and put the files in its local subdirectory, as shown in the sketch after this list. For example, put file1 under $SPLUNK_HOME/etc/shcluster/apps/newapp/local.
- The shcluster location is only for files that you want to distribute to cluster members. The deployer does not use the files in that directory for its own configuration needs.
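A minimal sketch of the standalone-file workaround described in the list above, using the hypothetical app name newapp and file name file1:

mkdir -p $SPLUNK_HOME/etc/shcluster/apps/newapp/local
cp file1 $SPLUNK_HOME/etc/shcluster/apps/newapp/local/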
Note the following points regarding apps:
- Caution: Do not use the deployer to push default apps, such as the search app, to the cluster members. In addition, make sure that no app in the configuration bundle has the same name as a default app. Otherwise, it will overwrite that app on the cluster members. For example, if you create an app called "search" in the configuration bundle, it will overwrite the default search app when you push it to the cluster members.
- Put each app in its own subdirectory under /apps. You must untar the app.
- For app directories only, all files placed under both default and local subdirectories get merged into default subdirectories on the members, post-deployment. See App configurations.
- The configuration bundle must contain all previously pushed apps, as well as any new ones. If you delete an app from the bundle, the next time you push the bundle, the app will get deleted from the cluster members.
- To update an app on the cluster members, put the updated version in the configuration bundle. Simply overwrite the existing version of the app.
- To delete an app that you previously pushed, remove it from the configuration bundle. When you next push the bundle, each member will delete it from its own file system. Note: If you need to remove an app, inspect its app.conf file to make sure that state = enabled. If state = disabled, the deployer will not remove the app even if you remove it from the configuration bundle.
- When the deployer pushes the bundle, it pushes the full contents of all apps that have changed since the last push. Even if the only change to an app is a single file, it pushes the entire app. If an app has not changed, the deployer does not push it again.
Note the following points regarding user settings:
- To push user-specific files, put the files under the /users subdirectories where you want them to reside on the members.
- The deployer will push the content under /shcluster/users only if the content includes at least one configuration file. For example, if you place a private lookup table or view under some user subdirectory, the deployer will push it only if there is also at least one configuration file somewhere under /shcluster/users.
- You cannot subsequently delete user settings by deleting the files from the deployer and then pushing the bundle again. In this respect, user settings behave differently from app settings.
Where deployed configurations live on the cluster members
On the cluster members, the deployed apps and user configurations reside under $SPLUNK_HOME/etc/apps and $SPLUNK_HOME/etc/users, respectively.
App configurations
When it deploys apps, the deployer places the app configurations in default directories on the cluster members.
The deployer never deploys files to the members' local app directories, $SPLUNK_HOME/etc/apps/<app_name>/local. Instead, it deploys both local and default settings from the configuration bundle to the members' default app directories, $SPLUNK_HOME/etc/apps/<app_name>/default. This ensures that deployed settings never overwrite local or replicated runtime settings on the members. Otherwise, for example, app upgrades would wipe out runtime changes.
During the staging process that occurs prior to pushing the configuration bundle, the deployer copies the configuration bundle to a staging area on its file system, where it merges all settings from files in /shcluster/apps/<appname>/local into corresponding files in /shcluster/apps/<appname>/default. The deployer then pushes only the merged default files.
The deployer also merges the app's metadata/local.meta and metadata/default.meta files and places the result in a single metadata/default.meta file on the member.
During the merging process, settings from the local directory take precedence over any corresponding default settings. For example, if you have a /newapp/local/inputs.conf file, the deployer takes the settings from that file and merges them with any settings in /newapp/default/inputs.conf. If a particular attribute is defined in both places, the merged file retains the definition from the local directory.
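To make the merge concrete, here is a hypothetical illustration (the monitor path and settings are illustrative only). Suppose the configuration bundle contains these two files:

/newapp/local/inputs.conf:

[monitor:///var/log/myapp]
disabled = 0

/newapp/default/inputs.conf:

[monitor:///var/log/myapp]
disabled = 1
sourcetype = myapp_log

The merged default/inputs.conf that the deployer pushes to the members then contains:

[monitor:///var/log/myapp]
disabled = 0
sourcetype = myapp_log

The disabled attribute keeps its value from the local file, while sourcetype, defined only in the default file, is retained.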
User configurations
The deployer copies user configurations to the captain only. The captain then replicates the settings to all the cluster members through its normal method for replicating configurations, as described in Configuration updates that the cluster replicates.
Unlike app configurations, the user configurations reside in the normal user locations on the cluster members, and are not merged into default directories. They behave just like any runtime settings created by cluster users through Splunk Web.
The deployment of user configurations is of value mainly for migrating settings from a standalone search head or a search head pool to a search head cluster. See Migrate from a search head pool to a search head cluster.
When you migrate user configurations to an existing search head cluster, the deployer respects attributes that already exist on the cluster. It does not overwrite any existing attributes within existing stanzas.
For example, say the cluster members have an existing file $SPLUNK_HOME/etc/users/admin/search/local/savedsearches.conf containing this stanza:

[my search]
search = index=_internal | head 1
and on the deployer, there's the file $SPLUNK_HOME/etc/shcluster/users/admin/search/local/savedsearches.conf with these stanzas:

[my search]
search = index=_internal | head 10
enableSched = 1

[my other search]
search = FOOBAR
This will result in a final merged configuration on the members:
[my search]
search = index=_internal | head 1
enableSched = 1

[my other search]
search = FOOBAR
The [my search] stanza, which already existed on the members, keeps the existing setting for its search attribute, but adds the migrated setting for the enableSched attribute, because that attribute did not already exist in the stanza. The [my other search] stanza, which did not already exist on the members, gets added to the file, along with its search attribute.
Management of app-level knowledge objects
After you deploy an app to the members, you cannot subsequently delete the app's baseline knowledge objects through Splunk Web, the CLI, or the REST API. You also cannot move, share, or unshare those knowledge objects.
This limitation applies only to the app's baseline knowledge objects - those that were distributed from the deployer to the members. It does not apply to the app's runtime knowledge objects, if any. For example, if you deploy an app and then subsequently use Splunk Web to create a new knowledge object in the app, you can manage that object with Splunk Web or any other of the usual methods.
The limitation on managing baseline knowledge objects applies to lookup tables, dashboards, reports, macros, field extractions, and so on. The only exception to this rule is for app-level lookup table files that do not have a permission stanza in default.meta. Such a lookup file can be deleted through a member's Splunk Web.
The only way to delete an app-level baseline knowledge object is to redeploy an updated version of the app that does not include the knowledge object.
Note: This condition does not apply to user-level knowledge objects pushed by the deployer. User-level objects can be managed by all the usual methods.
The limitation on managing baseline knowledge objects is due to the fact that the deployer moves all local app configurations to the default directories before it pushes the app to the members. Default configurations cannot be moved or otherwise managed. On the other hand, any runtime knowledge objects reside in the app's local directory and therefore can be managed in the normal way. For more information on where deployed configurations reside, see App configurations.
What exactly does the deployer send to the cluster?
The deployer pushes the configuration bundle to the members as a set of tarballs, one for each app. In addition, it pushes one tarball consisting of the entire $SPLUNK_HOME/etc/shcluster/users directory to the captain.
On the initial push to a set of new members, the deployer distributes the entire set of app tarballs to each member. On subsequent pushes, it distributes only new apps or any apps that have changed since the last push. If even a single file has changed in an app, the deployer redistributes the entire app. It does not redistribute unchanged apps.
If you change a single file in the users directory, the deployer redeploys the entire users tarball to the captain. This is because the users directory is typically modified and redeployed only during upgrade or migration, unlike the apps directory, which might see regular updates during the lifetime of the cluster.
Caution: If you attempt to push a very large tarball (>200 MB), the operation might fail due to various timeouts. Delete some of the content from the offending app, if possible, and try again.
Deploy a configuration bundle
To deploy a configuration bundle, you push the bundle from the deployer to the cluster members.
Push the configuration bundle
To push the configuration bundle to the cluster members:
1. Put the apps and other configuration changes in subdirectories under shcluster/ on the deployer.

2. Untar any app.

3. Run the splunk apply shcluster-bundle command on the deployer:

splunk apply shcluster-bundle -target <URI>:<management_port> -auth <username>:<password>
Note the following:
- The -target parameter specifies the URI and management port for any member of the cluster, for example, https://10.0.1.14:8089. You specify only one cluster member but the deployer pushes to all members. This parameter is required.
- The -auth parameter specifies credentials for the deployer instance.
In response to splunk apply shcluster-bundle, the deployer displays this message:
Warning: Depending on the configuration changes being pushed, this command might initiate a rolling-restart of the cluster members. Please refer to the documentation for the details. Do you wish to continue? [y/n]:
For information on which configuration changes trigger restart, see $SPLUNK_HOME/etc/system/default/app.conf. It lists the configuration files that do not trigger restart when changed. All other configuration changes trigger restart.
4. To proceed, respond to the message with y.
Note: You can eliminate the message by appending the flag --answer-yes to the splunk apply shcluster-bundle command:

splunk apply shcluster-bundle --answer-yes -target <URI>:<management_port> -auth <username>:<password>
This is useful if you are including the command in a script or otherwise automating the process.
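For example, a complete invocation with illustrative values (the member URI and credentials are hypothetical):

splunk apply shcluster-bundle --answer-yes -target https://sh1.example.com:8089 -auth admin:yourpassword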
How the cluster applies the configuration bundle
The deployer and the cluster members execute the command as follows:
1. The deployer stages the configuration bundle in a separate location on its file system ($SPLUNK_HOME/var/run/splunk/deploy) and then pushes the app directories to each cluster member. The configuration bundle typically consists of several tarballs, one for each app. The deployer pushes only the new or changed apps.
2. The deployer separately pushes the users tarball to the captain, if any user configurations have changed since the last push.
3. The captain replicates any changed user configurations to the other cluster members.
4. Each cluster member applies the app tarballs locally. If the cluster determines that a rolling restart is necessary, approximately 10% of the members restart at a time, until all members have restarted.
During a rolling restart, all members, including the current captain, restart. Restart of the captain triggers the election process, which can result in a new captain. After the final member restarts, the cluster requires approximately 60 seconds to stabilize. During this interval, error messages might appear. You can ignore these messages; they should cease within approximately 60 seconds. For more information on the rolling restart process, see Restart the search head cluster.
Control the restart process
You should usually let the cluster automatically trigger any rolling restart, as necessary. However, if you need to maintain control over the restart process, you can run a version of splunk apply shcluster-bundle that stops short of the restart. If you do so, you must later initiate the restart yourself. The configuration bundle changes will not take effect until the members restart.
To run splunk apply shcluster-bundle without triggering a restart, use this version of the command:

splunk apply shcluster-bundle -action stage && splunk apply shcluster-bundle -action send
The members will receive the bundle, but they will not restart. Splunk Web will display the message "Splunk must be restarted for changes to take effect."
To initiate a rolling restart later, invoke the splunk rolling-restart command from the captain:
splunk rolling-restart shcluster-members
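To confirm that the rolling restart has completed and a captain is in place, you can run the splunk show shcluster-status command from any member:

splunk show shcluster-status -auth <username>:<password>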
Push an empty bundle
In most circumstances, it is a bad idea to push an empty bundle. By doing so, you cause the cluster members to delete all the apps previously distributed by the deployer. For that reason, if you attempt to push an empty bundle, the deployer assumes that you have made a mistake and returns an error message, similar to this one:
Error while deploying apps to first member: Found zero deployable apps to send; /opt/splunk/etc/shcluster is likely empty; ensure that the command is being run on the deployer. If intentionally attempting to remove all apps from the search head cluster use the "force" option. WARNING: using this option with an empty shcluster directory will delete all apps previously deployed to the search head cluster; use with extreme caution!
You can override this behavior with the -force true flag:
splunk apply shcluster-bundle --answer-yes -force true -target <URI>:<management_port> -auth <username>:<password>
Each member will then delete all previously deployed apps from its $SPLUNK_HOME/etc/apps directory.
If you need to remove an app, inspect its app.conf file to make sure that state = enabled. If state = disabled, the deployer will not remove the app even if you remove it from the configuration bundle.
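The state setting typically resides in the [install] stanza of the app's app.conf file. For example:

[install]
state = enabled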
Allow a user without admin privileges to push the configuration bundle
By default, only admin users (that is, those assigned a role containing the admin_all_objects capability) can push the configuration bundle to the cluster members. Depending on how you manage your deployment, you might want to allow users without full admin privileges to push apps or other configurations to the cluster members. You can do so by overriding the controlling stanza in the default restmap.conf file.
The default restmap.conf file includes a stanza that controls the bundle push process:

[apps-deploy:apps-deploy]
match=/apps/deploy
capability.post=admin_all_objects
authKeyStanza=shclustering
You can specify a different capability in this stanza, either an existing capability or one that you define specifically for the purpose. If you assign that capability to a new role, users given that role can then push the configuration bundle. You can optionally specify both the existing admin_all_objects capability and the new capability, so that existing admin users retain the ability to push the bundle.
To create a new special-purpose capability and then assign that capability to the bundle push process:
1. On the deployer, create a new authorize.conf file under $SPLUNK_HOME/etc/system/local, or edit the file if it already exists at that location. Add the new capability to that file. For example:

[capability::conf_bundle_push]
2. In the same authorize.conf file, create a role specific to that capability. For example:

[role_deployer_push]
conf_bundle_push=enabled
3. On the deployer, create a new restmap.conf file under $SPLUNK_HOME/etc/system/local, or edit the file if it already exists at that location. Change the value of the capability.post setting to include both the conf_bundle_push capability and the admin_all_objects capability. For example:

[apps-deploy:apps-deploy]
match=/apps/deploy
capability.post=conf_bundle_push OR admin_all_objects
authKeyStanza=shclustering
You can now assign the role_deployer_push role to any non-admin users that need to push the bundle.
You can also set the capability.post setting to an existing capability, instead of creating a new one. In that case, create a role specific to the existing capability and assign the appropriate users to that role.
For more information on capabilities, see the chapter Users and role-based access control in Securing Splunk Enterprise.
Maintain lookup files across app upgrades
Any app that uses lookup tables typically ships with stubs for the table files. Once the app is in use on the search head, the tables get populated by runtime processes, such as searches. When you later upgrade the app, by default the populated lookup tables get overwritten by the stub files from the latest version of the app, causing you to lose the data in the tables.
To avoid this problem, you can stipulate that the stub files in upgraded apps not overwrite any table files of the same name already on the cluster members. Run the splunk apply shcluster-bundle command on the deployer, setting the -preserve-lookups flag to "true":
splunk apply shcluster-bundle -target <URI>:<management_port> -preserve-lookups true -auth <username>:<password>
Note the following:
- The default for -preserve-lookups is "false". In other words, by default, the populated lookup tables are overwritten on upgrade.
Note: To ensure that a stub persists on members only if there is no existing table file of the same name already on the members, this feature can temporarily rename a table file with a .default extension. (So, for example, lookup1.csv becomes lookup1.csv.default.) Therefore, if you have been manually renaming table files with a .default extension, you might run into problems when using this feature. You should contact Support before proceeding.
Consequence and remediation of deployer failure
The deployer distributes the configuration bundle to the cluster members under these circumstances:
- When you invoke the splunk apply shcluster-bundle command, the deployer pushes the app configurations to the members and the user configurations to the captain.
- When a member joins or rejoins the cluster, it checks the deployer for app updates. A member also checks for updates whenever it restarts. If any app updates are available, it pulls them from the deployer.
This means that if the deployer is down:
- You cannot push new configurations to the members.
- A member that joins or rejoins the cluster, or restarts, cannot pull the latest set of apps tarballs.
The implications of the deployer being down depend, therefore, on the state of the cluster members. These are the main cases to consider:
- The deployer is down but the set of cluster members remains stable.
- The deployer is down and a member attempts to join or rejoin the cluster.
The deployer is down but the set of cluster members remains stable
If no member joins or rejoins the cluster while the deployer is down, there are no important consequences to the functioning of the cluster. All member configurations remain in sync and the cluster continues to operate normally. The only consequence is the obvious one, that you cannot push new configurations to the members during this time.
The deployer is down and a member attempts to join or rejoin the cluster
In the case of a member attempting to join or rejoin the cluster while the deployer is down, there is the possibility that the apps configuration on that member will be out-of-sync with the apps configuration on the other cluster members:
- A new member will not be able to pull the current set of apps tarballs.
- A member that left the cluster before the deployer failed and rejoined the cluster after the deployer failed will not be able to pull any updates made to the apps portion of the bundle during the time that the member was down and the deployer was still running.
In these circumstances, the joining/rejoining member will have a different set of apps configurations from the other cluster members. Depending on the nature of the bundle changes, this can cause the joining member to behave differently from the other members. It can even lead to failure of the entire cluster. Therefore, you must make sure that this circumstance does not develop.
How to remedy deployer failure
Remediation is two-fold:
1. Prevent any member from joining or rejoining the cluster during deployer failure, unless you can be certain that the set of configurations on the joining member is identical to that on the other members (for example, if the rejoining member went down subsequent to the deployer failure).
2. Bring up a new deployer:
a. Configure a new deployer instance. See Configure the deployer.
b. Restore the contents of $SPLUNK_HOME/etc/shcluster to the new instance from backup.
c. If necessary, update the conf_deploy_fetch_url values on all search head cluster members.
d. Push the restored bundle contents to all members by running the splunk apply shcluster-bundle command.