
Configuration updates that the cluster replicates

The cluster automatically replicates certain runtime configuration changes that a user makes on one cluster member to all the other members.

Note: The cluster replicates configuration changes to all cluster members. The cluster's replication factor applies only to search artifact replication. See "Choose the replication factor for the search head cluster."

The changes that the cluster replicates

Replication of configuration changes operates under these constraints:

  • The cluster only replicates changes made at runtime, through specific configuration methods.
  • A whitelist determines the specific types of changes that the cluster replicates.

Configuration methods that trigger replication

The cluster replicates changes made through these methods:

  • Splunk Web
  • The Splunk CLI
  • The REST API
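
For example, a change made through the REST API replicates just as a Splunk Web change does. A minimal sketch that creates a saved search with curl, assuming a hypothetical member host and placeholder credentials:

# Create a saved search on one member through the REST API.
# The cluster then replicates it to the other members.
curl -k -u admin:changeme https://sh1.example.com:8089/servicesNS/admin/search/saved/searches \
    -d name="Errors in the last hour" \
    -d search="index=_internal log_level=ERROR earliest=-1h"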

The cluster does not replicate any configuration changes that you make manually, such as direct edits to configuration files.

For example, if a user creates a saved search in Splunk Web on a cluster member, the cluster replicates that saved search to all cluster members. However, if you, as the administrator, add a saved search by directly editing the savedsearches.conf file on one cluster member, the cluster does not replicate that saved search to the other cluster members. You must use the deployer to push that saved search to all cluster members.
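
As a sketch of that deployer workflow, assuming the edited file sits in an app directory that you have copied to $SPLUNK_HOME/etc/shcluster/apps on the deployer (the target URI and credentials are placeholders):

# Run on the deployer. The -target URI can point to any cluster member.
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme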

The replication whitelist

The cluster uses a whitelist to determine what changes to replicate. This whitelist is configured through the set of conf_replication_include attributes in the default version of server.conf, located in $SPLUNK_HOME/etc/system/default.

You can add or remove items from that list by editing the members' server.conf files under $SPLUNK_HOME/etc/system/local. If you change the whitelist, you must make the same changes on all cluster members.
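
For example, to stop replicating changes for one whitelisted item, you might override its attribute in the [shclustering] stanza on every member. A sketch, assuming you choose to exclude viewstates:

# $SPLUNK_HOME/etc/system/local/server.conf, identical on every member
[shclustering]
conf_replication_include.viewstates = false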

For a comprehensive list of items in the whitelist, consult the default version of server.conf. This is the approximate set of whitelisted items:

alert_actions 
authentication 
authorize 
datamodels 
event_renderers 
eventtypes 
fields 
html 
literals 
lookups 
macros 
manager 
models 
multikv 
nav 
panels 
passwd
passwords
props 
quickstart 
savedsearches 
searchbnf 
searchscripts 
segmenters 
tags 
times 
transforms 
transactiontypes 
ui-prefs 
user-prefs 
views 
viewstates 
workflow_actions

The cluster replicates changes to all files underlying the whitelist items. In addition to configuration files themselves, this includes dashboard and nav XML, lookup table files, data model JSON files, and so on. The cluster also replicates permissions stored in *.meta files.

These are examples of the types of files replicated for various whitelist items:

# escape-hatch HTML views
conf_replication_include.html = true
# lookup table files
conf_replication_include.lookups = true
# manager XML
conf_replication_include.manager = true
# datamodel JSON files
conf_replication_include.models = true
# nav XML
conf_replication_include.nav = true
# view XML
conf_replication_include.views = true

Note: The cluster does not replicate user search history. This is reflected in the default server.conf file, which includes the line, conf_replication_include.history = false. Changing that value to "true" has no effect and does not cause the cluster to replicate search history.

The changes that the cluster ignores

The cluster ignores configuration changes for any items that are not on the whitelist. Examples include index-time settings, such as those that define data inputs or indexes.

In addition, the cluster only replicates changes that are made through Splunk Web, the Splunk CLI, or the REST API. If you directly edit a configuration file, the cluster does not replicate it. Instead, you must use the deployer to distribute the file to all cluster members.

The cluster also does not replicate newly installed or upgraded apps.

For information on how to distribute such configuration changes through the deployer, see "Use the deployer to distribute apps and configuration updates."

Note: The deployer works in concert with cluster replication to migrate user (not app) configurations to the cluster members. The typical use case for this is to migrate user settings on an existing search head pool or standalone search head to the search head cluster. You put the user configurations that you want to migrate on the deployer. The deployer pushes them to the captain, which then replicates them to the other cluster members. For details, see "User configurations."

How replication works

When a user makes a configuration change to a cluster member search head, the member saves the change to a file, or set of files, locally and also sends the change to the captain. Approximately every five seconds, each cluster member contacts the captain and pulls any changes that have arrived since the last time it pulled changes. Each cluster member then applies the changes locally.
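
The pull interval corresponds to the conf_replication_period setting in the [shclustering] stanza of server.conf. This sketch shows the assumed default for orientation only; there is normally no reason to change it:

[shclustering]
# Interval, in seconds, at which each member pulls configuration
# changes from the captain.
conf_replication_period = 5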

For example, assume a user on one cluster member uses Splunk Web to create a new field extraction. Splunk Web saves the field extraction in local files on that member. The member then sends the file changes to the captain. When each cluster member next contacts the captain, it pulls the changes, along with any other recent changes, and applies them locally. Within a few seconds, all cluster members have the new field extraction.

Note: Files replicated and updated this way are semantically and functionally equivalent across the set of cluster members. The files might not be identical on all members, however. For example, depending on circumstances such as the order in which changes reach the captain, it is possible that an updated setting in props.conf could appear in different locations within the file on different members.

For details on the specifics of your cluster's configuration replication process, view the Search Head Clustering: Configuration Replication dashboard in the monitoring console. See Use the monitoring console to view search head cluster status.

When replication happens

The purpose of replication is to keep search-related configurations in sync across all cluster members. To ensure this happens, replication occurs at various times, depending on the state of the member:

  • Each active cluster member contacts the captain every five seconds and pulls any changes that have arrived since the last time it pulled changes.
  • When a new member joins the cluster, it contacts the captain and downloads a tarball containing the current set of replicated configurations, including all changes that have been made over the life of the cluster. It applies the tarball locally.
  • When a member rejoins the cluster, it contacts the captain and downloads the tarball, the same way that a new member does. Before you re-add the member, clean the instance by following the procedure in "Add a member that was previously removed from the cluster". A command sketch for re-adding a member follows this list.
  • During cluster recovery. See the next section for details.
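
As a sketch of the rejoin step referenced in the list above, run the add command on the cleaned instance, pointing it at any current member (the URI here is hypothetical):

# Run on the instance that is rejoining the cluster.
splunk add shcluster-member -current_member_uri https://sh2.example.com:8089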

View replication status

The monitoring console contains a wealth of information about the status of configuration replication. See Use the monitoring console to view search head cluster status and troubleshoot issues.

To see when the members last pulled a set of configuration changes from the captain, run the splunk show shcluster-status command from any member:

splunk show shcluster-status 

The output from this command includes, for each member, the field last_conf_replication. It indicates the last time that the member successfully pulled an updated set of configurations from the captain.
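
The output looks roughly like this abbreviated, hypothetical example; the exact field set and formatting vary by version:

 Captain:
     label : sh1
     mgmt_uri : https://sh1.example.com:8089
     ...

 Members:
     sh2
         label : sh2
         last_conf_replication : Fri Apr 21 10:02:33 2017
         mgmt_uri : https://sh2.example.com:8089
         status : Up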

For general information on the command, see Show cluster status.

How replication handles recovery issues

Normally, the cluster continually replicates changes to all cluster members. However, when the cluster recovers from a period of downtime, or when a downed member rejoins the cluster, additional replication activity must occur to ensure that all members share the same set of replicated changes, including any changes that occurred on individual members during the intervening down period.

For cases where one or more members went down unexpectedly, see "Handle failure of a search head cluster member". In some cases, the member can download the set of intervening changes and apply them. In other cases, you might need to run the splunk resync shcluster-replicated-config command to apply the tarball.
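
A sketch of that recovery step, run on the member that needs to resync (this overwrites the member's replicated configurations with the captain's current baseline):

splunk resync shcluster-replicated-config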

For information on how replication functions during recovery of an entire cluster, see "Recovery from a non-functioning cluster".
