Back up and restore search head cluster settings

Search head clusters can usually recover from member failures without the need to manually restore configuration settings. Several of the other topics in this chapter provide guidance on recovering from various sorts of member failure. In particular, see:

In a functioning search head cluster, each member continually replicates changes to its state to the other members. This makes it possible to rebuild your cluster even if only one member remains intact.

However, to deal with catastrophic failure of a search head cluster, such as the failure of a data center, you can periodically back up the cluster state, so that you can later restore that state to a new or standby cluster, if necessary.

In addition, to deal with failure of the deployer, you can back up and restore the deployer's configuration bundle.

As with any backup-and-recovery scheme, test these procedures before you need to rely on them.

Back up the search head cluster settings

A backup of all search head cluster configurations requires two backups:

  • The search head cluster state
  • The deployer's configuration bundle

Back up the search head cluster state

On a cluster member, preferably the current captain, complete the following steps. A consolidated example script follows the list.

  1. Back up the most recent set of replicated configurations, located at $SPLUNK_HOME/var/run/splunk/snapshot/$LATEST_TIME-$CHECKSUM.bundle.
  2. Back up the $SPLUNK_HOME/etc/system/local/server.conf file.
    Note: The only setting from this file that you will use when restoring from this backup is the id setting under the [shclustering] stanza. This setting is a unique identifier for the cluster, shared by all cluster members.
  3. Back up the KV store:
    splunk backup kvstore
    

    This command creates an archive file in the $SPLUNK_HOME/var/lib/splunk/kvstorebackup directory. See Back up the KV store in the Admin Manual.

  4. Create a tarball containing the set of backups. This is your search head cluster configuration backup. Store it somewhere safe.
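The script below is a minimal sketch that consolidates these backup steps. It is not an official Splunk utility: it assumes that SPLUNK_HOME points at your installation (defaulting to /opt/splunk here), that the snapshot directory contains at least one .bundle file, and that /tmp is an acceptable staging area. Adjust the paths for your environment.

    # Minimal backup sketch; run on a cluster member, ideally the captain.
    # SPLUNK_HOME and the staging paths below are assumptions; adjust as needed.
    SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
    STAGING=/tmp/shc-backup
    mkdir -p "$STAGING"

    # Step 1: copy the most recent replicated configuration snapshot.
    LATEST_BUNDLE=$(ls -t "$SPLUNK_HOME"/var/run/splunk/snapshot/*.bundle | head -1)
    cp "$LATEST_BUNDLE" "$STAGING/"

    # Step 2: copy server.conf, which holds the [shclustering] id setting.
    cp "$SPLUNK_HOME/etc/system/local/server.conf" "$STAGING/"

    # Step 3: back up the KV store, then copy the resulting archive directory.
    "$SPLUNK_HOME/bin/splunk" backup kvstore
    cp -r "$SPLUNK_HOME/var/lib/splunk/kvstorebackup" "$STAGING/"

    # Step 4: create a single tarball containing the set of backups.
    tar -czf /tmp/shc-config-backup.tar.gz -C "$STAGING" .

Copy the resulting tarball, along with the deployer bundle backup described next, to a safe location outside the cluster.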

Back up the deployer's configuration bundle

Back up the deployer's $SPLUNK_HOME/etc/shcluster directory. This directory contains the configuration bundle that gets deployed to all cluster members.
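For example, an archive of that directory is sufficient. This sketch assumes a default SPLUNK_HOME of /opt/splunk and writes the archive to /tmp; adjust both for your environment.

    # Run on the deployer: archive the configuration bundle directory.
    SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
    tar -czf /tmp/shcluster-bundle-backup.tar.gz -C "$SPLUNK_HOME/etc" shcluster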

Restore the search head cluster settings

You can restore the settings to either a new cluster or an existing standby cluster. The procedure documented here assumes that you are restoring to a standby cluster, but the main points of the procedure apply to a new cluster as well.

To restore a cluster's settings, restore two sets of configurations:

  • The deployer's configuration bundle
  • The search head cluster state

All members of both the old and new clusters, along with their deployers, must be running the same version of Splunk Enterprise, down to the maintenance level.

Restore the deployer's configuration bundle

This procedure assumes that you are restoring to a new deployer. If the old deployer is intact, you can reuse it by just pointing the new cluster members to it.

A deployer can service only a single cluster. The old cluster must be permanently inactive before you can use the existing deployer with the new cluster.

  1. Stop all members of the standby search head cluster.
  2. Copy the backup of the configuration bundle to the new deployer's $SPLUNK_HOME/etc/shcluster directory, overwriting the existing contents, if any.
  3. Run the splunk apply shcluster-bundle command on the deployer:
    splunk apply shcluster-bundle -answer-yes -target <URI>:<management_port> -auth <username>:<password>
    

    See Push the configuration bundle.

Do not restart the standby cluster members at this point.
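As a rough illustration of steps 2 and 3, the sketch below assumes the deployer bundle was archived as /tmp/shcluster-bundle-backup.tar.gz (as in the backup example earlier), a default SPLUNK_HOME, and placeholder credentials and member URI; substitute your own values.

    # Run on the new deployer.
    SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}

    # Step 2: restore the backed-up bundle into $SPLUNK_HOME/etc/shcluster,
    # overwriting any files with the same names.
    tar -xzf /tmp/shcluster-bundle-backup.tar.gz -C "$SPLUNK_HOME/etc"

    # Step 3: push the configuration bundle. <member_uri>, <mgmt_port>, and the
    # credentials are placeholders.
    "$SPLUNK_HOME/bin/splunk" apply shcluster-bundle -answer-yes \
        -target https://<member_uri>:<mgmt_port> -auth admin:<password>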

Restore the search head cluster state

  1. Confirm that all members of the standby search head cluster are still stopped.
  2. Untar the set of backups to a temporary location.
  3. On each standby cluster member:
    1. Restore the replicated configurations:
      1. Move the replicated bundle $LATEST_TIME-$CHECKSUM.bundle from the temporary location to $SPLUNK_HOME/etc.
      2. Untar $LATEST_TIME-$CHECKSUM.bundle.

        You must be working in the $SPLUNK_HOME/etc directory when you untar $LATEST_TIME-$CHECKSUM.bundle. The files in $LATEST_TIME-$CHECKSUM.bundle are relative to $SPLUNK_HOME/etc.

      3. To confirm that the files untarred properly, check for the presence of files in their proper location; for example, look for $SPLUNK_HOME/etc/system/replication/ops.json.
    2. Restore the KV store configurations. Follow the instructions in Restore the KV store data in the Admin Manual.
    3. Restore the search head cluster id field. Edit $SPLUNK_HOME/etc/system/local/server.conf and change the id setting in the [shclustering] stanza to the value from the backup.
  4. Start all cluster members.
  5. Wait a few minutes for captain election to complete and for the deployer configuration bundle to be applied.
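The per-member restore in step 3 can be sketched as follows. It assumes the cluster backup tarball was untarred to /tmp/shc-restore, that each member is still stopped, and a default SPLUNK_HOME; the bundle filename and id value are placeholders. The KV store restore (step 3.2) follows the separate procedure in the Admin Manual and is not shown.

    # Run on each stopped standby cluster member.
    SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
    RESTORE_DIR=/tmp/shc-restore   # temporary location holding the untarred backup set

    # Step 3.1: move the replicated bundle into $SPLUNK_HOME/etc and untar it there.
    # The bundle contents are relative to $SPLUNK_HOME/etc, so untar from that directory.
    cp "$RESTORE_DIR"/*.bundle "$SPLUNK_HOME/etc/"
    cd "$SPLUNK_HOME/etc" && tar -xf *.bundle

    # Confirm that the files untarred into the expected location.
    ls "$SPLUNK_HOME/etc/system/replication/ops.json"

    # Step 3.3: in $SPLUNK_HOME/etc/system/local/server.conf, set the cluster id
    # from the backed-up server.conf, for example:
    #   [shclustering]
    #   id = <id value copied from the backup>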