Indexer cluster operations and SmartStore

Indexer clusters treat SmartStore indexes differently from non-SmartStore indexes in some fundamental ways:

  • The responsibility for high availability and disaster recovery of SmartStore warm buckets shifts from the cluster to the remote storage service. This shift offers the important advantage that warm bucket data is fully recoverable even if the cluster loses a set of peer nodes that equals or exceeds the replication factor in number.
  • The effect of the replication factor on SmartStore warm buckets differs from its effect on non-SmartStore warm buckets. In particular, the cluster uses the replication factor to determine how many copies of SmartStore warm bucket metadata it maintains. The cluster does not attempt to maintain multiple copies of the warm buckets themselves. In cases of warm bucket fixup, the cluster only needs to replicate the bucket metadata, not the entire contents of the bucket directories.

Replication factor and replicated bucket copies

A cluster treats hot buckets in SmartStore indexes the same way that it treats hot buckets in non-SmartStore indexes. It replicates the hot buckets in local storage across the replication factor number of peer nodes.

When a bucket in a SmartStore index rolls to warm and moves to remote storage, the remote storage service takes over responsibility for maintaining high availability of that bucket. The replication factor has no effect on how the remote storage service achieves that goal.

Once the bucket rolls to warm and is uploaded to remote storage, the peer nodes no longer attempt to maintain the replication factor number of local copies of the bucket. The peers do keep their copies of the bucket in local cache for some period of time, as determined by the cache eviction policy.
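
For orientation, here is a minimal configuration sketch showing where these behaviors are set. The index name, volume name, and remote storage path are placeholders for this example, not recommendations.

    # indexes.conf (deployed to the peer nodes through the manager node)
    [volume:remote_store]
    storageType = remote
    # Placeholder remote location; substitute your own bucket and path.
    path = s3://example-bucket/smartstore

    [my_smartstore_index]
    homePath   = $SPLUNK_DB/my_smartstore_index/db
    coldPath   = $SPLUNK_DB/my_smartstore_index/colddb
    thawedPath = $SPLUNK_DB/my_smartstore_index/thaweddb
    remotePath = volume:remote_store/$_index_name

    # server.conf (on the manager node; versions before 9.0 use mode = master)
    [clustering]
    mode = manager
    replication_factor = 3
    search_factor = 2

    # server.conf (on each peer node): local cache retention is governed by
    # the cache manager's eviction settings. lru is the default policy.
    [cachemanager]
    eviction_policy = lru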

However, even when copies of a bucket are no longer stored locally, the replication factor still controls the metadata that the cluster stores for that bucket. Peer nodes equal in number to the replication factor maintain metadata information about the bucket in their .bucketManifest files. Those peer nodes also maintain empty directories for the bucket, if a copy of the bucket does not currently reside in their local cache.

For example, if the cluster has a replication factor of 3, three peer nodes continue to maintain metadata information, along with populated or empty directories, for each bucket.

By maintaining metadata for each bucket on the replication factor number of peer nodes, the cluster simplifies the process of fetching the bucket when it is needed, even if some peer nodes fail in the interim.
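
To make this division of responsibility concrete, the following Python sketch models it. This is an illustrative model only: the class and function names are invented for this example and do not correspond to Splunk internals.

    # Illustrative model only: invented names, not Splunk internals.
    from dataclasses import dataclass, field

    @dataclass
    class Peer:
        name: str
        # bucket_id -> metadata entry (stands in for a .bucketManifest row)
        manifest: dict = field(default_factory=dict)
        # bucket_id -> set of locally cached files (empty set = empty directory)
        local_dirs: dict = field(default_factory=dict)

    def roll_to_warm(bucket_id, metadata, peers, replication_factor, remote_store):
        """Upload the bucket payload once; keep metadata on RF peers."""
        remote_store[bucket_id] = "full bucket contents"   # single master copy
        for peer in peers[:replication_factor]:
            peer.manifest[bucket_id] = metadata            # metadata on RF peers
            peer.local_dirs.setdefault(bucket_id, set())   # directory placeholder

    peers = [Peer(f"peer{i}") for i in range(5)]
    remote_store = {}
    roll_to_warm("b42", {"index": "main", "size_mb": 750}, peers, 3, remote_store)
    # Three peers hold metadata for b42; remote storage holds the one payload.
    print(sum("b42" in p.manifest for p in peers))  # -> 3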

Unlike with non-SmartStore indexes, a cluster can recover most of the data in its SmartStore indexes even if it loses a number of peer nodes that equals or exceeds the replication factor. In such a circumstance, the cluster can recover all of the SmartStore warm buckets, because those buckets are stored remotely and so are unaffected by peer node failure. The cluster will likely lose some of its SmartStore hot buckets, because those buckets are stored locally.

For example, in a cluster with a replication factor of 3, the cluster loses both hot and warm data from its non-SmartStore indexes if three or more peer nodes go offline simultaneously. However, the same cluster can lose any number of peer nodes, even all of its peer nodes temporarily, and still not lose any SmartStore warm data, because that data resides on remote storage.

Search factor, searchable copies, and primary copies

The search factor has the same effect on hot buckets in SmartStore indexes as it does on hot buckets in non-SmartStore indexes. That is, the search factor determines the number of copies of each replicated bucket that include the tsidx files and are thus searchable.

For SmartStore warm buckets, the search factor has no practical meaning. The remote storage holds the master copy of each bucket, and that copy always includes the set of tsidx files, so it is, by definition, searchable. When, in response to a search request, a peer node's cache manager fetches a copy of a bucket to the node's local cache, that copy is searchable to the degree that it needs to be for the specific search. As described in How the cache manager fetches buckets, the cache manager attempts to download only the bucket files needed for a particular search. Therefore, in some cases, the cache manager might not download the tsidx files.
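
As an illustration of needs-based fetching, here is a simplified Python sketch. The file roles reflect real bucket components, but the selection logic is invented for this example and is not the cache manager's actual algorithm.

    # Simplified sketch: fetch only the bucket files a given search requires.
    BUCKET_FILES = {
        "journal": "rawdata/journal.gz",  # compressed raw events
        "tsidx": "*.tsidx",               # time-series index files
        "bloom": "bloomfilter",           # bloom filter over indexed terms
    }

    def files_to_fetch(search_needs):
        """Return the subset of bucket files that this search requires."""
        wanted = []
        if search_needs.get("check_terms"):
            wanted.append(BUCKET_FILES["bloom"])    # cheap pre-filter first
        if search_needs.get("index_lookup"):
            wanted.append(BUCKET_FILES["tsidx"])
        if search_needs.get("raw_events"):
            wanted.append(BUCKET_FILES["journal"])
        return wanted

    # A search that reads raw events directly can skip the tsidx download.
    print(files_to_fetch({"check_terms": True, "raw_events": True}))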

Primary tags work the same with SmartStore indexes as with non-SmartStore indexes. Each bucket has exactly one peer node with a primary tag for that bucket. For each search, the peer node with the primary tag for a particular warm bucket is the one that searches that bucket, first fetching a copy of that bucket from remote storage when necessary.

How an indexer cluster handles SmartStore bucket fixup stemming from peer node failure

When a peer node fails, indexer clusters handle bucket fixup for SmartStore indexes in basically the same way as for non-SmartStore indexes, with a few differences. For a general discussion of peer node failure and the bucket-fixing activities that ensue, see What happens when a peer node goes down.

This section covers the differences in bucket fixup that occur with SmartStore. One advantage of SmartStore is that losing a number of peer nodes equal to or greater than the replication factor does not cause loss of warm bucket data. In addition, warm bucket fixup proceeds much faster for SmartStore indexes.

Bucket fixup with SmartStore

In the case of hot buckets, bucket fixup for SmartStore indexes proceeds the same way as for non-SmartStore indexes.

In the case of warm buckets, bucket fixup for SmartStore indexes proceeds much more quickly than it does for non-SmartStore indexes. This advantage occurs because SmartStore bucket fixup requires updates only to the .bucketManifest files on each peer node, without the need to stream the buckets themselves. In other words, bucket fixup replicates only bucket metadata.

Each peer node maintains a .bucketManifest file for each of its indexes. The file contains metadata for each bucket copy that the peer node maintains. When the cluster replicates a bucket from a source peer to a target peer, the target peer adds metadata for that bucket copy to its .bucketManifest file.

The buckets themselves are not replicated during SmartStore fixup because the master copy of each bucket remains present on remote storage and is downloaded to the peer nodes' local storage only when needed for a search.

During SmartStore replication, the target peer nodes also create an empty directory for each warm bucket that has metadata in their .bucketManifest files.

As with non-SmartStore indexes, the SmartStore bucket fixup process also ensures that exactly one peer node has a primary tag for each bucket.
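
Continuing the illustrative Python model from earlier (again, invented names, not Splunk internals), warm bucket fixup in this model copies a manifest entry to new target peers, creates an empty directory on each, and ensures one holder carries the primary tag. The sketch assumes at least one surviving peer still lists the bucket in its manifest.

    # Illustrative fixup: metadata only; the payload stays on remote storage.
    def fix_up_warm_bucket(bucket_id, surviving_peers, replication_factor, primaries):
        holders = [p for p in surviving_peers if bucket_id in p.manifest]
        source = holders[0]                    # any surviving holder will do
        for peer in surviving_peers:
            if len(holders) >= replication_factor:
                break
            if bucket_id not in peer.manifest:
                peer.manifest[bucket_id] = source.manifest[bucket_id]
                peer.local_dirs.setdefault(bucket_id, set())  # empty directory
                holders.append(peer)
        # Exactly one peer must hold the primary tag for the bucket.
        if primaries.get(bucket_id) not in holders:
            primaries[bucket_id] = holders[0]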

Fixup when the number of nodes that fail is less than the replication factor

For both SmartStore and non-SmartStore indexes, the cluster can fully recover its valid and complete states through the fixup process. The fixup process for SmartStore warm buckets requires only the replication of metadata internal to the cluster, so it proceeds much more quickly.

Fixup when the number of nodes that fail equals or exceeds the replication factor

In contrast to non-SmartStore indexes, a cluster can recover all the warm bucket data for its SmartStore indexes even when the number of failed nodes equals or exceeds the replication factor. As with non-SmartStore indexes, though, there will likely be some data loss associated with hot buckets.

The cluster needs to recover only missing warm bucket metadata because the warm buckets themselves remain present on remote storage.

To recover any lost warm bucket metadata, the manager node uses its comprehensive list of all bucket IDs. It compares those IDs with the IDs in the set of .bucketManifest files on the peer nodes, looking for IDs that are present in its list but are not present in any .bucketManifest file. For all such bucket IDs, the manager assigns peer nodes to query the remote storage for the information necessary to populate metadata for those buckets in their .bucketManifest files. Additional fixup occurs to meet the replication factor requirements for the metadata and to assign primary tags.
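
The comparison that the manager performs can be sketched as a set difference. As before, this is illustrative Python with invented names, not Splunk code; the call to remote storage is a stand-in.

    # Illustrative sketch of the manager's recovery pass.
    from itertools import cycle

    def recover_missing_metadata(manager_bucket_ids, peers, remote_store):
        # Union of bucket IDs that any surviving peer still lists.
        known = set()
        for peer in peers:
            known.update(peer.manifest)
        missing = set(manager_bucket_ids) - known
        # Spread the re-population work across the surviving peers.
        assignee = cycle(peers)
        for bucket_id in missing:
            peer = next(assignee)
            # Stand-in for querying remote storage for bucket metadata.
            peer.manifest[bucket_id] = {"recovered_from": "remote",
                                        "payload": remote_store[bucket_id]}
        return missing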

If the manager node goes down during this recovery process, the warm bucket metadata is still recoverable. When the cluster regains a manager node, the manager initiates bootstrapping to recover all warm bucket metadata. For information on bootstrapping, see Bootstrap SmartStore indexes.
