What happens when a peer node comes back up
A peer node can go down either intentionally (through the CLI offline command) or unintentionally (for example, when its server crashes). When the peer goes down, the cluster undertakes remedial activities, known as bucket fixing, as described in the topic "What happens when a peer node goes down." This topic describes what happens if the peer later returns to the cluster.
When a peer comes back up, it starts sending heartbeats to the master. The master recognizes the peer and adds it back into the cluster. If the peer still has intact bucket copies from its earlier time in the cluster, the master adds those copies to the bucket counts that it maintains. The master also rebalances the cluster, which can result in any searchable bucket copies on the peer being assigned primary status. For information on rebalancing, see "Rebalance the indexer cluster primary buckets."
Note: When the peer connects with the master, it checks whether it already has the current version of the configuration bundle. If the bundle has changed while the peer was down, the peer downloads the latest configuration bundle, validates it locally, and restarts. The peer rejoins the cluster only if bundle validation succeeds.
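The rejoin handshake described in the note can be sketched as a small decision procedure. This is an illustrative model only, not Splunk's implementation; the function names and the use of a bundle identifier for comparison are assumptions made for the sketch.

```python
# Illustrative sketch of the peer's configuration-bundle check on rejoin.
# All names here are hypothetical, not Splunk internals.

def rejoin(peer_bundle_id, master_bundle_id, download, validate, restart):
    """Return True if the peer rejoins the cluster, False if it is refused."""
    if peer_bundle_id != master_bundle_id:
        bundle = download(master_bundle_id)  # fetch the latest bundle
        if not validate(bundle):             # validate it locally
            return False                     # validation failed: no rejoin
        restart(bundle)                      # restart with the new bundle
    return True                              # bundle current or updated: rejoin
```

The key point the sketch captures is that validation gates the rejoin: a peer with a stale bundle that fails local validation stays out of the cluster.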
How the master counts buckets
To understand what happens when a peer returns to the cluster, you must first understand how the master tracks bucket copies.
The master maintains counts for each bucket in the cluster. For each bucket, it knows:
- how many copies of the bucket exist on the cluster.
- how many searchable copies of the bucket exist on the cluster.
The master also ensures that there's always exactly one primary copy of a given bucket.
With multisite clusters, the master keeps track of copies and searchable copies for each site, as well as for the cluster as a whole. It also ensures that each site with an explicit search factor has exactly one primary copy of each bucket.
For a single-site cluster, a valid and complete cluster has:
- Exactly one primary copy of each bucket.
- A full set of searchable copies for each bucket, matching the search factor.
- A full set of copies (searchable and non-searchable) for each bucket, matching the replication factor.
For a multisite cluster, a valid and complete cluster has:
- Exactly one primary copy of each bucket for each site with an explicit search factor.
- A full set of searchable copies for each bucket, matching the search factor for each site as well as for the cluster as a whole.
- A full set of copies (searchable and non-searchable) for each bucket, matching the replication factor for each site as well as for the cluster as a whole.
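The single-site conditions above amount to a simple per-bucket invariant check. The following is a toy model of that bookkeeping, under stated assumptions: the field and function names are illustrative, not Splunk internals, and "full set" is modeled as meeting or exceeding the relevant factor (extra copies do not make a cluster incomplete).

```python
# Toy model of the master's per-bucket counts for a single-site cluster.
# Names and structure are assumptions for illustration, not Splunk code.
from dataclasses import dataclass

@dataclass
class BucketCounts:
    copies: int      # all copies, searchable and non-searchable
    searchable: int  # searchable copies
    primaries: int   # primary copies

def cluster_valid_and_complete(buckets, replication_factor, search_factor):
    """True when every bucket has exactly one primary, at least
    search_factor searchable copies, and at least replication_factor
    copies in total."""
    return all(
        b.primaries == 1
        and b.searchable >= search_factor
        and b.copies >= replication_factor
        for b in buckets.values()
    )
```

A multisite version would repeat the same checks per site (with one primary per site that has an explicit search factor) as well as for the cluster as a whole.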
Bucket-fixing and the copies on the peer
When a peer goes down, the master directs the remaining peers in bucket-fixing activities. Eventually, if the bucket fixing is successful, the cluster returns to a complete state.
If the peer later returns to the cluster, the master adds its bucket copies to its counts (assuming that the copies were not destroyed by whatever problem caused the peer to go down in the first place). The consequences vary somewhat depending on whether bucket-fixing activity has completed by the time the peer comes back up.
If bucket-fixing is finished
If bucket-fixing has already completed and the cluster is in a complete state, the copies from the returned peer are just extras. For example, assume the replication factor is 3 and the cluster has fixed all the buckets so that there are again three copies of each bucket in the cluster, including the ones that the downed peer was maintaining before it went down. When the downed peer then comes back up with its copies intact, the master just adds those copies to the count, so that instead of three copies, there will be four copies of some buckets. Similarly, there could be an excess of searchable bucket copies if the returned peer was maintaining some searchable bucket copies. These excess copies might come in handy later, if another peer maintaining copies of some of those buckets goes down.
If bucket-fixing is still underway
If the cluster is still replacing the copies that were lost when the peer went down, the return of the peer can curtail the bucket-fixing. Once the master has added the copies on the returned peer to its counts, it knows that the cluster is complete and valid, and so it will no longer direct the other peers to make copies of those buckets. However, any peers that are currently in the middle of some bucket-fixing activity, such as copying buckets or making copies searchable, will complete their work on those copies. Since bucket-fixing is time-intensive, it is worthwhile to bring a downed peer back online as soon as possible, particularly if the peer was maintaining a large number of bucket copies.
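The curtailment described above can be illustrated with a toy count model (not Splunk code; bucket names, counts, and the helper function are invented for the example). Once the returned peer's surviving copies are added back into the master's counts, buckets that again meet the replication factor drop out of the fix-up queue.

```python
# Toy illustration of why a returning peer curtails bucket fixing.
# All names and numbers are hypothetical.

def pending_fixups(counts, replication_factor):
    """Buckets that still need additional copies."""
    return {b for b, n in counts.items() if n < replication_factor}

counts = {"b1": 2, "b2": 3, "b3": 2}       # counts after a peer went down (RF=3)
returned_peer_copies = {"b1": 1, "b3": 1}  # intact copies on the returning peer

before = pending_fixups(counts, 3)         # b1 and b3 still being fixed
for bucket, n in returned_peer_copies.items():
    counts[bucket] += n                    # master adds the copies to its counts
after = pending_fixups(counts, 3)          # nothing left to fix
```

In this sketch, two buckets needed fixing before the peer returned and none afterward, which is why bringing a downed peer back quickly can save substantial bucket-fixing work.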
Remove excess bucket copies
If the returning peer results in extra copies of some buckets, you can save disk space by removing the extra copies. See "Remove excess bucket copies from the indexer cluster."
This documentation applies to the following versions of Splunk® Enterprise: 6.1, 6.1.1, 6.1.2, 6.1.3, 6.1.4, 6.1.5, 6.1.6, 6.1.7, 6.1.8, 6.1.9, 6.1.10, 6.1.11, 6.1.12, 6.1.13, 6.1.14, 6.2.0, 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.2.5, 6.2.6, 6.2.7, 6.2.8, 6.2.9, 6.2.10, 6.2.11, 6.2.12, 6.2.13, 6.2.14, 6.2.15, 6.3.0, 6.3.1, 6.3.2, 6.3.3, 6.3.4, 6.3.5, 6.3.6, 6.3.7, 6.3.8, 6.3.9, 6.3.10, 6.3.11, 6.3.12, 6.3.13, 6.3.14, 6.4.0, 6.4.1, 6.4.2, 6.4.3, 6.4.4, 6.4.5, 6.4.6, 6.4.7, 6.4.8, 6.4.9, 6.4.10, 6.4.11, 6.5.0, 6.5.1, 6.5.1612 (Splunk Cloud only), 6.5.2, 6.5.3, 6.5.4, 6.5.5, 6.5.6, 6.5.7, 6.5.8, 6.5.9, 6.5.10, 6.6.0, 6.6.1, 6.6.2, 6.6.3, 6.6.4, 6.6.5, 6.6.6, 6.6.7, 6.6.8, 6.6.9, 6.6.10, 6.6.11, 6.6.12, 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.0.5, 7.0.6, 7.0.7, 7.0.8, 7.1.0, 7.1.1, 7.1.2, 7.1.3, 7.1.4, 7.1.5, 7.1.6, 7.2.0, 7.2.1, 7.2.2, 7.2.3, 7.2.4