Restart the entire indexer cluster or a single peer node
This topic describes how to restart the entire indexer cluster (rarely necessary) or a single peer node.
When you restart the manager node or a peer node, the manager rebalances the primary bucket copies across the set of peers, as described in Rebalance the indexer cluster primary buckets.
For information on configuration changes that require a restart, see Restart after modifying server.conf? and Restart or reload after configuration bundle changes?.
Restart the entire cluster
You ordinarily do not need to restart the entire cluster. If you change a manager's configuration, you restart just the manager. If you update a set of common peer configurations, the manager restarts just the set of peers, and only when necessary, as described in Update common peer configurations.
If, for any reason, you do need to restart both the manager and the peer nodes:
1. Restart the manager node, as you would any instance. For example, run this CLI command on the manager:
splunk restart
2. Once the manager restarts, wait until all the peers re-register with the manager, and the manager node dashboard indicates that all peers and indexes are searchable. See View the manager node dashboard.
3. Restart the peers as a group, by running this CLI command on the manager:
splunk rolling-restart cluster-peers
See Perform a rolling restart of an indexer cluster.
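Taken together, a full cluster restart might look like the following sketch, run from the manager's $SPLUNK_HOME/bin directory. The splunk show cluster-status command is one way to confirm from the CLI that the peers have re-registered and all data is searchable; the manager node dashboard, described in step 2, provides the same information:
splunk restart
# After the manager comes back up, confirm that all peers have re-registered
# and all indexes are searchable before proceeding.
splunk show cluster-status
splunk rolling-restart cluster-peers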
If you need to restart the search head, you can do so at any time, as long as the rest of the cluster is running.
Restart a single peer
You might occasionally need to restart a single peer, for example, if you change certain configurations on only that peer.
Do not use the CLI splunk restart command to restart the peer, for the reasons described later in this section. Instead, there are two ways that you can safely restart a single peer:
- Use Splunk Web (Settings > Server Controls).
- Run the command splunk offline, followed by splunk start.
When you use Splunk Web or the splunk offline/splunk start commands to restart a peer, the manager waits 60 seconds (by default) before assuming that the peer has gone down for good. This allows sufficient time for the peer to come back online and prevents the cluster from performing unnecessary remedial activities.
Note: The actual time that the manager waits is determined by the value of the manager's restart_timeout attribute in server.conf. The default for this attribute is 60 seconds. If you need the manager to wait for a longer period, you can change the restart_timeout value, as described in Extend the restart period.
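For example, to give a peer ten minutes to come back before the manager takes remedial action, you might set the following in the manager's server.conf. The 600-second value here is purely illustrative; choose a value appropriate to your environment, and note that edits to server.conf typically require a restart of the manager to take effect:
[clustering]
restart_timeout = 600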
The splunk offline/splunk start restart method has an advantage over the Splunk Web method in that it waits for in-progress searches to complete before stopping the peer. In addition, since it involves a two-step process, you can use it if you need the peer to remain down briefly while you perform some maintenance.
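For example, a brief maintenance pass on a single peer might look like the following sketch, run from that peer's $SPLUNK_HOME/bin directory. The maintenance step is a placeholder for whatever work you need to perform, and it must fit within the manager's restart_timeout window (or you must extend that window first):
splunk offline
# ... perform maintenance on the peer while it is down ...
splunk start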
For information on the splunk offline command, read Take a peer offline.
Caution: Do not use the splunk restart command to restart the peer. If you use the splunk restart command, the manager will not be aware that the peer is restarting. Instead, after waiting a default 60 seconds for the peer to send a heartbeat, the manager will initiate the usual remedial actions that occur when a peer goes down, such as adding its bucket copies to other peers. The actual time the manager waits is determined by the manager's heartbeat_timeout attribute. It is inadvisable to change its default value of 60 seconds without first consulting Splunk Support.