Restart the entire indexer cluster or a single peer node
This topic describes how to restart the entire indexer cluster (unusual) or a single peer node.
When you restart a master or peer node, the master rebalances the primary bucket copies across the set of peers, as described in Rebalance the indexer cluster primary buckets.
For information on configuration changes that require a restart, see Restart after modifying server.conf? and Restart or reload after configuration bundle changes?.
Restart the entire cluster
You ordinarily do not need to restart the entire cluster. If you change a master's configuration, you restart just the master. If you update a set of common peer configurations, the master restarts just the set of peers, and only when necessary, as described in Update common peer configurations.
If, for any reason, you do need to restart both the master and the peer nodes:
1. Restart the master node, as you would any instance. For example, run this CLI command on the master:
splunk restart
2. Once the master restarts, wait until all the peers re-register with the master, and the master dashboard indicates that all peers and indexes are searchable. See View the master dashboard.
3. Restart the peers as a group, by running this CLI command on the master:
splunk rolling-restart cluster-peers
See Use rolling restart.
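The three-step sequence above can be sketched as a small shell runbook. This is a hedged sketch, not an official script: SPLUNK_HOME and the DRY_RUN guard are illustrative assumptions; only the splunk restart and splunk rolling-restart cluster-peers commands come from this topic, and step 2 remains a manual check.

```shell
#!/bin/sh
# Sketch of the full-cluster restart sequence described above.
# SPLUNK_HOME and the DRY_RUN guard are illustrative assumptions;
# with DRY_RUN=1 (the default here) the commands are only echoed.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
DRY_RUN=${DRY_RUN:-1}

run() {
    echo "+ $*"                 # show each command before running it
    [ "$DRY_RUN" = "1" ] || "$@"
}

# Step 1: restart the master node as you would any instance.
run "$SPLUNK_HOME/bin/splunk" restart

# Step 2: wait until all peers re-register and the master dashboard
# shows all peers and indexes as searchable (manual check; see
# "View the master dashboard").

# Step 3: restart the peers as a group, from the master.
run "$SPLUNK_HOME/bin/splunk" rolling-restart cluster-peers
```

Set DRY_RUN=0 on the master to execute the commands for real; the dry-run default keeps the sketch safe to test.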
If you need to restart the search head, you can do so at any time, as long as the rest of the cluster is running.
Restart a single peer
You might occasionally need to restart a single peer; for example, if you change certain configurations on only that peer.
Do not use the CLI splunk restart command to restart the peer, for the reasons described later in this section. Instead, there are two ways that you can safely restart a single peer:
- Use Splunk Web (Settings > Server Controls).
- Run the command splunk offline, followed by splunk start.
When you use Splunk Web or the splunk offline/splunk start commands to restart a peer, the master waits 60 seconds (by default) before assuming that the peer has gone down for good. This allows sufficient time for the peer to come back online and prevents the cluster from performing unnecessary remedial activities.
Note: The actual time that the master waits is determined by the value of the master's restart_timeout attribute in server.conf. The default for this attribute is 60 seconds. If you need the master to wait for a longer period, you can change the restart_timeout value, as described in Extend the restart period.
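For example, a master's server.conf might extend the restart period like this (a sketch; the 180-second value is an arbitrary illustration):

```ini
# server.conf on the master node
[clustering]
mode = master
# Wait up to 180 seconds (instead of the default 60) for a
# restarting peer to come back online before taking remedial action.
restart_timeout = 180
```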
The splunk offline/splunk start restart method has an advantage over the Splunk Web method in that it waits for in-progress searches to complete before stopping the peer. In addition, since it involves a two-step process, you can use it if you need the peer to remain down briefly while you perform some maintenance.
For information on the splunk offline command, read Take a peer offline.
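The offline/start two-step method might look like this when run on the peer itself. Again a hedged sketch: SPLUNK_HOME and the DRY_RUN guard are illustrative assumptions; only splunk offline and splunk start come from this topic.

```shell
#!/bin/sh
# Sketch of the two-step single-peer restart described above.
# SPLUNK_HOME and the DRY_RUN guard are illustrative assumptions;
# with DRY_RUN=1 (the default here) the commands are only echoed.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}
DRY_RUN=${DRY_RUN:-1}

run() {
    echo "+ $*"
    [ "$DRY_RUN" = "1" ] || "$@"
}

# Take the peer offline; this method waits for in-progress searches
# to complete before stopping the peer.
run "$SPLUNK_HOME/bin/splunk" offline

# Optional maintenance window: the peer stays down until you start it,
# so finish within the master's restart period (60 seconds by default).

# Bring the peer back up.
run "$SPLUNK_HOME/bin/splunk" start
```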
Caution: Do not use the splunk restart command to restart the peer. If you use the splunk restart command, the master will not be aware that the peer is restarting. Instead, after waiting a default 60 seconds for the peer to send a heartbeat, the master will initiate the usual remedial actions that occur when a peer goes down, such as adding its bucket copies to other peers. The actual time the master waits is determined by the master's heartbeat_timeout attribute. It is inadvisable to change its default value of 60 seconds without consultation.
This documentation applies to the following versions of Splunk® Enterprise: 6.3.0, 6.3.1, 6.3.2, 6.3.3, 6.3.4, 6.3.5, 6.3.6, 6.3.7, 6.3.8, 6.3.9, 6.3.10, 6.3.11, 6.3.12, 6.3.13, 6.3.14, 6.4.0, 6.4.1, 6.4.2, 6.4.3, 6.4.4, 6.4.5, 6.4.6, 6.4.7, 6.4.8, 6.4.9, 6.4.10, 6.4.11, 6.5.0, 6.5.1, 6.5.1612 (Splunk Cloud only), 6.5.2, 6.5.3, 6.5.4, 6.5.5, 6.5.6, 6.5.7, 6.5.8, 6.5.9, 6.5.10, 6.6.0, 6.6.1, 6.6.2, 6.6.3, 6.6.4, 6.6.5, 6.6.6, 6.6.7, 6.6.8, 6.6.9, 6.6.10, 6.6.11, 6.6.12, 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.0.5, 7.0.6, 7.0.7, 7.0.8, 7.0.9, 7.0.10, 7.0.11, 7.1.0, 7.1.1, 7.1.2, 7.1.3, 7.1.4, 7.1.5, 7.1.6, 7.1.7, 7.1.8, 7.2.0, 7.2.1, 7.2.2, 7.2.3, 7.2.4, 7.2.5, 7.2.6, 7.2.7, 7.3.0