Put a peer into detention
When a peer is in detention, its functionality is reduced. It stops replicating data from other peer nodes and, depending on the type of detention, stops indexing most or all data. Whether it continues to participate in searches also depends on the type of detention.
A peer can enter detention either automatically, in response to a low level of free disk space, or manually.
When a peer enters the detention state automatically, it:
- stops indexing all data, internal and external.
- stops replicating data from other peer nodes.
- stops participating in searches.
The peer node enters the detention state automatically when it runs low on disk space. The setting that controls automatic detention is the minFreeSpace attribute in server.conf. The default value is 5000, or 5GB, meaning that the peer enters detention when it has less than 5GB of free disk space.
The peer automatically leaves the detention state when its free disk space again exceeds the minFreeSpace value.
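As a minimal sketch, the free-space threshold lives in the peer's server.conf (this assumes the standard [diskUsage] stanza and minFreeSpace attribute; the value is in megabytes):

```ini
# server.conf on the peer node
[diskUsage]
# The peer enters automatic detention when free disk space
# on any monitored partition drops below this value (in MB).
minFreeSpace = 5000
```

Raising this value makes the peer enter detention earlier; lowering it lets the peer run closer to a full disk before detention triggers.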
When a peer enters the detention state manually, it:
- stops replicating data from other peer nodes.
- optionally stops accepting data from the ports that consume external data, causing the peer to no longer index most types of external data.
- continues to index internal data and stream the data to target peer nodes.
- continues to participate in searches.
When you manually put a peer into the detention state, it remains in detention until you remove it from detention. Manual detention persists through peer restart.
The effect of not accepting data from external data ports
When setting a peer to manual detention, you can optionally direct the peer to stop accepting data from the ports that consume incoming data.
This brings a halt to the indexing of most external data, including
- TCP inputs
- UDP inputs
- HEC inputs
- data sent from a forwarder to the peer through its receiving port
Attempts to send HEC data to the peer return the error "HTTPSTATUS_NOT_FOUND, 404".
However, external data can continue to enter the peer through these methods:
- scripted inputs
- file and folder monitoring
In addition, the indexer can continue to route incoming data to another Splunk Enterprise instance or to a third-party system.
Here are some of the key use cases for manual detention:
- To bring a near halt to the growth of disk usage on the peer, for example, if the peer is close to running out of space.
- To partially decommission an old peer, making it available only for searches on existing data.
- To stop a troublesome peer from handling external or replicated data, while keeping the peer available for diagnostics.
- To force new data to go to the new peers, when you add new peers to a cluster.
- Note: You can also use data rebalancing to move data to new peers. See Rebalance the cluster.
- To slow the growth of disk usage on a peer that belongs to a pre-approved firewall exception list and needs to continue receiving incoming data. For this use case, you can configure the peer to stop replication activity but continue to consume external data.
Put a peer into manual detention
To put a peer into detention, run the CLI command splunk edit cluster-config with the -manual_detention parameter. You can set the -manual_detention parameter to one of several values:
- on. The peer enters detention and stops accepting data from the ports that consume incoming data: the receiving TCP, UDP, and HTTP Event Collector ports. This has the effect of halting indexing of most external data. The peer continues to index internal data. The peer stops replicating data from other peer nodes.
- on_ports_enabled. The peer enters detention, but the ports stay open to accept incoming data. The peer continues to index both external and internal data. The peer stops replicating data from other peer nodes.
- off. The peer is not in detention. This is the default.
You can run this command from the peer itself or from the manager node.
Caution: The peer must be in the Up status before you put it into detention. For information on how to determine the status of a peer, see View the manager node dashboard.
To run the command from the peer:
splunk edit cluster-config -auth <username>:<password> -manual_detention [off|on|on_ports_enabled]
To run the command from the manager node:
splunk edit cluster-config -auth <username>:<password> -peers <peer_guid1>,<peer_guid2>,... -manual_detention [off|on|on_ports_enabled]
Note the following:
-peers specifies the set of peers that you want to put in detention. Identify each peer by its GUID. When you run the command from the manager node, you must include this parameter.
Take a peer out of manual detention
To take a peer out of detention:
splunk edit cluster-config -auth <username>:<password> -manual_detention off
Use a REST endpoint to put the peer into manual detention
You can use the REST endpoint cluster/peer/control/control/set_manual_detention to put a peer into manual detention.
Note: A previous endpoint, cluster/peer/control/control/set_detention_override, has been deprecated. Use cluster/peer/control/control/set_manual_detention in its place.
See the REST API documentation for cluster/peer/control/control/set_manual_detention.
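As a sketch, you can invoke the endpoint over the peer's management port with curl. The host, port, and credentials below are placeholders, and the manual_detention argument name is an assumption; confirm the exact parameters in the REST API documentation.

```shell
# Hypothetical invocation; replace the host, port, and credentials
# with your own values. The manual_detention argument name is an
# assumption -- verify it against the REST API reference.
curl -k -u admin:changeme \
    https://peer01.example.com:8089/services/cluster/peer/control/control/set_manual_detention \
    -d manual_detention=on
```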
View the detention state
You can view the states, detention-related or otherwise, of all peers from the manager node dashboard. See View the manager node dashboard.
These are the possible detention states:
- AutomaticDetention. Peer entered detention automatically.
- ManualDetention. Peer entered detention manually and no longer consumes external data.
- ManualDetention-PortsEnabled. Peer entered detention manually and continues to consume external data.
You can also use the DMC to view the state of the peers.
Some CLI commands also provide peer state information:
- To view the state of all peers, run this command on the manager:
splunk list cluster-peers
- To view the state of a single peer, run this command on the peer:
splunk list cluster-config
This documentation applies to the following versions of Splunk® Enterprise: 8.2.0, 8.2.1, 8.2.2, 8.2.3, 8.2.4, 8.2.5, 8.2.6, 8.2.7, 8.2.8, 8.2.9, 8.2.10, 8.2.11, 8.2.12, 9.0.0, 9.0.1, 9.0.2, 9.0.3, 9.0.4, 9.0.5, 9.0.6, 9.1.0, 9.1.1