View the manager node dashboard
This dashboard provides detailed information on the status of the entire indexer cluster. You can also get information on each of the manager's peer nodes from here.
For information on the other clustering dashboards, see the related dashboard topics in this chapter.
In some versions of Splunk Enterprise, the manager node dashboard is labeled as the "master node" dashboard. Aside from the label, the dashboards are identical.
Access the manager node dashboard
- Click Settings on the upper right side of Splunk Web.
- In the Distributed Environment group, click Indexer clustering.
The Manager Node dashboard appears.
You can only view this dashboard on an instance that has been enabled as a manager.
View the manager node dashboard
The manager node dashboard contains these sections:
Cluster overview
The cluster overview summarizes the health of your cluster. It tells you:
- whether the cluster's data is fully searchable; that is, whether all buckets in the cluster have a primary copy.
- whether the search and replication factors have been met.
- how many peers are searchable.
- how many indexes are searchable.
Depending on the health of your cluster, it might also provide warning messages such as:
- Some data is not searchable.
- Replication factor not met.
- Search factor not met.
For details on the information presented in the cluster overview, see the tabs below it.
On the upper right side of the dashboard, there are three buttons:
- Edit. For information on this button, see Configure the manager node with the dashboard.
- More Info. This button provides details on the manager node configuration:
  - Name. The manager's `serverName`, as specified in the manager's `$SPLUNK_HOME/etc/system/local/server.conf` file.
  - Replication Factor. The cluster's replication factor.
  - Search Factor. The cluster's search factor.
  - Generation ID. The cluster's current generation ID.
- Documentation.
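The values shown under More Info come from the manager's own configuration. The following is a minimal, illustrative sketch of the relevant `server.conf` stanzas on the manager node; the server name, factors, and key are placeholder values, and in version 8.1 the configured mode value is `master` even though the UI says "manager":

```ini
# $SPLUNK_HOME/etc/system/local/server.conf on the manager node (illustrative)
[general]
serverName = cluster-manager      # shown as "Name" under More Info

[clustering]
mode = master                     # manager-node role (8.1 conf value)
replication_factor = 3            # shown as "Replication Factor"
search_factor = 2                 # shown as "Search Factor"
pass4SymmKey = yourSecretKey      # shared cluster secret (placeholder)
```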
Peers tab
For each peer, the manager node dashboard lists:
- Peer Name. The peer's `serverName`, as specified in the peer's `$SPLUNK_HOME/etc/system/local/server.conf` file.
- Fully searchable. This column indicates whether the peer currently has a complete set of primaries and is fully searchable.
- Site. (For multisite only.) This column displays the site value for each peer.
- Status. The peer's status. For more information about the processes discussed here, see Take a peer offline. Possible values include:
  - Up.
  - Pending. This occurs when a replication fails. It transitions back to Up on the next successful heartbeat from the peer to the manager.
  - AutomaticDetention. A peer goes into this state when it runs low on disk space. While in this state, a peer does not perform its normal functions. For details, see Put a peer in detention.
  - ManualDetention. A peer goes into this state through manual intervention. While in this state, a peer does not perform most of its normal functions. For details, see Put a peer in detention.
  - ManualDetention-PortsEnabled. A peer goes into this state through manual intervention. While in this state, a peer continues to consume and index external data, but it does not serve as a replication target. It continues to participate in searches. See Put a peer in detention.
  - Restarting. When you run the `splunk offline` command without the `enforce-counts` flag, the peer enters this state temporarily after it leaves the ReassigningPrimaries state. It remains in this state for the `restart_timeout` period (60 seconds by default). If you do not restart the peer within this time, it then moves to the Down state. The peer also enters this state during rolling restarts or if restarted via Splunk Web.
  - ShuttingDown. The manager detects that the peer is shutting down.
  - ReassigningPrimaries. A peer enters this state temporarily when you run the `splunk offline` command without the `enforce-counts` flag.
  - Decommissioning. When you run the `splunk offline` command with the `enforce-counts` flag, the peer enters this state and remains there until all bucket fixing is complete and the peer can shut down.
  - GracefulShutdown. When you run the `splunk offline` command with the `enforce-counts` flag, the peer enters this state when it finally shuts down at the end of a successful decommissioning. It remains in this state for as long as it is offline.
  - Stopped. The peer enters this state when you stop it with the `splunk stop` command.
  - Down. The peer enters this state when it goes offline for any reason other than those resulting in a status of GracefulShutdown or Stopped: either you ran the `splunk offline` command without the `enforce-counts` flag and the peer stayed down for longer than the `restart_timeout` period (60 seconds by default), or the peer went offline for some other reason (for instance, it crashed).
- Buckets. The number of buckets for which the peer has copies.
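The `restart_timeout` referenced in the Restarting and Down states is a manager-side setting. The following is a hedged sketch, assuming you want to give peers more time to return after an offline; the value shown is illustrative, and in version 8.1 the configured mode value is `master`:

```ini
# server.conf on the manager node (illustrative)
[clustering]
mode = master
restart_timeout = 180   # seconds the manager waits for an offlined peer to rejoin; default is 60
```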
To get more information for any peer, click on the arrow to the left of the peer name. These fields appear:
- Location. The peer's IP address and port number.
- Last Heartbeat. The time of the last heartbeat the manager received from the peer.
- Replication port. The port on which the peer receives replicated data from other peers.
- Base generation ID. The peer's base generation ID, which is equivalent to the cluster's generation ID at the moment that the peer last joined the cluster. This ID will be less than or equal to the cluster's current generation ID. So, if a peer joined the cluster at generation 1 and has stayed in the cluster ever since, its base generation ID remains 1, even though the cluster might have incremented its current generation ID to, say, 5.
- GUID. The peer's GUID.
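The Replication port field reflects the peer's own configuration. The following is a minimal, illustrative sketch of the relevant stanzas in a peer's `server.conf`; the port, manager URI, and key are placeholders, and in version 8.1 the configured peer mode value is `slave`:

```ini
# server.conf on a peer node (illustrative placeholders)
[replication_port://9887]
# port shown as "Replication port" in the peer details

[clustering]
mode = slave                      # peer-node role (8.1 conf value)
master_uri = https://manager.example.com:8089
pass4SymmKey = yourSecretKey
```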
Note: After a peer goes down, it continues to appear on the list of peers, although its status changes to "Down" or "GracefulShutdown." To remove the peer from the manager's list, see Remove a peer from the manager node's list.
Indexes tab
For each index, the manager node dashboard lists:
- Index Name. The name of the index. Internal indexes are preceded by an underscore (_).
- Fully searchable. Is the index fully searchable? In other words, does it have at least one searchable copy of each bucket? If even one bucket in the index does not have a searchable copy, this field will report the index as non-searchable.
- Searchable Data Copies. The number of complete searchable copies of the index that the cluster has.
- Replicated Data Copies. The number of copies of the index that the cluster has. Each copy must be complete, with no buckets missing.
- Buckets. The number of buckets in the index. This number does not include replicated bucket copies.
- Cumulative Raw Data Size. The size of the index's raw data, excluding hot buckets. This number does not include replicated copies of the raw data.
The list of indexes includes the internal indexes, _audit and _internal. As you would expect in a cluster, these internal indexes contain the combined data generated by all peers in the cluster. If you need to search for the data generated by a single peer, you can search on the peer's host name.
This tab also includes a Bucket Status button. Click it to go to the Bucket Status dashboard. See View the bucket status dashboard.
Note: A new index appears here only after it contains some data. In other words, if you configure a new index on the peer nodes, a row for that index appears only after you send data to that index.
Search heads tab
For each search head accessing this cluster, the manager node dashboard lists:
- Search head name. The search head's `serverName`, as specified in its `$SPLUNK_HOME/etc/system/local/server.conf` file.
- Site. (For multisite only.) This column displays the site value for each search head.
- Status. Is the search head up or down? The manager decides that a search head is down if the search head does not poll the manager for generation information within a period twice the length of the `generation_poll_interval` attribute, which is configurable in `server.conf`.
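The polling behavior that drives the Status column is configured on the search head side. The following is an illustrative sketch of a search head's `server.conf`; the manager URI, key, and interval value are placeholders, and in version 8.1 the configured mode value is `searchhead`:

```ini
# server.conf on a search head (illustrative placeholders)
[clustering]
mode = searchhead
master_uri = https://manager.example.com:8089
pass4SymmKey = yourSecretKey
generation_poll_interval = 60   # seconds between generation polls (illustrative value)
```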
Note: The list includes the manager node as one of the search heads. Although the manager has search head capabilities, you should only use those capabilities for debugging purposes. The resources of the manager must be dedicated to fulfilling its critical role of coordinating cluster activities. Under no circumstances should the manager be employed as a production search head. Also, unlike a dedicated search head, the search head on the manager cannot be configured for multi-cluster search; it can only search its own cluster.
To get more information for any search head, click on the arrow to the left of the search head name. These fields appear:
- Location. The search head's server name and port number.
- GUID. The search head's GUID.
View the bucket status dashboard
The Bucket Status dashboard provides status for the buckets in the cluster. It contains three tabs:
- Fixup Tasks - In Progress
- Fixup Tasks - Pending
- Indexes with Excess Buckets
Fixup Tasks - In Progress
This tab provides a list of buckets that are currently being fixed. For example, if a bucket has too few copies, fixup activities must occur to return the cluster to a valid and complete state. While those activities are occurring, the bucket appears on this list.
Fixup Tasks - Pending
This tab provides a list of buckets that are waiting to be fixed. You can filter the fixup tasks by search factor, replication factor, and generation.
For more information on bucket fixup activities, see What happens when a peer goes down.
This tab also includes an Action button that allows you to fix issues with individual buckets. For details, see Handle issues with individual buckets.
Indexes with Excess Buckets
This tab provides a list of indexes with excess bucket copies. It enumerates both buckets with excess copies and buckets with excess searchable copies. It also enumerates the total excess copies in each category. For example, if your index "new" has one bucket with three excess copies, one of which is searchable, and a second bucket with one excess copy, which is non-searchable, the row for "new" will report:
- 2 buckets with excess copies
- 1 bucket with excess searchable copies
- 4 total excess copies
- 1 total excess searchable copies
If you want to remove the excess copies for a single index, click the Remove button on the right side of the row for that index.
If you want to remove the excess copies for all indexes, click the Remove All Excess Buckets button.
For more information on excess bucket copies, see Remove excess bucket copies from the indexer cluster.
Use the monitoring console to view status
You can use the monitoring console to monitor most aspects of your deployment, including the status of your indexer cluster. The information available through the console duplicates much of the information available on the manager node dashboard.
For more information, see Use the monitoring console to view indexer cluster status.
This documentation applies to the following versions of Splunk® Enterprise: 8.1.0, 8.1.1, 8.1.2, 8.1.3, 8.1.4, 8.1.5, 8.1.6, 8.1.7, 8.1.8, 8.1.9, 8.1.10, 8.1.11, 8.1.12, 8.1.13, 8.1.14