Splunk® Enterprise

Managing Indexers and Clusters of Indexers




View the master dashboard

This dashboard provides detailed information on the status of the entire indexer cluster. You can also get information on each of the master's peer nodes from here.

For information on the other clustering dashboards, see View the peer dashboard and View the search head dashboard.

Access the master dashboard

  1. Click Settings on the upper right side of Splunk Web.
  2. In the Distributed Environment group, click Indexer clustering.
    The Master Node dashboard appears.

You can only view this dashboard on an instance that has been enabled as a master.
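If you prefer the command line, you can get a comparable health summary by running the splunk show cluster-status command on the master. This is a minimal sketch; the -auth credentials are placeholders, and the exact output varies by version.

    # Run on the master node to summarize peer status and whether the
    # replication and search factors are met. Credentials are placeholders.
    splunk show cluster-status -auth admin:changeme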

View the master dashboard

The master dashboard contains these sections:

Cluster overview

The cluster overview summarizes the health of your cluster. It tells you:

  • whether the cluster's data is fully searchable; that is, whether all buckets in the cluster have a primary copy.
  • whether the search and replication factors have been met.
  • how many peers are searchable.
  • how many indexes are searchable.

Depending on the health of your cluster, it might also provide warning messages such as:

  • Some data is not searchable.
  • Replication factor not met.
  • Search factor not met.

For details on the information presented in the cluster overview, browse the tabs underneath.

On the upper right side of the dashboard are the following buttons:

  • More Info. This button provides details on the master node configuration (a configuration sketch follows this list):
    • Name. The master's serverName, as specified in the master's $SPLUNK_HOME/etc/system/local/server.conf file.
    • Replication Factor. The cluster's replication factor.
    • Search Factor. The cluster's search factor.
    • Generation ID. The cluster's current generation ID.
  • Documentation.
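The Name, Replication Factor, and Search Factor fields under More Info correspond to settings in the master's server.conf. The sketch below uses illustrative values; the mode setting is included only for context.

    # $SPLUNK_HOME/etc/system/local/server.conf on the master (illustrative values)
    [general]
    serverName = cluster-master-01      # appears as "Name" on the dashboard

    [clustering]
    mode = master
    replication_factor = 3              # appears as "Replication Factor"
    search_factor = 2                   # appears as "Search Factor"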

Peers tab

For each peer, the master dashboard lists:

  • Peer Name. The peer's serverName, as specified in the peer's $SPLUNK_HOME/etc/system/local/server.conf file.
  • Fully searchable. This column indicates whether the peer currently has a complete set of primaries and is fully searchable.
  • Site. (For multisite only.) This column displays the site value for each peer.
  • Status. The peer's status. For more information about the processes discussed here, see Take a peer offline. Example commands for these transitions appear after this list. Possible values include:
    • Up
    • Pending. This occurs when a replication fails. It transitions back to Up on the next successful heartbeat from the peer to the master.
    • AutomaticDetention. A peer goes into this state when it runs low on disk space. While in this state, a peer does not perform its normal functions. For details, see Put a peer in detention.
    • ManualDetention. A peer goes into this state through manual intervention. While in this state, a peer does not perform most of its normal functions. For details, see Put a peer in detention.
    • ManualDetention-PortsEnabled. A peer goes into this state through manual intervention. While in this state, a peer continues to consume and index external data, but it does not serve as a replication target. It continues to participate in searches. See Put a peer in detention.
    • Restarting. When you run the splunk offline command without the enforce-counts flag, the peer enters this state temporarily after it leaves the ReassigningPrimaries state. It remains in this state for the restart_timeout period (60 seconds by default). If you do not restart the peer within this time, it then moves to the Down state. The peer also enters this state during rolling restarts or if restarted via Splunk Web.
    • ShuttingDown. The master detects that the peer is shutting down.
    • ReassigningPrimaries. A peer enters this state temporarily when you run the splunk offline command without the enforce-counts flag.
    • Decommissioning. When you run the splunk offline command with the enforce-counts flag, the peer enters this state and remains there until all bucket fixing is complete and the peer can shut down.
    • GracefulShutdown. When you run the splunk offline command with the enforce-counts flag, the peer enters this state when it finally shuts down at the end of a successful decommissioning. It remains in this state for as long as it is offline.
    • Stopped. The peer enters this state when you stop it with the splunk stop command.
    • Down. The peer enters this state when it goes offline for any reason other than those resulting in a status of GracefulShutdown or Stopped: either you ran the splunk offline command without the enforce-counts flag and the peer stayed down for longer than the restart_timeout period (60 seconds by default), or the peer went offline for some other reason (for example, it crashed).
  • Buckets. The number of buckets for which the peer has copies.
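Many of the states above result from commands run on the peer. The sketch below shows those commands with placeholder credentials; see Take a peer offline and Put a peer in detention before using them on a production peer.

    # Take the peer offline without enforce-counts: the peer passes through
    # ReassigningPrimaries and Restarting, and moves to Down if it does not
    # return within the restart_timeout period.
    splunk offline

    # Take the peer offline with enforce-counts: the peer stays in Decommissioning
    # until bucket fixup completes, then ends in GracefulShutdown.
    splunk offline --enforce-counts

    # Stop the peer outright; its status becomes Stopped.
    splunk stop

    # Put the peer into manual detention (ManualDetention); credentials are placeholders.
    splunk edit cluster-config -manual_detention on -auth admin:changeme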

To get more information for any peer, click on the arrow to the left of the peer name. These fields appear (a peer-side configuration sketch follows the list):

  • Location. The peer's IP address and port number.
  • Last Heartbeat. The time of the last heartbeat the master received from the peer.
  • Replication port. The port on which the peer receives replicated data from other peers.
  • Base generation ID. The peer's base generation ID, which is equivalent to the cluster's generation ID at the moment that the peer last joined the cluster. This ID is less than or equal to the cluster's current generation ID. For example, if a peer joined the cluster at generation 1 and has stayed in the cluster ever since, its base generation ID remains 1, even though the cluster might have incremented its current generation ID to, say, 5.
  • GUID. The peer's GUID.
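For reference, the Replication port and the peer's cluster membership come from the peer's own server.conf. The sketch below uses illustrative values; the master URI, port numbers, and security key are placeholders.

    # $SPLUNK_HOME/etc/system/local/server.conf on a peer (illustrative values)
    [replication_port://9887]           # the "Replication port" shown on the dashboard

    [clustering]
    mode = slave
    master_uri = https://cluster-master-01:8089
    pass4SymmKey = <your_cluster_security_key>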

Note: After a peer goes down, it continues to appear on the list of peers, although its status changes to "Down" or "GracefulShutdown." To remove the peer from the master's list, see Remove a peer from the master's list.

Indexes tab

For each index, the master dashboard lists:

  • Index Name. The name of the index. The names of internal indexes begin with an underscore (_).
  • Fully searchable. Is the index fully searchable? In other words, does it have at least one searchable copy of each bucket? If even one bucket in the index does not have a searchable copy, this field will report the index as non-searchable.
  • Searchable Data Copies. The number of complete searchable copies of the index that the cluster has.
  • Replicated Data Copies. The number of copies of the index that the cluster has. Each copy must be complete, with no buckets missing.
  • Buckets. The number of buckets in the index. This number does not include replicated bucket copies.
  • Cumulative Raw Data Size. The size of the index's raw data, excluding hot buckets. This number does not include replicated copies of the raw data.

The list of indexes includes the internal indexes, _audit and _internal. As you would expect in a cluster, these internal indexes contain the combined data generated by all peers in the cluster. If you need to search for the data generated by a single peer, you can search on the peer's host name.
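For example, a search like this one (a sketch; the host value is a placeholder) limits internal-index results to a single peer:

    index=_internal host=peer01.example.com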

This tab also includes a button labeled Bucket Status. Click it to go to the Bucket Status dashboard. See View the bucket status dashboard.

Note: A new index appears here only after it contains some data. In other words, if you configure a new index on the peer nodes, a row for that index appears only after you send data to that index.
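For reference, a clustered index is typically defined in indexes.conf on the peer nodes, usually distributed through the master's configuration bundle. The sketch below is illustrative; the index name and paths are placeholders, and repFactor = auto is the setting that marks the index for replication.

    # indexes.conf on the peer nodes (illustrative)
    [new]
    homePath   = $SPLUNK_DB/new/db
    coldPath   = $SPLUNK_DB/new/colddb
    thawedPath = $SPLUNK_DB/new/thaweddb
    repFactor  = auto                   # replicate this index across the cluster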

Search heads tab

For each search head accessing this cluster, the master dashboard lists:

  • Search head name. The search head's serverName, as specified in its $SPLUNK_HOME/etc/system/local/server.conf file.
  • Site. (For multisite only.) This column displays the site value for each search head.
  • Status. Is the search head up or down? The master decides that a search head is down if the search head does not poll the master for generation information within a period twice the length of the generation_poll_interval. That attribute is configurable in server.conf.
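The sketch below shows where that attribute lives, assuming the standard [clustering] stanza that a search head uses to join the cluster; the values are illustrative.

    # $SPLUNK_HOME/etc/system/local/server.conf on the search head (illustrative values)
    [clustering]
    mode = searchhead
    master_uri = https://cluster-master-01:8089
    generation_poll_interval = 60       # seconds between polls for generation information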

Note: The list includes the master node as one of the search heads. Although the master has search head capabilities, you should only use those capabilities for debugging purposes. The resources of the master must be dedicated to fulfilling its critical role of coordinating cluster activities. Under no circumstances should the master be employed as a production search head. Also, unlike a dedicated search head, the search head on the master cannot be configured for multi-cluster search; it can only search its own cluster.

To get more information for any search head, click on the arrow to the left of the search head name. These fields appear:

  • Location. The search head's server name and port number.
  • GUID. The search head's GUID.

View the bucket status dashboard

The Bucket Status dashboard provides status for the buckets in the cluster. It contains three tabs:

  • Fixup Tasks - In Progress
  • Fixup Tasks - Pending
  • Indexes with Excess Buckets

Fixup Tasks - In Progress

This tab provides a list of buckets that are currently being fixed. For example, if a bucket has too few copies, fixup activities must occur to return the cluster to a valid and complete state. While those activities are occurring, the bucket appears on this list.

Fixup Tasks - Pending

This tab provides a list of buckets that are waiting to be fixed. You can filter the fixup tasks by search factor, replication factor, and generation.
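The same list is available programmatically. The sketch below assumes the cluster/master/fixup REST endpoint with its level filter; the host, port, and credentials are placeholders.

    # Query pending fixup tasks from the master's management port (placeholders shown).
    curl -k -u admin:changeme \
        "https://cluster-master-01:8089/services/cluster/master/fixup?level=replication_factor"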

For more information on bucket fixup activities, see What happens when a peer goes down.

This tab also includes an Action button that allows you to fix issues with individual buckets. For details, see Handle issues with individual buckets.

Indexes with Excess Buckets

This tab provides a list of indexes with excess bucket copies. It enumerates both buckets with excess copies and buckets with excess searchable copies. It also enumerates the total excess copies in each category. For example, if your index "new" has one bucket with three excess copies, one of which is searchable, and a second bucket with one excess copy, which is non-searchable, the row for "new" will report:

  • 2 buckets with excess copies
  • 1 bucket with excess searchable copies
  • 4 total excess copies
  • 1 total excess searchable copies

If you want to remove the excess copies for a single index, click the Remove button on the right side of the row for that index.

If you want to remove the excess copies for all indexes, click the Remove All Excess Buckets button.

For more information on excess bucket copies, see Remove excess bucket copies from the indexer cluster.
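You can also list and remove excess copies from the master's CLI. The sketch below assumes the list excess-buckets and remove excess-buckets commands covered in Remove excess bucket copies from the indexer cluster; the index name is a placeholder.

    # Run on the master. List excess bucket copies, for all indexes or a single index.
    splunk list excess-buckets
    splunk list excess-buckets new

    # Remove excess copies, for all indexes or a single index.
    splunk remove excess-buckets
    splunk remove excess-buckets new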

Use the monitoring console to view status

You can use the monitoring console to monitor most aspects of your deployment, including the status of your indexer cluster. The information available through the console duplicates much of the information available on the master dashboard.

For more information, see Use the monitoring console to view indexer cluster status.

Last modified on 09 June, 2021


