
Kubernetes cluster


This monitor is deprecated in favor of the k8s_cluster receiver. See Kubernetes Cluster Receiver for more information.

Description

The Splunk Distribution of OpenTelemetry Collector provides this integration as the kubernetes-cluster monitor by using the SignalFx Smart Agent Receiver.

Use this integration to obtain cluster-level resource metrics from the Kubernetes API server.

This monitor is similar to kube-state-metrics and sends many of the same metrics, but less verbosely and in a form better suited to Splunk Infrastructure Monitoring.

The kubernetes-cluster monitor does the following:

  • Uses the watch functionality of the Kubernetes API to listen for updates about the cluster.

  • Maintains a cache of metrics that are sent at regular intervals.

This monitor is available on Linux and Windows.

Overriding leader election

This monitor defaults to a leader election process to ensure that it is the only agent sending metrics in a cluster. The leader election process is used because:

  • The agent usually runs in multiple places in a Kubernetes cluster

  • It is convenient to share the same configuration across all agent instances

Leader election means that all of the agents running in the same namespace that have this monitor configured decide among themselves which agent sends metrics for this monitor, while the other agents stand by, ready to take over if the leader agent fails.

You can override leader election by setting the configuration option alwaysClusterReporter to true, which makes the monitor always report metrics.
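For example, the following sketch shows the option set on a Smart Agent receiver entry in the Collector configuration; the receiver name smartagent/kubernetes-cluster is illustrative:

    receivers:
      smartagent/kubernetes-cluster:
        type: kubernetes-cluster
        # Skip leader election and always report cluster metrics
        alwaysClusterReporter: true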

Note: If you are using OpenShift, use the openshift-cluster monitor type instead of this kubernetes-cluster monitor type. The openshift-cluster monitor type contains additional OpenShift metrics.

Benefits

After you configure the integration, you can access these features:

  • View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Observability Cloud.

  • View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Splunk Infrastructure Monitoring navigators.

  • Access the Metric Finder and search for metrics sent by the monitor. For information, see Use the Metric Finder.

Installation

Follow these steps to deploy this integration:

  1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.

  2. Configure the monitor, as described in the Configuration section.

  3. Restart the Splunk Distribution of OpenTelemetry Collector.

Configuration

This monitor type is available in the Smart Agent Receiver, which is part of the Splunk Distribution of OpenTelemetry Collector. You can use existing Smart Agent monitors as OpenTelemetry Collector metric receivers with the Smart Agent Receiver.

This monitor type requires a properly configured environment on your system in which you've installed a functional Smart Agent release bundle. The Collector provides this bundle in the installation paths for x86_64/amd64.

To activate this monitor type in the Collector, add the following lines to your configuration (YAML) file:

    type: kubernetes-cluster
    ... # Additional config

To complete the integration, include the monitor in a metrics pipeline. To do this, add the monitor to the service > pipelines > metrics > receivers section of your configuration file.
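Putting the two steps together, a minimal Collector configuration might look like the following sketch (the receiver name smartagent/kubernetes-cluster and the omitted exporter settings are illustrative):

    receivers:
      smartagent/kubernetes-cluster:
        type: kubernetes-cluster

    service:
      pipelines:
        metrics:
          # The receiver only takes effect once it is listed in a pipeline
          receivers: [smartagent/kubernetes-cluster]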

See kubernetes.yaml on GitHub for example Agent and Gateway YAML files.

Configuration settings

The following tables show the configuration options for this monitor type:

  • alwaysClusterReporter (bool; default false): If true, leader election is skipped and metrics are always reported.

  • namespace (string): If specified, only resources within the given namespace are monitored. If omitted (blank), all supported resources across all namespaces are monitored.

  • kubernetesAPI (object, see below): Configuration for the Kubernetes API client.

  • nodeConditionTypesToReport (list of strings; default [Ready]): A list of node status condition types to report as metrics. Each is reported as a data point of the form kubernetes.node_<type_snake_cased>, with a value of 0 corresponding to "False", 1 to "True", and -1 to "Unknown".

The nested kubernetesAPI configuration object has the following fields:

  • authType (string; default serviceAccount): How to authenticate to the Kubernetes API server: none for no authentication, tls to use manually specified TLS client certificates (not recommended), serviceAccount to use the standard service account token provided to the agent pod, or kubeConfig to use credentials from ~/.kube/config.

  • skipVerify (bool; default false): Whether to skip verifying the TLS certificate from the API server. Almost never needed.

  • clientCertPath (string): The path to the TLS client certificate on the pod's filesystem, if using tls authentication.

  • clientKeyPath (string): The path to the TLS client key on the pod's filesystem, if using tls authentication.

  • caCertPath (string): Path to a CA certificate to use when verifying the API server's TLS certificate. Kubernetes provides this alongside the service account token and it is picked up automatically, so you rarely need to specify it.
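As a sketch, a monitor entry using tls authentication with the nested object might look like this (the certificate file paths are hypothetical):

    type: kubernetes-cluster
    kubernetesAPI:
      authType: tls                        # default is serviceAccount
      clientCertPath: /etc/ssl/client.crt  # hypothetical path
      clientKeyPath: /etc/ssl/client.key   # hypothetical path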

Metrics

The following table shows the legacy metrics that are available for this integration. See OpenTelemetry values and their legacy equivalents for the Splunk Distribution of OpenTelemetry Collector equivalents.

Get help

If you are not able to see your data in Splunk Observability Cloud, try these tips:

To learn about even more support options, see Splunk Customer Success.