
Kubernetes cluster 🔗

Description 🔗

The Splunk Distribution of OpenTelemetry Collector provides this integration as the kubernetes-cluster monitor via the Smart Agent Receiver. This monitor obtains cluster-level resource metrics from the Kubernetes API.

Note: If you are using OpenShift, use the openshift-cluster monitor instead of this kubernetes-cluster monitor. The openshift-cluster monitor contains additional OpenShift metrics.

The kubernetes-cluster monitor collects cluster-level metrics from the Kubernetes API server. This monitor uses the watch functionality of the Kubernetes API to listen for updates about the cluster and maintains a cache of metrics that get sent on a regular interval.

Since the agent generally runs in multiple places in a Kubernetes cluster, and since it is usually more convenient to share the same configuration across all agent instances, this monitor by default uses a leader election process to ensure that it is the only agent sending metrics in a cluster. All of the agents running in the same namespace that have this monitor configured decide among themselves which one sends metrics for this monitor, and the rest stand by, ready to activate if the leader agent stops running. You can override leader election by setting the configuration option alwaysClusterReporter to true, which makes the monitor always report metrics.

This monitor is similar to kube-state-metrics and sends many of the same metrics, but in a way that is less verbose and better suited to Splunk Infrastructure Monitoring.

Note 🔗

Larger clusters might encounter instability when setting this configuration across a large number of nodes. Enable with caution.
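
For example, to skip leader election and have every agent instance report cluster metrics, set the option in the monitor configuration. This sketch follows the receiver configuration format shown in the Configuration section:

```yaml
receivers:
  smartagent/kubernetes-cluster:
    type: kubernetes-cluster
    # Skip leader election; every agent instance reports cluster metrics.
    # Use with caution on larger clusters.
    alwaysClusterReporter: true
```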


Benefits 🔗

After you’ve configured the integration, you can:

  • View metrics using the built-in dashboard. For information about dashboards, see View dashboards in Observability Cloud.

  • View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Splunk Infrastructure Monitoring navigators.

  • Access Metric Finder and search for metrics sent by the monitor. For information about Metric Finder, see Use the Metric Finder.

Installation 🔗

Follow these steps to deploy the integration:

  1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.

  2. Configure the monitor, as described in the next section.

  3. Restart the Splunk Distribution of OpenTelemetry Collector.

Configuration 🔗

This monitor is available in the Smart Agent Receiver, which is part of the Splunk Distribution of OpenTelemetry Collector. The Smart Agent Receiver lets you use existing Smart Agent monitors as OpenTelemetry Collector metric receivers.

To use this monitor, you need a functional Smart Agent release bundle on your system. The bundle is already provided in x86_64/amd64 Splunk Distribution of OpenTelemetry Collector installation paths.

To activate this monitor in the Splunk Distribution of OpenTelemetry Collector, add the following to your configuration file:

receivers:
  smartagent/kubernetes-cluster:
    type: kubernetes-cluster
    ... # Additional config

To complete the integration, include the monitor in a metrics pipeline. To do this, add the monitor to the service > pipelines > metrics > receivers section of your configuration file.
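
For example, a minimal metrics pipeline that includes this receiver might look like the following. The exporter name here is a placeholder; use the exporter already configured in your deployment:

```yaml
service:
  pipelines:
    metrics:
      receivers: [smartagent/kubernetes-cluster]
      exporters: [signalfx]  # placeholder; use your configured exporter
```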

See the kubernetes.yaml in GitHub for the Agent and Gateway YAML files.

Configuration settings 🔗

The following tables show the configuration options for this monitor type:

Option Required Type Description
alwaysClusterReporter no bool If true, leader election is skipped and metrics are always reported. Default is false.
namespace no string If specified, only resources within the given namespace will be monitored. If omitted (blank), all supported resources across all namespaces will be monitored.
kubernetesAPI no object (see below) Configuration for the Kubernetes API client
nodeConditionTypesToReport no list of strings A list of node status condition types to report as metrics. The metrics will be reported as data points of the form kubernetes.node_<type_snake_cased> with a value of 0 corresponding to "False", 1 to "True", and -1 to "Unknown". Default is [Ready].
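
As an illustration of these options, the following configuration (with a hypothetical namespace name) limits monitoring to a single namespace and reports two node condition types:

```yaml
receivers:
  smartagent/kubernetes-cluster:
    type: kubernetes-cluster
    namespace: my-app            # hypothetical; monitor only this namespace
    nodeConditionTypesToReport:
      - Ready                    # reported as kubernetes.node_ready
      - MemoryPressure           # reported as kubernetes.node_memory_pressure
```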

The nested kubernetesAPI configuration object has the following fields:

Option Required Type Description
authType no string How to authenticate to the K8s API server:
- none for no authentication.
- tls to use manually specified TLS client certs (not recommended).
- serviceAccount to use the standard service account token provided to the agent pod.
- kubeConfig to use credentials from ~/.kube/config.
Default is serviceAccount.
skipVerify no bool Whether to skip verifying the TLS cert from the API server. Almost never needed. Default is false.
clientCertPath no string The path to the TLS client cert on the pod's filesystem, if using tls authentication.
clientKeyPath no string The path to the TLS client key on the pod's filesystem, if using tls authentication.
caCertPath no string Path to a CA certificate to use when verifying the API server's TLS certificate. This is provided by Kubernetes alongside the service account token, which will be picked up automatically, so this should rarely be necessary to specify.
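
As a sketch of the nested kubernetesAPI object, the following configuration uses tls authentication with manually specified certificates. The file paths are hypothetical, and the table above notes that this authentication type is not recommended; serviceAccount (the default) is usually the right choice inside a cluster:

```yaml
receivers:
  smartagent/kubernetes-cluster:
    type: kubernetes-cluster
    kubernetesAPI:
      authType: tls
      clientCertPath: /etc/ssl/certs/client.crt  # hypothetical paths
      clientKeyPath: /etc/ssl/certs/client.key
      caCertPath: /etc/ssl/certs/ca.crt
```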

Metrics 🔗

The following metrics are available for this integration:

Troubleshooting 🔗

If you are not able to see your data in Splunk Observability Cloud:

  • Ask questions and get answers through community support at Splunk Answers.

  • If you have a support contract, file a case using the Splunk Support Portal. See Support and Services.

  • To get professional help with optimizing your Splunk software investment, see Splunk Services.