Kubernetes cluster
Note
This monitor is deprecated in favor of the k8s_cluster receiver. See Kubernetes Cluster Receiver for more information.
Description
The Splunk Distribution of OpenTelemetry Collector provides this integration as the kubernetes-cluster monitor by using the SignalFx Smart Agent Receiver.
Use this integration to obtain cluster-level resource metrics from the Kubernetes API server.
This monitor is similar to kube-state-metrics and sends many of the same metrics, but in a way that is less verbose and better suited to Splunk Infrastructure Monitoring.
The kubernetes-cluster monitor does the following:
Uses the watch functionality of the Kubernetes API to listen for updates about the cluster.
Maintains a cache of metrics that are sent at regular intervals.
This monitor is available on Linux and Windows.
Overriding leader election
This monitor defaults to a leader election process to ensure that it is the only agent sending metrics in a cluster. The leader election process is used because:
The agent usually runs in multiple places in a Kubernetes cluster
It is convenient to share the same configuration across all agent instances
With leader election, all of the agents running in the same namespace that have this monitor configured decide among themselves which agent sends metrics for this monitor. The other agents stand by, ready to take over if the leader agent expires.
You can override leader election by setting the configuration option alwaysClusterReporter to true, which makes the monitor always report metrics.
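For example, the following receiver configuration is a minimal sketch that overrides leader election so that every agent instance reports cluster metrics:

receivers:
  smartagent/kubernetes-cluster:
    type: kubernetes-cluster
    alwaysClusterReporter: true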
Note: If you are using OpenShift, use the openshift-cluster monitor type instead of this kubernetes-cluster monitor type. The openshift-cluster monitor type contains additional OpenShift metrics.
Benefits
After you configure the integration, you can access these features:
View metrics. You can create your own custom dashboards, and most monitors provide built-in dashboards as well. For information about dashboards, see View dashboards in Observability Cloud.
View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Splunk Infrastructure Monitoring navigators.
Access the Metric Finder and search for metrics sent by the monitor. For information, see Use the Metric Finder.
Installation
Follow these steps to deploy this integration:
Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.
Configure the monitor, as described in the Configuration section.
Restart the Splunk Distribution of OpenTelemetry Collector.
Configuration
This monitor type is available in the Smart Agent Receiver, which is part of the Splunk Distribution of OpenTelemetry Collector. You can use existing Smart Agent monitors as OpenTelemetry Collector metric receivers with the Smart Agent Receiver.
This monitor type requires a properly configured environment on your system in which you've installed a functional Smart Agent release bundle. The Collector provides this bundle in the installation paths for x86_64/amd64.
To activate this monitor type in the Collector, add the following lines to your configuration (YAML) file:
receivers:
  smartagent/kubernetes-cluster:
    type: kubernetes-cluster
    ... # Additional config
To complete the integration, include the monitor in a metrics pipeline. To do this, add the monitor to the service > pipelines > metrics > receivers section of your configuration file.
See the kubernetes.yaml in GitHub for the Agent and Gateway YAML files.
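For example, a service section that wires this receiver into a metrics pipeline might look like the following sketch. The signalfx exporter name is an assumption for illustration; use whichever exporters and processors your configuration already defines:

service:
  pipelines:
    metrics:
      receivers: [smartagent/kubernetes-cluster]
      exporters: [signalfx]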
Configuration settings
The following tables show the configuration options for this monitor type:
Option | Required | Type | Description
---|---|---|---
alwaysClusterReporter | no | bool | If true, leader election is skipped and metrics are always reported. The default is false.
namespace | no | string | If specified, only resources within the given namespace will be monitored. If omitted (blank), all supported resources across all namespaces will be monitored.
kubernetesAPI | no | object | Configuration for the Kubernetes API client.
nodeConditionTypesToReport | no | list of strings | A list of node status condition types to report as metrics. Each condition type is reported as a separate data point whose value reflects the condition status.
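As an illustration, the following sketch scopes monitoring to a single namespace and reports an additional node condition. The namespace value and condition types are placeholders; adjust them to your cluster:

receivers:
  smartagent/kubernetes-cluster:
    type: kubernetes-cluster
    namespace: my-namespace          # placeholder: monitor only this namespace
    nodeConditionTypesToReport:
      - Ready
      - MemoryPressure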
The nested kubernetesAPI configuration object has the following fields:
Option | Required | Type | Description
---|---|---|---
authType | no | string | How to authenticate to the Kubernetes API server, for example with the pod's service account token or with TLS client certificates.
skipVerify | no | bool | Whether to skip verifying the TLS certificate from the API server. Almost never needed. The default is false.
clientCertPath | no | string | The path to the TLS client certificate on the pod's filesystem, if using TLS client authentication.
clientKeyPath | no | string | The path to the TLS client key on the pod's filesystem, if using TLS client authentication.
caCertPath | no | string | Path to a CA certificate to use when verifying the API server's TLS certificate. Kubernetes provides this alongside the service account token and the monitor picks it up automatically, so you rarely need to specify it.
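For example, a sketch that uses TLS client certificates mounted into the pod might look like the following. The authType value and the file paths are assumptions for illustration; in most in-cluster deployments the default service account authentication works without any kubernetesAPI settings:

receivers:
  smartagent/kubernetes-cluster:
    type: kubernetes-cluster
    kubernetesAPI:
      authType: tls                               # assumption: TLS client-certificate auth
      clientCertPath: /etc/ssl/certs/client.crt   # placeholder path
      clientKeyPath: /etc/ssl/certs/client.key    # placeholder path
      skipVerify: false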
Metrics
The following table shows the legacy metrics that are available for this integration. See OpenTelemetry values and their legacy equivalents for the Splunk Distribution of OpenTelemetry Collector equivalents.
Get help
If you are not able to see your data in Splunk Observability Cloud, try these tips:
Submit a case in the Splunk Support Portal
Available to Splunk Observability Cloud customers
Ask a question and get answers through community support at Splunk Answers
Available to Splunk Observability Cloud customers and free trial users
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide
Available to Splunk Observability Cloud customers and free trial users
To learn how to join, see Get Started with Splunk Community - Chat groups
To learn about even more support options, see Splunk Customer Success.