Kubernetes cluster receiver
The Kubernetes cluster receiver collects cluster metrics using the Kubernetes API server. You can use a single instance of this receiver to monitor an entire Kubernetes cluster. The supported pipeline type is metrics. To filter in or out Kubernetes elements, such as containers, pods, nodes, namespaces, or clusters, use the Filter processor. Learn more at Filter processor. See Process your data with pipelines for more information on the different types of pipelines.
Kubernetes version 1.21 and higher is compatible with the Kubernetes navigator. Lower versions of Kubernetes are not supported for this receiver and might result in the navigator not displaying all clusters.
Note
This receiver replaces the kubernetes-cluster Smart Agent monitor.
Get started
By default, the Kubernetes cluster receiver is already activated in the Helm chart of the Splunk OpenTelemetry Collector. See Configure the Collector for Kubernetes with Helm for more information, including the default Collected metrics and dimensions for Kubernetes.
To activate the Kubernetes cluster receiver manually in the Collector configuration, add k8s_cluster to the receivers section of your configuration file, as shown in the following example:
receivers:
  k8s_cluster:
    auth_type: kubeConfig
    collection_interval: 30s
    node_conditions_to_report: ["Ready", "MemoryPressure"]
    allocatable_types_to_report: ["cpu", "memory"]
    metadata_exporters: [signalfx]
To complete the configuration, include the receiver in the metrics pipeline of the service section of your configuration file. For example:
service:
  pipelines:
    metrics:
      receivers: [k8s_cluster]
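Putting the pieces together, a minimal end-to-end Collector configuration might look like the following sketch. The signalfx exporter and its placeholder access_token and realm values are shown for illustration only; substitute the exporter you actually use in your environment:

```yaml
receivers:
  k8s_cluster:
    auth_type: serviceAccount
    collection_interval: 30s

exporters:
  # Illustrative exporter; replace with the exporter configured in your deployment.
  signalfx:
    access_token: <access_token>
    realm: <realm>

service:
  pipelines:
    metrics:
      receivers: [k8s_cluster]
      exporters: [signalfx]
```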
Sync metadata_exporters
Use the metadata_exporters setting to provide a list of exporters to sync with metadata collected by the Kubernetes cluster receiver. For example:
receivers:
  k8s_cluster:
    auth_type: serviceAccount
    metadata_exporters:
      - signalfx
Exporters specified in this list need to implement the following interface. If an exporter doesn’t implement the interface, startup fails.
type MetadataExporter interface {
	ConsumeMetadata(metadata []*MetadataUpdate) error
}

type MetadataUpdate struct {
	ResourceIDKey string
	ResourceID    ResourceID
	MetadataDelta
}

type MetadataDelta struct {
	MetadataToAdd    map[string]string
	MetadataToRemove map[string]string
	MetadataToUpdate map[string]string
}
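As an illustration, a custom exporter satisfying this interface might look like the following sketch. The type definitions are repeated here so the example compiles standalone; in a real exporter they come from the receiver's package, and logExporter is a hypothetical name:

```go
package main

import "fmt"

// Repeated from the receiver's interface definitions so this example
// compiles standalone; the real types live in the receiver's package.
type ResourceID string

type MetadataDelta struct {
	MetadataToAdd    map[string]string
	MetadataToRemove map[string]string
	MetadataToUpdate map[string]string
}

type MetadataUpdate struct {
	ResourceIDKey string
	ResourceID    ResourceID
	MetadataDelta
}

// logExporter is a hypothetical exporter that prints each metadata update.
type logExporter struct{}

// ConsumeMetadata satisfies the MetadataExporter interface: it receives the
// delta of added, removed, and updated metadata for each resource.
func (e *logExporter) ConsumeMetadata(metadata []*MetadataUpdate) error {
	for _, u := range metadata {
		fmt.Printf("%s=%s: added=%d removed=%d updated=%d\n",
			u.ResourceIDKey, u.ResourceID,
			len(u.MetadataToAdd), len(u.MetadataToRemove), len(u.MetadataToUpdate))
	}
	return nil
}

func main() {
	e := &logExporter{}
	_ = e.ConsumeMetadata([]*MetadataUpdate{{
		ResourceIDKey: "k8s.pod.uid",
		ResourceID:    "abc-123",
		MetadataDelta: MetadataDelta{
			MetadataToAdd: map[string]string{"k8s.workload.kind": "Deployment"},
		},
	}})
}
```

If the exporter named in metadata_exporters doesn't provide a ConsumeMetadata method like this one, the Collector fails at startup, as noted above.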
Set node_conditions_to_report
Use the following configuration to have the k8s_cluster receiver emit two metrics, k8s.node.condition_ready and k8s.node.condition_memory_pressure, one for each condition in the configuration:
# ...
k8s_cluster:
  node_conditions_to_report:
    - Ready
    - MemoryPressure
# ...
The value is 1 if the ConditionStatus for the corresponding Condition is True, 0 if it’s False, and -1 if it’s Unknown. To learn more, search for “Conditions” in the Kubernetes documentation.
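The mapping from condition status to metric value can be sketched as a small function (conditionMetricValue is a hypothetical name for illustration; the receiver implements this conversion internally):

```go
package main

import "fmt"

// conditionMetricValue mirrors how the receiver converts a node
// ConditionStatus into the emitted metric value:
// True -> 1, False -> 0, anything else (Unknown) -> -1.
func conditionMetricValue(status string) int {
	switch status {
	case "True":
		return 1
	case "False":
		return 0
	default:
		return -1
	}
}

func main() {
	for _, s := range []string{"True", "False", "Unknown"} {
		fmt.Printf("status %q -> metric value %d\n", s, conditionMetricValue(s))
	}
}
```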
Settings
The following table shows the configuration options for the receiver:
Metrics
The following metrics, resource attributes, and attributes are available.
Note
The SignalFx exporter excludes some available metrics by default. Learn more about default metric filters in List of metrics excluded by default. See Collected metrics and dimensions for Kubernetes to see how the Collector processes Kubernetes metrics.
Activate or deactivate specific metrics
You can activate or deactivate specific metrics by setting the enabled field in the metrics section for each metric. For example:
receivers:
  samplereceiver:
    metrics:
      metric-one:
        enabled: true
      metric-two:
        enabled: false
The following is an example of host metrics receiver configuration with activated metrics:
receivers:
  hostmetrics:
    scrapers:
      process:
        metrics:
          process.cpu.utilization:
            enabled: true
Note
Deactivated metrics aren’t sent to Splunk Observability Cloud.
Billing
If you’re in an MTS-based subscription, all metrics count towards metrics usage.
If you’re in a host-based plan, metrics listed as active (Active: Yes) in this document are considered default and are included free of charge.
Learn more at Infrastructure Monitoring subscription usage (Host and metric plans).
Troubleshooting
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
Submit a case in the Splunk Support Portal.
Contact Splunk Support.
Available to prospective customers and free trial users
Ask a question and get answers through community support at Splunk Answers.
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.