
Kubelet stats receiver

The Kubelet stats receiver pulls node, pod, and container metrics from the API server on a kubelet and sends them through the metrics pipeline for further processing. The supported pipeline type is metrics. See Process your data with pipelines for more information.

Note

This receiver replaces the kubelet-stats, kubelet-metrics, and kubernetes-volumes Smart Agent monitors.

Get started

Follow these steps to configure and activate the component:

  1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.

  2. Configure the Kubelet stats receiver as described in the next section.

  3. Restart the Collector.

Sample configuration

To activate the Kubelet stats receiver, add kubeletstats to the receivers section of your configuration file:

receivers:
  kubeletstats:

To complete the configuration, include the receiver in the metrics pipeline of the service section of your configuration file:

service:
  pipelines:
    metrics:
      receivers: [kubeletstats]

Authenticate your Kubelet stats receiver connection

A kubelet runs on a Kubernetes node and exposes an API server to which the Kubelet stats receiver connects. To configure the receiver, set the connection and authentication details, and specify how often to collect and send data.

There are two ways to authenticate, as indicated by the auth_type field:

  • tls tells the receiver to use TLS for authentication and requires that the ca_file, key_file, and cert_file fields be set. See more at Configure TLS.

  • serviceAccount tells the receiver to use the default service account token to authenticate to the kubelet API.
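With serviceAccount authentication, the service account under which the Collector runs also needs RBAC permission to read node stats from the kubelet API. The following is a minimal sketch of such a ClusterRole and binding; the names (otel-kubeletstats, otel-collector) and the namespace are placeholders, and you should match them to your own deployment:

```yaml
# Grants read access to kubelet node stats (required by the kubeletstats receiver).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-kubeletstats        # placeholder name
rules:
  - apiGroups: [""]
    resources: ["nodes/stats"]
    verbs: ["get"]
---
# Binds the role to the Collector's service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-kubeletstats
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-kubeletstats
subjects:
  - kind: ServiceAccount
    name: otel-collector         # placeholder: the Collector's service account
    namespace: default           # placeholder: the Collector's namespace
```

If these permissions are missing, the receiver's requests to the kubelet endpoint fail with authorization errors.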

Configure TLS authentication

The following example shows how to configure the kubelet stats receiver with TLS authentication:

receivers:
  kubeletstats:
    collection_interval: 20s
    auth_type: "tls"
    ca_file: "/path/to/ca.crt"
    key_file: "/path/to/apiserver.key"
    cert_file: "/path/to/apiserver.crt"
    endpoint: "192.168.64.1:10250"
    insecure_skip_verify: true

exporters:
  file:
    path: "fileexporter.txt"

service:
  pipelines:
    metrics:
      receivers: [kubeletstats]
      exporters: [file]

Configure service account authentication

The following example shows how to configure the kubeletstats receiver with service account authentication.

  1. Make sure the pod spec sets the node name:

    env:
      - name: K8S_NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
    
  2. Configure the Collector to reference the K8S_NODE_NAME environment variable:

receivers:
  kubeletstats:
    collection_interval: 20s
    auth_type: "serviceAccount"
    endpoint: "${K8S_NODE_NAME}:10250"
    insecure_skip_verify: true
exporters:
  file:
    path: "fileexporter.txt"
service:
  pipelines:
    metrics:
      receivers: [kubeletstats]
      exporters: [file]

Caution

A missing or empty endpoint value causes the host name on which the Collector is running to be used as the endpoint. If the hostNetwork flag is set, and the Collector is running in a Pod, the host name resolves to the node’s network namespace.

Advanced use cases

Add metrics excluded by default

To collect metrics that the SignalFx exporter excludes by default, use the include_metrics option in the exporter configuration, as in the following example:

exporters:
  signalfx:
    include_metrics:
      - metric_names:
          - container.memory.rss.bytes
          - container.memory.available.bytes

Add additional metadata attributes

By default, all produced metrics get resource attributes based on what the kubelet's /stats/summary endpoint provides. For some use cases this might not be enough, so the receiver can query other endpoints to retrieve additional metadata entities and set them as extra attributes on the metric resource.

The kubelet stats receiver supports the following metadata:

  • container.id: Enriches metric metadata with the Container ID label obtained from container statuses exposed using /pods.

  • k8s.volume.type: Collects the volume type from the Pod spec exposed using /pods and adds it as an attribute to volume metrics. If more metadata than the volume type is available, the receiver syncs it depending on the available fields and the type of volume. For example, aws.volume.id is synced from awsElasticBlockStore and gcp.pd.name is synced from gcePersistentDisk.

To add the container.id label to your metrics, set the extra_metadata_labels field. For example:

receivers:
  kubeletstats:
    collection_interval: 10s
    auth_type: "serviceAccount"
    endpoint: "${K8S_NODE_NAME}:10250"
    insecure_skip_verify: true
    extra_metadata_labels:
      - container.id

If extra_metadata_labels isn’t set, no additional API calls are made to retrieve metadata.

Collect additional volume metadata

When dealing with persistent volume claims, you can sync metadata from the underlying storage resource. For example:

receivers:
  kubeletstats:
    collection_interval: 10s
    auth_type: "serviceAccount"
    endpoint: "${K8S_NODE_NAME}:10250"
    insecure_skip_verify: true
    extra_metadata_labels:
      - k8s.volume.type
    k8s_api_config:
      auth_type: serviceAccount

If k8s_api_config is set, the receiver attempts to collect metadata from underlying storage resources for persistent volume claims. For example, if a Pod is using a persistent volume claim backed by an Elastic Block Store (EBS) instance on AWS, the receiver sets the k8s.volume.type label to awsElasticBlockStore rather than persistentVolumeClaim.

Configure metric groups

A metric group is a collection of metrics by component type. By default, metrics from containers, pods, and nodes are collected. If metric_groups is set, then only metrics from the listed groups are collected. Valid groups are container, pod, node, and volume.

For example, to collect only node and pod metrics from the receiver:

receivers:
  kubeletstats:
    collection_interval: 10s
    auth_type: "serviceAccount"
    endpoint: "${K8S_NODE_NAME}:10250"
    insecure_skip_verify: true
    metric_groups:
      - node
      - pod

Configure optional parameters

You can also set the following optional parameters:

  • collection_interval, which is the interval at which to collect data. The default value is 10s.

  • insecure_skip_verify, which specifies whether or not to skip server certificate chain and host name verification. The default value is false.
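As an illustration of these two parameters, the following sketch lengthens the collection interval and keeps certificate verification at its default; the endpoint and auth_type values are taken from the earlier examples:

```yaml
receivers:
  kubeletstats:
    auth_type: "serviceAccount"
    endpoint: "${K8S_NODE_NAME}:10250"
    collection_interval: 30s      # default is 10s
    insecure_skip_verify: false   # default; verifies the kubelet certificate
```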

Settings

The following table shows the configuration options for the Kubelet stats receiver:

Metrics

The following metrics, resource attributes, and attributes are available.

Note

The SignalFx exporter excludes some available metrics by default. Learn more about default metric filters in List of metrics excluded by default.

Activate or deactivate specific metrics

You can activate or deactivate specific metrics by setting the enabled field in the metrics section for each metric. For example:

receivers:
  samplereceiver:
    metrics:
      metric-one:
        enabled: true
      metric-two:
        enabled: false

The following is an example of host metrics receiver configuration with activated metrics:

receivers:
  hostmetrics:
    scrapers:
      process:
        metrics:
          process.cpu.utilization:
            enabled: true

Note

Deactivated metrics aren’t sent to Splunk Observability Cloud.

Billing

  • If you’re in an MTS-based subscription, all metrics count towards metrics usage.

  • If you’re in a host-based plan, metrics listed as active (Active: Yes) in this document are considered default and are included free of charge.

Learn more at Infrastructure Monitoring subscription usage (Host and metric plans).

Troubleshooting

If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.

Available to prospective customers and free trial users

  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.

This page was last updated on Dec 12, 2024.