
Kubernetes proxy

The Splunk Distribution of OpenTelemetry Collector uses the Smart Agent receiver with the kubernetes-proxy monitor type to export metrics from the kube-proxy metrics endpoint, which serves them in Prometheus format.

The integration queries the /metrics path by default when no path is configured, and converts Prometheus metric types to Splunk Observability Cloud metric types as described in Prometheus Exporter.
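For instance, if your kube-proxy serves metrics on a non-default path, a minimal sketch of overriding the default could look like the following; the path shown is only an illustration, not a value taken from kube-proxy:

receivers:
  smartagent/kubernetes-proxy:
    type: kubernetes-proxy
    host: localhost
    port: 10249
    # Hypothetical non-default path; kube-proxy normally serves /metrics.
    metricPath: /custom-metrics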

This monitor type is available on Kubernetes, Linux, and Windows.

Benefits

After you configure the integration, you can access these features:

Installation

Follow these steps to deploy this integration:

  1. Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform.

  2. Configure the integration, as described in the Configuration section.

  3. Restart the Splunk Distribution of the OpenTelemetry Collector.

Configuration

To use this Smart Agent monitor integration with the Collector:

  1. Include the Smart Agent receiver in your configuration file.

  2. Add the monitor type to the Collector configuration, both in the receivers and pipelines sections.

Example

To activate this integration, add the following to your Collector configuration:

receivers:
  smartagent/kubernetes-proxy:
    type: kubernetes-proxy
    ... # Additional config

Next, add the monitor to the service.pipelines.metrics.receivers section of your configuration file:

service:
  pipelines:
    metrics:
      receivers: [smartagent/kubernetes-proxy]
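Putting the two pieces together, a minimal end-to-end sketch, assuming you send metrics to Splunk Observability Cloud with the signalfx exporter, might look like the following. The access token and realm values are placeholders for your environment:

receivers:
  smartagent/kubernetes-proxy:
    type: kubernetes-proxy
    host: localhost
    port: 10249

exporters:
  signalfx:
    # Placeholder values; set your own access token and realm.
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: us0

service:
  pipelines:
    metrics:
      receivers: [smartagent/kubernetes-proxy]
      exporters: [signalfx]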

Example: Kubernetes observer

The following is an example YAML configuration:

receivers:
  smartagent/kubernetes-proxy:
    type: kubernetes-proxy
    host: localhost
    port: 10249
    extraDimensions:
      metric_source: kubernetes-proxy

The OpenTelemetry Collector has a Kubernetes observer (k8sobserver) that can be implemented as an extension to discover networked endpoints, such as a Kubernetes pod. Using this observer assumes that the OpenTelemetry Collector is deployed in host monitoring (agent) mode, where it is running on each individual node or host instance.

To use the observer, you must create a receiver creator instance with an associated rule. For example:

extensions:
  # Configures the Kubernetes observer to watch for pod start and stop events.
  k8s_observer:
  host_observer:

receivers:
  receiver_creator/1:
    # Name of the extensions to watch for endpoints to start and stop.
    watch_observers: [k8s_observer]
    receivers:
      smartagent/kubernetes-kubeproxy:
        rule: type == "pod" && name matches "kube-proxy"
        config:
          type: kubernetes-proxy
          port: 10249
          extraDimensions:
            metric_source: kubernetes-proxy

      prometheus_simple:
        # Configure prometheus scraping if standard prometheus annotations are set on the pod.
        rule: type == "pod" && annotations["prometheus.io/scrape"] == "true"
        config:
          metrics_path: '`"prometheus.io/path" in annotations ? annotations["prometheus.io/path"] : "/metrics"`'
          endpoint: '`endpoint`:`"prometheus.io/port" in annotations ? annotations["prometheus.io/port"] : 9090`'

      redis/1:
        # If this rule matches, an instance of this receiver is started.
        rule: type == "port" && port == 6379
        config:
          # Static receiver-specific config.
          password: secret
          # Dynamic configuration value.
          collection_interval: `pod.annotations["collection_interval"]`
        resource_attributes:
          # Dynamic configuration value.
          service.name: `pod.labels["service_name"]`

      redis/2:
        # Set a resource attribute based on endpoint value.
        rule: type == "port" && port == 6379
        resource_attributes:
          # Dynamic value.
          app: `pod.labels["app"]`
          # Static value.
          source: redis

  receiver_creator/2:
    # Name of the extensions to watch for endpoints to start and stop.
    watch_observers: [host_observer]
    receivers:
      redis/on_host:
        # If this rule matches, an instance of this receiver is started.
        rule: type == "port" && port == 6379 && is_ipv6 == true
        resource_attributes:
          service.name: redis_on_host

processors:
  exampleprocessor:

exporters:
  exampleexporter:

service:
  pipelines:
    metrics:
      receivers: [receiver_creator/1, receiver_creator/2]
      processors: [exampleprocessor]
      exporters: [exampleexporter]
  extensions: [k8s_observer, host_observer]

See Receiver creator for more information.

Configuration settings

| Config option | Required | Type | Description |
| --- | --- | --- | --- |
| httpTimeout | no | int64 | HTTP timeout duration for both reads and writes. This can be a duration string that is accepted by https://golang.org/pkg/time/#ParseDuration. Default is 10s. |
| username | no | string | Basic Auth username to use on each request, if any. |
| password | no | string | Basic Auth password to use on each request, if any. |
| useHTTPS | no | bool | If true, the agent connects to the server using HTTPS instead of plain HTTP. Default is false. |
| httpHeaders | no | map of strings | A map of HTTP header names to values. Comma-separated multiple values for the same message-header are supported. |
| skipVerify | no | bool | If useHTTPS is true and this option is also true, the exporter's TLS certificate is not verified. Default is false. |
| sniServerName | no | string | If useHTTPS is true and skipVerify is true, sniServerName is used to verify the hostname on the returned certificates. It is also included in the client's handshake to support virtual hosting, unless it is an IP address. |
| caCertPath | no | string | Path to the CA cert that has signed the TLS cert, unnecessary if skipVerify is set to false. |
| clientCertPath | no | string | Path to the client TLS cert to use for TLS required connections. |
| clientKeyPath | no | string | Path to the client TLS key to use for TLS required connections. |
| host | yes | string | Host of the exporter. |
| port | yes | integer | Port of the exporter. |
| useServiceAccount | no | bool | Use the pod service account to authenticate. Default is false. |
| metricPath | no | string | Path to the metrics endpoint on the exporter server. Default is /metrics. |
| sendAllMetrics | no | bool | Send all the metrics that come out of the Prometheus exporter without any filtering. This option has no effect when using the prometheus exporter monitor directly, since there is no built-in filtering; it only applies when embedding it in other monitors. Default is false. |
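As an illustration of how several of these options combine, the following sketch assumes kube-proxy is reachable over HTTPS with a self-signed certificate and that the Collector runs in a pod with a service account; all values are placeholders to adapt to your environment:

receivers:
  smartagent/kubernetes-proxy:
    type: kubernetes-proxy
    host: localhost
    port: 10249
    # Placeholder values; adjust for your environment.
    httpTimeout: 10s
    useHTTPS: true
    skipVerify: true
    useServiceAccount: true
    metricPath: /metrics
    extraDimensions:
      metric_source: kubernetes-proxy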

Metrics

The following metrics are available for this integration:

Notes

  • To learn more about the metric types available in Splunk Observability Cloud, see Metric types.

  • In host-based subscription plans, default metrics are those metrics included in host-based subscriptions in Splunk Observability Cloud, such as host, container, or bundled metrics. Custom metrics are not provided by default and might be subject to charges. See Metric categories for more information.

  • In MTS-based subscription plans, all metrics are custom.

  • To add additional metrics, see how to configure extraMetrics in Add additional metrics.

Non-default metrics (version 4.7.0+)

To emit metrics that are not emitted by default, add them to the generic monitor-level extraMetrics config option. Metrics derived from specific configuration options that do not appear in the list of metrics above do not need to be added to extraMetrics.
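A minimal sketch of adding non-default metrics through extraMetrics follows; the metric names shown are placeholders, not a verified list of kube-proxy metrics:

receivers:
  smartagent/kubernetes-proxy:
    type: kubernetes-proxy
    host: localhost
    port: 10249
    # Placeholder metric names; replace with the non-default metrics you need.
    extraMetrics:
      - rest_client_requests_total
      - kubeproxy_sync_proxy_rules_duration_seconds_count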

To see a list of the metrics that will be emitted, run agent-status monitors after configuring this monitor in a running agent instance.

Troubleshooting

If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.

Available to Splunk Observability Cloud customers

Available to prospective customers and free trial users

  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.

This page was last updated on Dec 09, 2024.