
Kubernetes proxy 🔗

Description 🔗

The Splunk Distribution of OpenTelemetry Collector provides the kubernetes-proxy monitor type through the Splunk Observability Cloud Smart Agent Receiver.

This monitor type scrapes metrics from kube-proxy, which exposes them in Prometheus format. The monitor type queries the /metrics path by default when no path is configured, and converts Prometheus metric types to Splunk Observability Cloud metric types.
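
As a point of reference, the following minimal sketch shows the settings the monitor uses to reach that endpoint. The host and port values are assumptions based on the common kube-proxy metrics bind address, and metricPath is shown only to make the default explicit:

receivers:
  smartagent/kubernetes-proxy:
    type: kubernetes-proxy
    host: 127.0.0.1        # assumed kube-proxy metrics address on the node
    port: 10249            # assumed default kube-proxy metrics port
    metricPath: /metrics   # default path, shown here for clarity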

This monitor type is available on Kubernetes, Linux, and Windows.

Benefits 🔗

After you’ve configured the integration, you can:

  • View metrics using the built-in dashboard. For information about dashboards, see View dashboards in Observability Cloud.

  • View a data-driven visualization of the physical servers, virtual machines, AWS instances, and other resources in your environment that are visible to Infrastructure Monitoring. For information about navigators, see Splunk Infrastructure Monitoring navigators.

  • Access Metric Finder and search for metrics sent by the monitor. For information about Metric Finder, see Use the Metric Finder.

Installation 🔗

Follow these steps to deploy the integration:

  1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform.

  2. Configure the monitor, as described in the next section.

  3. Restart the Splunk Distribution of OpenTelemetry Collector.

Configuration 🔗

This monitor is available in the Smart Agent Receiver, which is part of the Splunk Distribution of OpenTelemetry Collector. The Smart Agent Receiver lets you use existing Smart Agent monitors as OpenTelemetry Collector metric receivers.

Using this monitor assumes that you have a configured environment with a functional Smart Agent release bundle on your system, which is already provided for x86_64/amd64 Splunk Distribution of OpenTelemetry Collector installation paths.

To activate this monitor in the Splunk Distribution of OpenTelemetry Collector, add the following to your configuration file:

Example YAML configuration:

receivers:
  smartagent/kubernetes-proxy:
    type: kubernetes-proxy
    ... # Additional config

To complete the monitor type activation, you must also include it in a metrics pipeline. To do this, add the monitor type to the service > pipelines > metrics > receivers section of your configuration file. For example:

service:
  pipelines:
    metrics:
      receivers: [smartagent/kubernetes-proxy]
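
The following sketch extends that pipeline into a more complete configuration. The signalfx exporter and its access_token and realm values are assumptions shown only for illustration; substitute the exporter you already use:

receivers:
  smartagent/kubernetes-proxy:
    type: kubernetes-proxy
    host: localhost
    port: 10249

exporters:
  # Assumed exporter, shown for illustration only.
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: us0

service:
  pipelines:
    metrics:
      receivers: [smartagent/kubernetes-proxy]
      exporters: [signalfx]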

Configuration settings 🔗

Config option | Required | Type | Description
--- | --- | --- | ---
httpTimeout | no | int64 | HTTP timeout duration for both reads and writes. Accepts a duration string as parsed by https://golang.org/pkg/time/#ParseDuration. Default is 10s.
username | no | string | Basic Auth username to use on each request, if any.
password | no | string | Basic Auth password to use on each request, if any.
useHTTPS | no | bool | If true, the agent connects to the server using HTTPS instead of plain HTTP. Default is false.
httpHeaders | no | map of strings | A map of HTTP header names to values. Multiple comma-separated values for the same header are supported.
skipVerify | no | bool | If useHTTPS is true and this option is also true, the exporter's TLS certificate is not verified. Default is false.
sniServerName | no | string | If useHTTPS is true and skipVerify is true, sniServerName is used to verify the hostname on the returned certificates. It is also included in the client's handshake to support virtual hosting, unless it is an IP address.
caCertPath | no | string | Path to the CA certificate that has signed the TLS certificate. Unnecessary if skipVerify is set to false.
clientCertPath | no | string | Path to the client TLS certificate to use for TLS-required connections.
clientKeyPath | no | string | Path to the client TLS key to use for TLS-required connections.
host | yes | string | Host of the exporter.
port | yes | integer | Port of the exporter.
useServiceAccount | no | bool | Use the pod service account to authenticate. Default is false.
metricPath | no | string | Path to the metrics endpoint on the exporter server. Default is /metrics.
sendAllMetrics | no | bool | Send all metrics that come out of the Prometheus exporter without any filtering. This option has no effect when using the prometheus exporter monitor directly, since there is no built-in filtering; it applies only when the exporter is embedded in other monitors. Default is false.

Example configuration 🔗

The following is an example YAML configuration:

receivers:
  smartagent/kubernetes-proxy:
    type: kubernetes-proxy
    host: localhost
    port: 10249
    extraDimensions:
      metric_source: kubernetes-proxy
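
If kube-proxy serves its metrics endpoint over HTTPS or requires authentication, the TLS and service account options from the configuration settings table can be combined as in the following sketch. The certificate path is a placeholder, not a value this integration requires:

receivers:
  smartagent/kubernetes-proxy:
    type: kubernetes-proxy
    host: localhost
    port: 10249
    useHTTPS: true
    skipVerify: false
    caCertPath: /path/to/ca.crt   # placeholder path to the CA certificate
    useServiceAccount: true       # authenticate with the pod service account
    extraDimensions:
      metric_source: kubernetes-proxy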

The OpenTelemetry Collector has a Kubernetes observer (k8sobserver) that can be used as an extension to discover networked endpoints, such as Kubernetes pods. Using this observer assumes that the OpenTelemetry Collector is deployed in agent mode, where it runs on each individual node or host instance.
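
The following sketch shows one way the observer extension itself can be configured to limit discovery to pods on the local node. The auth_type, node, and observe_pods settings, and the K8S_NODE_NAME environment variable (typically injected through the Kubernetes downward API), are assumptions shown for illustration:

extensions:
  k8s_observer:
    auth_type: serviceAccount    # authenticate with the collector pod's service account
    node: ${K8S_NODE_NAME}       # assumed env var identifying the local node
    observe_pods: true           # watch pod endpoints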

To use the observer, you must create a receiver creator instance with an associated rule. For example:

extensions:
  # Configures the Kubernetes observer to watch for pod start and stop events.
  k8s_observer:
  host_observer:

receivers:
  receiver_creator/1:
    # Name of the extensions to watch for endpoints to start and stop.
    watch_observers: [k8s_observer]
    receivers:
      smartagent/kubernetes-kubeproxy:
        rule: type == "pod" && name matches "kube-proxy"
        config:
          type: kubernetes-proxy
          port: 10249
          extraDimensions:
            metric_source: kubernetes-proxy

      prometheus_simple:
        # Configure prometheus scraping if standard prometheus annotations are set on the pod.
        rule: type == "pod" && annotations["prometheus.io/scrape"] == "true"
        config:
          metrics_path: '`"prometheus.io/path" in annotations ? annotations["prometheus.io/path"] : "/metrics"`'
          endpoint: '`endpoint`:`"prometheus.io/port" in annotations ? annotations["prometheus.io/port"] : 9090`'

      redis/1:
        # If this rule matches, an instance of this receiver will be started.
        rule: type == "port" && port == 6379
        config:
          # Static receiver-specific config.
          password: secret
          # Dynamic configuration value.
          collection_interval: `pod.annotations["collection_interval"]`
        resource_attributes:
          # Dynamic configuration value.
          service.name: `pod.labels["service_name"]`

      redis/2:
        # Set a resource attribute based on endpoint value.
        rule: type == "port" && port == 6379
        resource_attributes:
          # Dynamic value.
          app: `pod.labels["app"]`
          # Static value.
          source: redis

  receiver_creator/2:
    # Name of the extensions to watch for endpoints to start and stop.
    watch_observers: [host_observer]
    receivers:
      redis/on_host:
        # If this rule matches, an instance of this receiver is started.
        rule: type == "port" && port == 6379 && is_ipv6 == true
        resource_attributes:
          service.name: redis_on_host

processors:
  exampleprocessor:

exporters:
  exampleexporter:

service:
  pipelines:
    metrics:
      receivers: [receiver_creator/1, receiver_creator/2]
      processors: [exampleprocessor]
      exporters: [exampleexporter]
  extensions: [k8s_observer, host_observer]

Metrics 🔗

The following metrics are available for this integration:

Non-default metrics (version 4.7.0+) 🔗

To emit metrics that are not emitted by default, add them to the monitor-level extraMetrics configuration option. Metrics that are derived from specific configuration options and do not appear in the list of metrics do not need to be added to extraMetrics.
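
A minimal sketch of adding non-default metrics with extraMetrics follows; the metric names are hypothetical examples used only for illustration:

receivers:
  smartagent/kubernetes-proxy:
    type: kubernetes-proxy
    host: localhost
    port: 10249
    extraMetrics:
      - rest_client_requests_total       # hypothetical metric name
      - kubeproxy_sync_proxy_rules_*     # glob patterns can match groups of metrics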

To see a list of metrics that will be emitted, run agent-status monitors after configuring this monitor in a running agent instance.

Troubleshooting 🔗

If you are not able to see your data in Splunk Observability Cloud: