
Zero-code instrumentation for back-end applications in Kubernetes

The Splunk Distribution of the OpenTelemetry Collector uses automatic discovery with zero-code instrumentation to automatically detect back-end applications running in your Kubernetes environment. By deploying the Collector with zero-code instrumentation, you can instrument applications and send data to Splunk Observability Cloud without editing your application’s code or configuration files.

Zero-code instrumentation for Kubernetes can detect and configure the following applications and language runtimes:

  • Java

  • .NET

  • Node.js

How zero-code instrumentation for Kubernetes works

Zero-code instrumentation for Kubernetes operates as a Kubernetes DaemonSet that you install with Helm. Using Helm, you can specify which language runtimes you want zero-code instrumentation to find. After installation, Helm deploys a set of Kubernetes pods in your cluster, which includes the Splunk Distribution of OpenTelemetry Collector, the Kubernetes operator, and other supporting resources.

The Collector and Kubernetes operator listen for requests to your application and gather telemetry data upon detecting activity in your application. The Collector then sends this data to Splunk Application Performance Monitoring (APM).

Get started

To install zero-code instrumentation for Kubernetes, complete the following steps:

  1. Deploy the Helm Chart with the Kubernetes Operator

  2. Verify all the OpenTelemetry resources are deployed successfully

  3. Set annotations to instrument applications

  4. View results at Splunk Observability APM

Requirements

You need the following components to use zero-code instrumentation for back-end Kubernetes applications:

Make sure you’ve also installed the components specific to your language runtime:

Java 8 or higher and supported libraries. See Java agent compatibility and requirements for more information.

Deploy the Helm Chart with the Kubernetes Operator

To deploy the Helm Chart, create a file called values.yaml. In this file, you can define the settings to activate or deactivate when installing the OpenTelemetry Collector with the Helm Chart.

Populate values.yaml with the following fields and values:

clusterName: <your_cluster_name>

# Your Splunk Observability Cloud realm and access token
splunkObservability:
  realm: <splunk_realm>
  accessToken: <splunk_access_token>

# Activates the OpenTelemetry Kubernetes Operator
operator:
  enabled: true

You might need to populate the file with additional values depending on your environment. See Add certificates and Set the deployment environment for more information.

Add certificates

The Operator requires certain TLS certificates to work. Use the following command to check whether a certificate manager is available:

# Check if cert-manager is already installed, don't deploy a second cert-manager.
kubectl get pods -l app=cert-manager --all-namespaces

If a certificate manager isn’t available in the cluster, add certmanager.enabled=true to your values.yaml file. For example:

clusterName: my-cluster

splunkObservability:
  realm: <splunk_realm>
  accessToken: <splunk_access_token>

certmanager:
  enabled: true
operator:
  enabled: true

Set the deployment environment

To properly ingest trace telemetry data, the deployment.environment attribute must be present in the exported traces. You can set this attribute using the following methods:

  β€’ Through the environment option in the values.yaml file.
    Scope: Applies the attribute to all telemetry data (metrics, logs, traces) exported through the Collector.
    Implementation: The chart sets an attributes processor that adds deployment.environment=prd to all telemetry data the Collector processes.

  β€’ Through the instrumentation.env or instrumentation.{instrumentation_library}.env settings in the values.yaml file.
    Scope: Sets deployment.environment either for all auto-instrumented applications collectively or per auto-instrumentation language.
    Implementation: Add the OTEL_RESOURCE_ATTRIBUTES environment variable with the value deployment.environment=prd.

  β€’ Through your Kubernetes application deployment, daemonset, or pod specification.
    Scope: Sets deployment.environment at the level of individual deployments, daemonsets, or pods.
    Implementation: Add the OTEL_RESOURCE_ATTRIBUTES environment variable with the value deployment.environment=prd.

The following examples show how to set the attribute using each method:

Set the environment option in the values.yaml file. This adds the deployment.environment attribute to all telemetry data the Collector receives, including data from automatically instrumented pods.

  clusterName: my-cluster

  splunkObservability:
    realm: <splunk_realm>
    accessToken: <splunk_access_token>

  environment: prd

  certmanager:
    enabled: true
  operator:
    enabled: true
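
Set the OTEL_RESOURCE_ATTRIBUTES environment variable through the instrumentation configuration in the values.yaml file. The following snippet is a minimal sketch based on the instrumentation.env setting described in the preceding list; it applies the attribute to all auto-instrumented applications:

  operator:
    enabled: true

  instrumentation:
    env:
      - name: OTEL_RESOURCE_ATTRIBUTES
        value: deployment.environment=prd

Set the OTEL_RESOURCE_ATTRIBUTES environment variable directly in your Kubernetes deployment, daemonset, or pod specification. The container name and image in this sketch are placeholders:

  spec:
    template:
      spec:
        containers:
        - name: my-java-app
          image: my-java-app:latest
          env:
          - name: OTEL_RESOURCE_ATTRIBUTES
            value: deployment.environment=prd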

Deploy the Helm Chart

After configuring values.yaml, use the following command to deploy the Helm Chart:

helm install splunk-otel-collector -f ./values.yaml splunk-otel-collector-chart/splunk-otel-collector
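
If the helm install command fails because the splunk-otel-collector-chart repository isn’t configured in your Helm client, add and update the repository, then rerun the command:

# Add the Splunk OpenTelemetry Collector chart repository and refresh the local index
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update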

You can change the name of the Collector instance and the namespace in which you install the Collector.

For example, to change the name of the Collector instance to otel-collector and install it in the o11y namespace, use the following command:

helm install otel-collector -f ./values.yaml splunk-otel-collector-chart/splunk-otel-collector --namespace o11y

Verify all the OpenTelemetry resources are deployed successfully

Resources include the Collector, the Operator, webhook, and instrumentation. Run the following commands to verify the resources are deployed correctly.

The pods running in the collector namespace must include the following:

kubectl get pods
# NAMESPACE     NAME                                                            READY   STATUS
# monitoring    splunk-otel-collector-agent-lfthw                               2/2     Running
# monitoring    splunk-otel-collector-cert-manager-6b9fb8b95f-2lmv4             1/1     Running
# monitoring    splunk-otel-collector-cert-manager-cainjector-6d65b6d4c-khcrc   1/1     Running
# monitoring    splunk-otel-collector-cert-manager-webhook-87b7ffffc-xp4sr      1/1     Running
# monitoring    splunk-otel-collector-k8s-cluster-receiver-856f5fbcf9-pqkwg     1/1     Running
# monitoring    splunk-otel-collector-opentelemetry-operator-56c4ddb4db-zcjgh   2/2     Running

The webhooks in the collector namespace must include the following:

kubectl get mutatingwebhookconfiguration.admissionregistration.k8s.io
# NAME                                      WEBHOOKS   AGE
# splunk-otel-collector-cert-manager-webhook              1          14m
# splunk-otel-collector-opentelemetry-operator-mutation   3          14m

The instrumentation in the collector namespace must include the following:

kubectl get otelinst
# NAME                          AGE   ENDPOINT
# splunk-instrumentation        3m   http://$(SPLUNK_OTEL_AGENT):4317
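
To inspect the full configuration of the instrumentation resource, you can also describe it. The resource name comes from the example output above; replace the namespace with the one you deployed the chart into:

kubectl describe otelinst splunk-instrumentation -n <collector_namespace>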

Set annotations to instrument applications

If the related Kubernetes object (deployment, daemonset, or pod) is not deployed, add the annotation for your language runtime to the application object YAML.

The annotation you set depends on the language runtime you’re using. You can set multiple annotations in the same Kubernetes object. See the following available annotations:

For Java applications, add the instrumentation.opentelemetry.io/inject-java annotation to the application object YAML. For .NET applications, use instrumentation.opentelemetry.io/inject-dotnet, and for Node.js applications, use instrumentation.opentelemetry.io/inject-nodejs.

For example, given the following deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-java-app
  namespace: monitoring
spec:
  template:
    spec:
      containers:
      - name: my-java-app
        image: my-java-app:latest

Activate zero-code instrumentation by adding instrumentation.opentelemetry.io/inject-java: "true" to the spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-java-app
  namespace: monitoring
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
    spec:
      containers:
      - name: my-java-app
        image: my-java-app:latest

Applying annotations in a different namespace

If the current namespace isn’t monitoring, change the annotation to specify the namespace in which you installed the OpenTelemetry Collector.

For example, if the current namespace is <my-namespace> and you installed the Collector in monitoring, set the annotation to "instrumentation.opentelemetry.io/inject-<application_language>": "monitoring/splunk-otel-collector":

kubectl patch deployment <my-deployment> -n <my-namespace> -p '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-<application_language>":"monitoring/splunk-otel-collector"}}}}}'

Replace <application_language> with the language of the application you want to discover.

Instrument applications in multi-container pods

By default, zero-code instrumentation instruments the first container in the Kubernetes pod spec. You can specify multiple containers to instrument by adding an annotation.

The following example instruments Java applications running in the myapp and myapp2 containers:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-with-multiple-containers
spec:
  selector:
    matchLabels:
      app: my-pod-with-multiple-containers
  replicas: 1
  template:
    metadata:
      labels:
        app: my-pod-with-multiple-containers
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "myapp,myapp2"

You can also instrument multiple containers with specific languages. To do so, specify which languages and containers to instrument by using the instrumentation.opentelemetry.io/<language>-container-names annotation. The following example instruments Java applications in myapp and myapp2, and Node.js applications in myapp3:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-with-multi-containers-multi-instrumentations
spec:
  selector:
    matchLabels:
      app: my-pod-with-multi-containers-multi-instrumentations
  replicas: 1
  template:
    metadata:
      labels:
        app: my-pod-with-multi-containers-multi-instrumentations
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/java-container-names: "myapp,myapp2"
        instrumentation.opentelemetry.io/inject-nodejs: "true"
        instrumentation.opentelemetry.io/nodejs-container-names: "myapp3"

Deactivate zero-code instrumentation

To deactivate zero-code instrumentation, remove the annotation by using the following command:

kubectl patch deployment <my-deployment> -n <my-namespace> --type=json -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/instrumentation.opentelemetry.io~1inject-<application_language>"}]'

Replace <application_language> with the language of the application for which you want to deactivate instrumentation.

Verify instrumentation

To verify that the instrumentation was successful, run the following command on an individual pod:

kubectl describe pod <application_pod_name> -n <namespace>

The instrumented pod contains an initContainer named opentelemetry-auto-instrumentation, and the target application container has several OTEL_* environment variables, similar to those in the following demo output:

# Name:             opentelemetry-demo-frontend-57488c7b9c-4qbfb
# Namespace:        otel-demo
# Annotations:      instrumentation.opentelemetry.io/inject-java: true
# Status:           Running
# Init Containers:
#   opentelemetry-auto-instrumentation:
#     Command:
#       cp
#       -a
#       /autoinstrumentation/.
#       /otel-auto-instrumentation/
#     State:          Terminated
#       Reason:       Completed
#       Exit Code:    0
# Containers:
#   frontend:
#     State:          Running
#     Ready:          True
#     Environment:
#       FRONTEND_PORT:                              8080
#       FRONTEND_ADDR:                              :8080
#       AD_SERVICE_ADDR:                            opentelemetry-demo-adservice:8080
#       CART_SERVICE_ADDR:                          opentelemetry-demo-cartservice:8080
#       CHECKOUT_SERVICE_ADDR:                      opentelemetry-demo-checkoutservice:8080
#       CURRENCY_SERVICE_ADDR:                      opentelemetry-demo-currencyservice:8080
#       PRODUCT_CATALOG_SERVICE_ADDR:               opentelemetry-demo-productcatalogservice:8080
#       RECOMMENDATION_SERVICE_ADDR:                opentelemetry-demo-recommendationservice:8080
#       SHIPPING_SERVICE_ADDR:                      opentelemetry-demo-shippingservice:8080
#       WEB_OTEL_SERVICE_NAME:                      frontend-web
#       PUBLIC_OTEL_EXPORTER_OTLP_TRACES_ENDPOINT:  http://localhost:8080/otlp-http/v1/traces
#       NODE_OPTIONS:                                --require /otel-auto-instrumentation/autoinstrumentation.java
#       SPLUNK_OTEL_AGENT:                           (v1:status.hostIP)
#       OTEL_SERVICE_NAME:                          opentelemetry-demo-frontend
#       OTEL_EXPORTER_OTLP_ENDPOINT:                http://$(SPLUNK_OTEL_AGENT):4317
#       OTEL_RESOURCE_ATTRIBUTES_POD_NAME:          opentelemetry-demo-frontend-57488c7b9c-4qbfb (v1:metadata.name)
#       OTEL_RESOURCE_ATTRIBUTES_NODE_NAME:          (v1:spec.nodeName)
#       OTEL_PROPAGATORS:                           tracecontext,baggage,b3
#       OTEL_RESOURCE_ATTRIBUTES:                   splunk.zc.method=autoinstrumentation-java:0.41.1,k8s.container.name=frontend,k8s.deployment.name=opentelemetry-demo-frontend,k8s.namespace.name=otel-demo,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=opentelemetry-demo-frontend-57488c7b9c,service.version=1.5.0-frontend
#     Mounts:
#       /otel-auto-instrumentation from opentelemetry-auto-instrumentation (rw)
# Volumes:
#   opentelemetry-auto-instrumentation:
#     Type:        EmptyDir (a temporary directory that shares a pod's lifetime)

View results at Splunk Observability APM

Allow the Operator to do the work. The Operator intercepts and modifies the Kubernetes API requests that create and update annotated pods, instrumenting the application containers inside them. Trace and metrics data then populates the APM dashboard.

(Optional) Configure the instrumentation

You can configure the Splunk Distribution of OpenTelemetry Collector to suit your instrumentation needs. In most cases, modifying the basic configuration is enough to get started.

You can add advanced configuration, such as activating custom sampling or including custom data in the reported spans, using environment variables and system properties. To do so, use the values.yaml file and the instrumentation.sampler configuration. For more information, see the documentation and examples in GitHub.
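
For example, the following values.yaml sketch activates a trace sampler through the instrumentation.sampler setting. The sampler type and argument shown here are illustrative assumptions; check the GitHub documentation for the values your chart version supports:

operator:
  enabled: true

instrumentation:
  sampler:
    # Keep parent-based decisions and sample 25% of new traces (illustrative values)
    type: parentbased_traceidratio
    argument: "0.25"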

You can also use the methods shown in Set the deployment environment to configure your instrumentation with the OTEL_RESOURCE_ATTRIBUTES environment variable and other environment variables. For example, if you want every span to include the key-value pair build.id=feb2023_v2, set the OTEL_RESOURCE_ATTRIBUTES environment variable:

kubectl set env deployment/<my-deployment> OTEL_RESOURCE_ATTRIBUTES=build.id=feb2023_v2

See Advanced customization for automatic discovery and instrumentation in Kubernetes for more information.

Troubleshooting

If you’re having trouble setting up automatic discovery, see the following troubleshooting guidelines.

Check the logs for failures

Examine logs to make sure that the operator and cert manager are working.

  β€’ Operator:

    kubectl logs -l app.kubernetes.io/name=operator

  β€’ Cert manager:

    kubectl logs -l app=certmanager
    kubectl logs -l app=cainjector
    kubectl logs -l app=webhook

Resolve certificate manager issues

A hanging operator can indicate issues with the certificate manager.

  • Check the logs of your cert-manager pods.

  • Restart the cert-manager pods.

  • Ensure that your cluster has only one instance of cert-manager. This includes certmanager, certmanager-cainjector, and certmanager-webhook.

See the official cert manager troubleshooting guide for more information: https://cert-manager.io/docs/troubleshooting/.
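
The following commands sketch the checks described above. The deployment name and namespace are taken from the earlier example output and might differ in your cluster:

# Confirm that only one cert-manager instance runs across all namespaces
kubectl get pods -l app=cert-manager --all-namespaces

# Restart the cert-manager pods deployed by the chart (assumed deployment name and namespace)
kubectl rollout restart deployment splunk-otel-collector-cert-manager -n monitoring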

Validate certificates

Ensure that certificates are available for use. Use the following command to search for certificates:

kubectl get certificates
# NAME                                          READY   SECRET                                                           AGE
# splunk-otel-collector-operator-serving-cert   True    splunk-otel-collector-operator-controller-manager-service-cert   5m

To troubleshoot common errors that occur when instrumenting applications, see the following guides:

Learn more

This page was last updated on Dec 09, 2024.