Zero-code instrumentation for back-end applications in Kubernetes
The Splunk Distribution of the OpenTelemetry Collector uses automatic discovery with zero-code instrumentation to automatically detect back-end applications running in your Kubernetes environment. By deploying the Collector with zero-code instrumentation, you can instrument applications and send data to Splunk Observability Cloud without editing your application's code or configuration files.
Zero-code instrumentation for Kubernetes can detect and configure the following applications and language runtimes:
Java
.NET
Node.js
How zero-code instrumentation for Kubernetes works
Zero-code instrumentation for Kubernetes operates as a Kubernetes DaemonSet that you install with Helm. Using Helm, you can specify which language runtimes you want zero-code instrumentation to find. After installation, Helm deploys a set of Kubernetes pods in your cluster, which includes the Splunk Distribution of OpenTelemetry Collector, the Kubernetes operator, and other supporting resources.
The Collector and Kubernetes operator listen for requests to your application and gather telemetry data upon detecting activity in your application. The Collector then sends this data to Splunk Application Performance Monitoring (APM).
Get started
To install zero-code instrumentation for Kubernetes, complete the following steps:
1. Check the requirements.
2. Deploy the Helm Chart with the Kubernetes Operator.
3. Verify all the OpenTelemetry resources are deployed successfully.
4. Set annotations to instrument applications.
5. View results at Splunk Observability APM.
Requirements
You need the following components to use zero-code instrumentation for back-end Kubernetes applications:
Helm version 3 or higher.
Administrator access to your Kubernetes cluster and familiarity with your Kubernetes configuration.
Your Splunk Observability Cloud realm and access token with ingest scope. For more information, see Create and manage organization access tokens using Splunk Observability Cloud.
Make sure you've also installed the components specific to your language runtime:
Java 8 or higher and supported libraries. See Java agent compatibility and requirements for more information.
.NET version 6.0 or higher and supported .NET application libraries. For a list of supported libraries, see Supported libraries.
x86 or AMD64 (x86-64) architecture. ARM architectures aren't supported.
Node.js version 14 or higher and supported libraries. See Splunk OTel JS compatibility and requirements for more information.
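Before you install, you can optionally confirm that your tooling and permissions meet these requirements. The following commands are a minimal sketch of such a check; the permission query is only an example and isn't part of the official installation steps:
# Confirm that Helm 3 or higher is installed
helm version --short
# Confirm that you can reach the cluster and hold cluster-admin-level permissions
kubectl version
kubectl auth can-i create clusterrolebindings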
Deploy the Helm Chart with the Kubernetes Operator
To deploy the Helm Chart, create a file called values.yaml. In this file, you can define the settings to activate or deactivate when installing the OpenTelemetry Collector with the Helm Chart.
Populate values.yaml with the following fields and values:
clusterName: <your_cluster_name>

# Your Splunk Observability Cloud realm and access token
splunkObservability:
  realm: <splunk_realm>
  accessToken: <splunk_access_token>

# Activates the OpenTelemetry Kubernetes Operator
operator:
  enabled: true
You might need to populate the file with additional values depending on your environment. See Add certificates and Set the deployment environment for more information.
Add certificates
The Operator requires certain TLS certificates to work. Use the following command to check whether a certificate manager is available:
# Check if cert-manager is already installed, don't deploy a second cert-manager.
kubectl get pods -l app=cert-manager --all-namespaces
If a certificate manager isn't available in the cluster, you'll need to add certmanager.enabled=true to your values.yaml file. For example:
clusterName: my-cluster
splunkObservability:
  realm: <splunk_realm>
  accessToken: <splunk_access_token>
certmanager:
  enabled: true
operator:
  enabled: true
Set the deployment environment
To properly ingest trace telemetry data, the deployment.environment attribute must be present in the exported traces. The following table shows the available methods for setting this attribute:

| Method | Scope | Implementation |
|---|---|---|
| Through the values.yaml file environment option | Applies the attribute to all telemetry data (metrics, logs, traces) exported through the Collector. | The chart sets an attribute processor that adds deployment.environment to all data the Collector exports. |
| Through the values.yaml file and the instrumentation spec | Allows you to set deployment.environment for all automatically instrumented applications, or separately for each language runtime. | Add the OTEL_RESOURCE_ATTRIBUTES environment variable to the instrumentation spec and set it to deployment.environment=<value>. |
| Through your Kubernetes application deployment, daemonset, or pod specification | Allows you to set deployment.environment for individual deployments, daemonsets, or pods. | Employ the OTEL_RESOURCE_ATTRIBUTES environment variable in the container spec, setting it to deployment.environment=<value>. |
The following examples show how to set the attribute using each method:
Set the environment option in the values.yaml file. This adds the deployment.environment attribute to all telemetry data the Collector receives, including data from automatically instrumented pods.
clusterName: my-cluster
splunkObservability:
  realm: <splunk_realm>
  accessToken: <splunk_access_token>
  environment: prd
certmanager:
  enabled: true
operator:
  enabled: true
Add the environment variable to the values.yaml instrumentation spec as shown in the following example code. This method adds the deployment.environment attribute to all telemetry data from automatically instrumented pods.
operator:
  enabled: true
instrumentation:
  env:
    - name: OTEL_RESOURCE_ATTRIBUTES
      value: "deployment.environment=prd"
  java:
    env:
      - name: OTEL_RESOURCE_ATTRIBUTES
        value: "deployment.environment=prd-canary-java"
Update the application deployment YAML file. This method adds the deployment.environment attribute to all telemetry data from pods that contain the specified environment variable.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-java-app
spec:
  template:
    spec:
      containers:
        - name: my-java-app
          image: my-java-app:latest
          env:
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: "deployment.environment=prd"
Update the environment variable OTEL_RESOURCE_ATTRIBUTES using kubectl set env. For example:
kubectl set env deployment/<my-deployment> OTEL_RESOURCE_ATTRIBUTES=deployment.environment=prd
Deploy the Helm Chart
After configuring values.yaml, use the following command to deploy the Helm Chart:
helm install splunk-otel-collector -f ./values.yaml splunk-otel-collector-chart/splunk-otel-collector
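If Helm reports that it can't find the chart, you might need to add the chart repository first. This is a sketch; the repository name and URL are taken from the splunk-otel-collector-chart project and aren't part of the steps above:
# Add and refresh the Splunk OpenTelemetry Collector chart repository
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update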
You can change the name of the Collector instance and the namespace in which you install the Collector.
For example, to change the name of the Collector instance to otel-collector and install it in the o11y namespace, use the following command:
helm install otel-collector -f ./values.yaml splunk-otel-collector-chart/splunk-otel-collector --namespace o11y
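To confirm that the release deployed, you can list the Helm releases in the target namespace. This minimal check reuses the release name and namespace from the previous example:
# Show the status of the release installed in the o11y namespace
helm list -n o11y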
Verify all the OpenTelemetry resources are deployed successfully
Resources include the Collector, the Operator, the webhook, and the instrumentation. Run the following commands to verify that the resources are deployed correctly.
The pods running in the collector namespace must include the following:
kubectl get pods
# NAMESPACE NAME READY STATUS
# monitoring splunk-otel-collector-agent-lfthw 2/2 Running
# monitoring splunk-otel-collector-cert-manager-6b9fb8b95f-2lmv4 1/1 Running
# monitoring splunk-otel-collector-cert-manager-cainjector-6d65b6d4c-khcrc 1/1 Running
# monitoring splunk-otel-collector-cert-manager-webhook-87b7ffffc-xp4sr 1/1 Running
# monitoring splunk-otel-collector-k8s-cluster-receiver-856f5fbcf9-pqkwg 1/1 Running
# monitoring splunk-otel-collector-opentelemetry-operator-56c4ddb4db-zcjgh 2/2 Running
The webhooks in the collector namespace must include the following:
kubectl get mutatingwebhookconfiguration.admissionregistration.k8s.io
# NAME WEBHOOKS AGE
# splunk-otel-collector-cert-manager-webhook 1 14m
# splunk-otel-collector-opentelemetry-operator-mutation 3 14m
The instrumentation in the collector namespace must include the following:
kubectl get otelinst
# NAME AGE ENDPOINT
# splunk-instrumentation 3m http://$(SPLUNK_OTEL_AGENT):4317
Set annotations to instrument applications
If the related Kubernetes object (deployment, daemonset, or pod) is not deployed yet, add the appropriate instrumentation.opentelemetry.io annotation to the application object YAML before you deploy it.
The annotation you set depends on the language runtime you're using. You can set multiple annotations in the same Kubernetes object. See the following available annotations:
Add the instrumentation.opentelemetry.io/inject-java
annotation to the application object YAML.
For example, given the following deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-java-app
  namespace: monitoring
spec:
  template:
    spec:
      containers:
        - name: my-java-app
          image: my-java-app:latest
Activate zero-code instrumentation by adding instrumentation.opentelemetry.io/inject-java: "true" to the spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-java-app
  namespace: monitoring
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
    spec:
      containers:
        - name: my-java-app
          image: my-java-app:latest
Add the instrumentation.opentelemetry.io/inject-dotnet
annotation to the application object YAML.
Depending on your environment and your runtime identifier (RID), you'll need to add another annotation. To learn how to find your RID, see Find the runtime identifier for your .NET applications.
See the following table for details:
| RID | Annotation | Notes |
|---|---|---|
| linux-x64 | instrumentation.opentelemetry.io/otel-dotnet-auto-runtime: "linux-x64" | This is the default value and you can omit it. |
| linux-musl-x64 | instrumentation.opentelemetry.io/otel-dotnet-auto-runtime: "linux-musl-x64" | Use this annotation for applications running in environments based on the musl library. |
Given the following deployment YAML on a linux-x64
runtime environment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dotnet-app
  namespace: monitoring
spec:
  template:
    spec:
      containers:
        - name: my-dotnet-app
          image: my-dotnet-app:latest
Activate zero-code instrumentation by adding instrumentation.opentelemetry.io/otel-dotnet-auto-runtime: "linux-x64" and instrumentation.opentelemetry.io/inject-dotnet: "monitoring/splunk-otel-collector" to the spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dotnet-app
  namespace: monitoring
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/otel-dotnet-auto-runtime: "linux-x64"
        instrumentation.opentelemetry.io/inject-dotnet: "monitoring/splunk-otel-collector"
    spec:
      containers:
        - name: my-dotnet-app
          image: my-dotnet-app:latest
Given the following deployment YAML on a linux-musl-x64
runtime environment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dotnet-app
  namespace: monitoring
spec:
  template:
    spec:
      containers:
        - name: my-dotnet-app
          image: my-dotnet-app:latest
Activate zero-code instrumentation by adding instrumentation.opentelemetry.io/otel-dotnet-auto-runtime: "linux-musl-x64" and instrumentation.opentelemetry.io/inject-dotnet: "monitoring/splunk-otel-collector" to the spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dotnet-app
  namespace: monitoring
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/otel-dotnet-auto-runtime: "linux-musl-x64"
        instrumentation.opentelemetry.io/inject-dotnet: "monitoring/splunk-otel-collector"
    spec:
      containers:
        - name: my-dotnet-app
          image: my-dotnet-app:latest
Add the instrumentation.opentelemetry.io/inject-nodejs
annotation to the application object YAML.
For example, given the following deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nodejs-app
  namespace: monitoring
spec:
  template:
    spec:
      containers:
        - name: my-nodejs-app
          image: my-nodejs-app:latest
Activate zero-code instrumentation by adding instrumentation.opentelemetry.io/inject-nodejs: "true" to the spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nodejs-app
  namespace: monitoring
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-nodejs: "true"
    spec:
      containers:
        - name: my-nodejs-app
          image: my-nodejs-app:latest
Applying annotations in a different namespace
If the current namespace isn't monitoring, change the annotation to specify the namespace in which you installed the OpenTelemetry Collector.
For example, if the current namespace is <my-namespace> and you installed the Collector in monitoring, set the annotation to "instrumentation.opentelemetry.io/inject-<application_language>": "monitoring/splunk-otel-collector":
kubectl patch deployment <my-deployment> -n <my-namespace> -p '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-<application_language>":"monitoring/splunk-otel-collector"}}}}}'
Replace <application_language>
with the language of the application you want to discover.
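For example, the following command applies the Java annotation to a hypothetical deployment named my-java-app in the my-apps namespace, pointing it at the Collector installed in monitoring:
kubectl patch deployment my-java-app -n my-apps -p '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"monitoring/splunk-otel-collector"}}}}}'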
Instrument applications in multi-container pods
By default, zero-code instrumentation instruments the first container in the Kubernetes pod spec. You can specify multiple containers to instrument by adding an annotation.
The following example instruments Java applications running in the myapp
and myapp2
containers:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-with-multiple-containers
spec:
  selector:
    matchLabels:
      app: my-pod-with-multiple-containers
  replicas: 1
  template:
    metadata:
      labels:
        app: my-pod-with-multiple-containers
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "myapp,myapp2"
You can also instrument multiple containers with specific languages. To do so, specify which languages and containers to instrument by using the instrumentation.opentelemetry.io/<language>-container-names annotation. The following example instruments Java applications in myapp and myapp2, and Node.js applications in myapp3:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-with-multi-containers-multi-instrumentations
spec:
  selector:
    matchLabels:
      app: my-pod-with-multi-containers-multi-instrumentations
  replicas: 1
  template:
    metadata:
      labels:
        app: my-pod-with-multi-containers-multi-instrumentations
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/java-container-names: "myapp,myapp2"
        instrumentation.opentelemetry.io/inject-nodejs: "true"
        instrumentation.opentelemetry.io/nodejs-container-names: "myapp3"
Deactivate zero-code instrumentation
To deactivate zero-code instrumentation, remove the annotation using the following command:
kubectl patch deployment <my-deployment> -n <my-namespace> --type=json -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/instrumentation.opentelemetry.io~1inject-<application_language>"}]'
Replace <application_language>
with the language of the application for which you want to deactivate instrumentation.
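For example, the following command removes the Java annotation from a hypothetical deployment named my-java-app in the my-apps namespace. Note that the / in the annotation key is escaped as ~1 in the JSON Patch path:
kubectl patch deployment my-java-app -n my-apps --type=json -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/instrumentation.opentelemetry.io~1inject-java"}]'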
Verify instrumentation
To verify that the instrumentation was successful, run the following command on an individual pod:
kubectl describe pod <application_pod_name> -n <namespace>
The instrumented pod contains an initContainer named opentelemetry-auto-instrumentation, and the target application container should have several OTEL_* environment variables, similar to those in the following demo output:
# Name: opentelemetry-demo-frontend-57488c7b9c-4qbfb
# Namespace: otel-demo
# Annotations: instrumentation.opentelemetry.io/inject-java: true
# Status: Running
# Init Containers:
# opentelemetry-auto-instrumentation:
# Command:
# cp
# -a
# /autoinstrumentation/.
# /otel-auto-instrumentation/
# State: Terminated
# Reason: Completed
# Exit Code: 0
# Containers:
# frontend:
# State: Running
# Ready: True
# Environment:
# FRONTEND_PORT: 8080
# FRONTEND_ADDR: :8080
# AD_SERVICE_ADDR: opentelemetry-demo-adservice:8080
# CART_SERVICE_ADDR: opentelemetry-demo-cartservice:8080
# CHECKOUT_SERVICE_ADDR: opentelemetry-demo-checkoutservice:8080
# CURRENCY_SERVICE_ADDR: opentelemetry-demo-currencyservice:8080
# PRODUCT_CATALOG_SERVICE_ADDR: opentelemetry-demo-productcatalogservice:8080
# RECOMMENDATION_SERVICE_ADDR: opentelemetry-demo-recommendationservice:8080
# SHIPPING_SERVICE_ADDR: opentelemetry-demo-shippingservice:8080
# WEB_OTEL_SERVICE_NAME: frontend-web
# PUBLIC_OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: http://localhost:8080/otlp-http/v1/traces
# NODE_OPTIONS: --require /otel-auto-instrumentation/autoinstrumentation.java
# SPLUNK_OTEL_AGENT: (v1:status.hostIP)
# OTEL_SERVICE_NAME: opentelemetry-demo-frontend
# OTEL_EXPORTER_OTLP_ENDPOINT: http://$(SPLUNK_OTEL_AGENT):4317
# OTEL_RESOURCE_ATTRIBUTES_POD_NAME: opentelemetry-demo-frontend-57488c7b9c-4qbfb (v1:metadata.name)
# OTEL_RESOURCE_ATTRIBUTES_NODE_NAME: (v1:spec.nodeName)
# OTEL_PROPAGATORS: tracecontext,baggage,b3
# OTEL_RESOURCE_ATTRIBUTES: splunk.zc.method=autoinstrumentation-java:0.41.1,k8s.container.name=frontend,k8s.deployment.name=opentelemetry-demo-frontend,k8s.namespace.name=otel-demo,k8s.node.name=$(OTEL_RESOURCE_ATTRIBUTES_NODE_NAME),k8s.pod.name=$(OTEL_RESOURCE_ATTRIBUTES_POD_NAME),k8s.replicaset.name=opentelemetry-demo-frontend-57488c7b9c,service.version=1.5.0-frontend
# Mounts:
# /otel-auto-instrumentation from opentelemetry-auto-instrumentation (rw)
# Volumes:
# opentelemetry-auto-instrumentation:
# Type: EmptyDir (a temporary directory that shares a pod's lifetime)
View results at Splunk Observability APM
Allow the Operator to do the work. The Operator intercepts and modifies the Kubernetes API requests that create and update annotated pods, the application containers inside those pods are instrumented, and trace and metrics data populates the APM dashboard.
(Optional) Configure the instrumentation
You can configure the Splunk Distribution of OpenTelemetry Collector to suit your instrumentation needs. In most cases, modifying the basic configuration is enough to get started.
You can add advanced configuration, like activating custom sampling and including custom data in the reported spans, with environment variables and system properties. To do so, use the values.yaml file and the instrumentation.sampler configuration. For more information, see the documentation in GitHub and the example in GitHub.
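As a sketch, the following values.yaml snippet sets a sampler through instrumentation.sampler. The type and argument fields and their values are assumptions based on the OpenTelemetry Operator Instrumentation spec, not Splunk-documented defaults:
operator:
  enabled: true
instrumentation:
  sampler:
    # Sample 25% of new traces; child spans follow the parent's sampling decision
    type: parentbased_traceidratio
    argument: "0.25"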
You can also use the methods shown in Set the deployment environment to configure your instrumentation with the OTEL_RESOURCE_ATTRIBUTES environment variable and other environment variables. For example, if you want every span to include the key-value pair build.id=feb2023_v2, set the OTEL_RESOURCE_ATTRIBUTES environment variable:
kubectl set env deployment/<my-deployment> OTEL_RESOURCE_ATTRIBUTES=build.id=feb2023_v2
See Advanced customization for automatic discovery and instrumentation in Kubernetes for more information.
Troubleshooting
If you're having trouble setting up automatic discovery, see the following troubleshooting guidelines.
Check the logs for failures
Examine logs to make sure that the Operator and cert-manager are working. Use kubectl logs to inspect the Operator pods and the cert-manager pods deployed by the chart.
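As a sketch, the following commands read those logs. The deployment name and label selector are inferred from the pod names shown in the verification output earlier, and the namespace assumes the chart is installed in monitoring; adjust them for your cluster:
# Logs from the OpenTelemetry operator deployed by the chart
kubectl logs deployment/splunk-otel-collector-opentelemetry-operator -n monitoring
# Logs from the cert-manager pods
kubectl logs -l app=cert-manager -n monitoring --all-containers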
Resolve certificate manager issues
A hanging operator can indicate issues with the certificate manager.
Check the logs of your cert-manager pods.
Restart the cert-manager pods.
Ensure that your cluster has only one instance of cert-manager. This includes certmanager, certmanager-cainjector, and certmanager-webhook.
See the official cert manager troubleshooting guide for more information: https://cert-manager.io/docs/troubleshooting/.
Validate certificates
Ensure that certificates are available for use. Use the following command to search for certificates:
kubectl get certificates
# NAME READY SECRET AGE
# splunk-otel-collector-operator-serving-cert True splunk-otel-collector-operator-controller-manager-service-cert 5m
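If a certificate isn't ready, kubectl describe can show why. The certificate name below comes from the example output above and might differ in your cluster:
kubectl describe certificate splunk-otel-collector-operator-serving-cert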
To troubleshoot common errors that occur when instrumenting applications, see the troubleshooting guides for your language runtime.
Learn more
To learn more about how zero-code instrumentation works in Splunk Observability Cloud, see the more detailed documentation in GitHub.
See the operator pattern in the Kubernetes documentation for more information.