Send traces from Istio to Splunk Observability Cloud
Istio 1.8 and higher versions have native support for observability. You can configure your Istio service mesh to send traces, metrics, and logs to Splunk Observability Cloud by configuring both Istio and the Splunk OpenTelemetry Collector.
Requirements
To send telemetry from Istio to Splunk Observability Cloud, you need the following:
Istio 1.8 or higher
Splunk OpenTelemetry Collector for Kubernetes in host monitoring (agent) mode. See Install the Collector for Kubernetes using Helm.
Splunk APM instrumentation with B3 context propagation. To set B3 as the context propagator, set the OTEL_PROPAGATORS environment variable to b3multi.
OpenCensus and W3C trace context are not supported because Istio does not support them.
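For example, if your instrumented workload runs as a Kubernetes Deployment, you can set the propagator through the pod spec. The Deployment and container names below are illustrative, not part of this product's configuration:

```yaml
# Hypothetical Deployment fragment: sets B3 multi-header context
# propagation for a Splunk APM-instrumented service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  template:
    spec:
      containers:
        - name: my-service
          env:
            - name: OTEL_PROPAGATORS
              value: "b3multi"
```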
Install and configure the Splunk OpenTelemetry Collector
Deploy the Splunk OpenTelemetry Collector for Kubernetes in host monitoring (agent) mode. The required Collector components depend on product entitlements and the data you want to collect. See Install the Collector for Kubernetes using Helm.
In the Helm chart for the Collector, set the autodetect.istio parameter to true by passing --set autodetect.istio=true to the helm install or helm upgrade command.
You can also add the following snippet to your values.yaml file, which you can then pass using the -f myvalues.yaml argument:
autodetect:
  istio: true
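Putting the steps together, a Helm invocation might look like the following sketch. The release name and the placeholder token, realm, and cluster name values are assumptions for illustration, not prescribed values:

```shell
# Sketch: install or upgrade the Splunk OpenTelemetry Collector chart
# with Istio autodetection enabled. Replace the placeholders with
# values for your environment.
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update
helm upgrade --install splunk-otel-collector \
  --set autodetect.istio=true \
  --set splunkObservability.accessToken='<ACCESS_TOKEN>' \
  --set splunkObservability.realm='<REALM>' \
  --set clusterName='<CLUSTER_NAME>' \
  splunk-otel-collector-chart/splunk-otel-collector
```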
Ensure that data forwarding doesn't generate telemetry
Forwarding telemetry from Istio to the Collector might generate undesired telemetry. To avoid this, do one of the following:
Run the Collector in a separate namespace that lacks the Istio proxy.
Add a label to the Collector pods to prevent the injection of the Istio proxy. This is the default configuration when the autodetect.istio parameter is set to true.
If you need the Istio proxy in the Collector pods, deactivate tracing in those pods instead. For example:
# ...
otelK8sClusterReceiver:
  podAnnotations:
    proxy.istio.io/config: '{"tracing":{}}'
otelCollector:
  podAnnotations:
    proxy.istio.io/config: '{"tracing":{}}'
Note
The instrumentation pod is a DaemonSet and isn't injected with a proxy by default. If Istio injects proxies in instrumentation pods, deactivate tracing using a podAnnotation.
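If that applies to your cluster, a values.yaml fragment along the following lines deactivates tracing for the agent DaemonSet pods. The top-level key name can differ between chart versions, so treat this as a sketch rather than an exact configuration:

```yaml
# Sketch: disable Istio proxy tracing on the agent DaemonSet pods.
# The top-level key (agent) is an assumption and may differ in
# older versions of the Helm chart.
agent:
  podAnnotations:
    proxy.istio.io/config: '{"tracing":{}}'
```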
Configure the Istio Operator
Configure the Istio Operator following these steps:
Set an environment.deployment attribute.
Configure the Zipkin tracer to send data to the Splunk OpenTelemetry Collector running on the host.
For example:
# tracing.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-operator
spec:
  meshConfig:
    # Requires Splunk Log Observer
    accessLogFile: /dev/stdout
    # Requires Splunk APM
    enableTracing: true
    defaultConfig:
      tracing:
        max_path_tag_length: 99999
        sampling: 100
        zipkin:
          address: $(HOST_IP):9411
        custom_tags:
          environment.deployment:
            literal:
              value: dev
To activate the new configuration, run:
istioctl install -f ./tracing.yaml
Restart the pods that contain the Istio proxy to activate the new tracing configuration.
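One way to restart the proxied workloads is a rolling restart per namespace. The namespace placeholder below stands in for each namespace that participates in the mesh:

```shell
# Sketch: trigger a rolling restart of all Deployments in a mesh
# namespace so the sidecars pick up the new tracing configuration.
# Replace <NAMESPACE> with each namespace that is part of the mesh.
kubectl rollout restart deployment -n <NAMESPACE>
```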
Update all pods in the service mesh
Update all pods that are in the Istio service mesh to include an app label. Istio uses this label to define the Splunk service.
Note
If you don't set the app label, identifying the relationship between the proxy and your service is more difficult.
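For instance, a Deployment's pod template can carry the label as follows. The Deployment and label values here are illustrative:

```yaml
# Hypothetical pod template fragment: the app label identifies the
# service that the Istio proxy reports telemetry for.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
spec:
  template:
    metadata:
      labels:
        app: checkout
```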
Recommendations
To make the best use of full-fidelity data retention, configure Istio to send as much trace data as possible by setting the sampling rate and maximum tag length as follows:
Set a sampling value of 100 to ensure that all traces have correct root spans.
Set a max_path_tag_length value of 99999 to prevent key tags from being truncated.
For example:
# tracing.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-operator
spec:
  meshConfig:
    # Requires Splunk Log Observer
    accessLogFile: /dev/stdout
    # Requires Splunk APM
    enableTracing: true
    defaultConfig:
      tracing:
        max_path_tag_length: 99999
        sampling: 100
        zipkin:
          address: $(HOST_IP):9411
        custom_tags:
          environment.deployment:
            literal:
              value: dev
For more information on how to configure Istio, see the Istio distributed tracing installation documentation.
If you are a Splunk Observability Cloud customer and are not able to see your data in Splunk Observability Cloud, you can get help in the following ways.
Available to Splunk Observability Cloud customers
Submit a case in the Splunk Support Portal.
Contact Splunk Support.
Available to prospective customers and free trial users
Ask a question and get answers through community support at Splunk Answers.
Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.