Troubleshoot the Collector for Kubernetes
Note
For general troubleshooting, see Troubleshoot the Collector. To troubleshoot issues with your Kubernetes containers, see Troubleshoot the Collector for Kubernetes containers.
Debug logging for the Splunk OTel Collector in Kubernetes
You can change the logging level of the Collector from info to debug to help you troubleshoot. To do this, apply the following configuration:
service:
  telemetry:
    logs:
      level: "debug"
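When you deploy with the Helm chart, this telemetry block is nested under agent.config in your values file, as the export example in the next section shows. Applying the change might then look like the following sketch, where the release name, chart reference, and values file name are all assumptions to substitute with your own:

```shell
# Sketch: apply updated values to an existing Helm release.
# Release name and chart reference are assumptions; substitute your own.
helm upgrade splunk-otel-collector splunk-otel-collector-chart/splunk-otel-collector \
  -f values.yaml
```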
Export your logs
The Collector’s logs are not exported by default. If you already export your logs to Splunk Platform or Splunk Observability, you might want to export the Collector’s logs as well.
For example, you can configure the Collector to output debug logs and export them to Splunk Platform or Splunk Observability:
agent:
  config:
    service:
      telemetry:
        logs:
          # Enable debug logging from the collector.
          level: debug

# Optional for exporting logs.
logsCollection:
  containers:
    # Enable the logs from the collector/agent to be collected at the container level.
    excludeAgentLogs: false
To view logs, use:
kubectl logs {splunk-otel-collector-agent-pod}
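Debug output is verbose, so filtering it can help. A sketch, assuming a hypothetical agent pod name and that you only want warnings and errors:

```shell
# Hypothetical pod name; substitute your actual agent pod from `kubectl get pods`.
kubectl logs splunk-otel-collector-agent-abc12 --tail=500 | grep -iE 'warn|error'
```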
Size your Collector instance
Set the resources allocated to your Collector instance based on the amount of data you expect it to handle. For more information, see Sizing and scaling.
Use the following configuration to increase resource limits for the agent:
agent:
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
Set the resources allocated to your cluster receiver deployment based on the cluster size. For example, for a cluster with 100 nodes, allocate these resources:
clusterReceiver:
  resources:
    limits:
      cpu: 1
      memory: 2Gi
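You can also set these sizing overrides without editing a values file by passing them on the command line. A sketch, assuming an existing release named splunk-otel-collector (the release name and chart reference are assumptions):

```shell
# Sketch: raise agent and cluster receiver limits on an existing release,
# keeping all other previously set values.
helm upgrade splunk-otel-collector splunk-otel-collector-chart/splunk-otel-collector \
  --reuse-values \
  --set agent.resources.limits.cpu=500m \
  --set agent.resources.limits.memory=1Gi \
  --set clusterReceiver.resources.limits.cpu=1 \
  --set clusterReceiver.resources.limits.memory=2Gi
```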