Configure the Collector for Kubernetes with Helm: Add components and data sources
Read on to learn how to add additional components or data sources to your Collector for Kubernetes config.
For other config options, see:
For a practical example of how to configure the Collector for Kubernetes, see Tutorial: Configure the Splunk Distribution of the OpenTelemetry Collector on Kubernetes.
Add additional components to the configuration
To use any additional OTel component, integration, or legacy monitor, add it to the relevant configuration section in the values.yaml file. Depending on your requirements, include it in either the agent.config or the clusterReceiver.config section of values.yaml. See more at Helm chart architecture and components.
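As a minimal sketch of this layout (the receiver name and endpoint below are placeholders, not real components; see the full examples later on this page), a component goes under the config key of the chosen section and is then referenced in a pipeline:

agent:
  config:
    receivers:
      examplereceiver:            # placeholder name, not a real component
        endpoint: localhost:1234  # placeholder endpoint
    service:
      pipelines:
        metrics:
          receivers:
            - examplereceiver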
For a full list of available components and how to configure them, see Collector components. For a list of available application integrations, see Supported integrations in Splunk Observability Cloud.
How to collect data: agent or cluster receiver?
Read the following table to decide which option to choose to collect your data:
| | Collect via the Collector agent | Collect via the Collector cluster receiver |
|---|---|---|
| Where is data collected? | At the node level. | At the Kubernetes service level, through a single point. |
| Advantages | Complete data: Collecting at every node gives you a comprehensive view of your cluster's health and performance. | Simplicity: This option simplifies the setup and management. |
| Considerations | Complexity: Managing and configuring agents on each node can increase operational complexity, specifically agent config file management. | Incomplete data: This option might result in a partial view of your cluster's health and performance. If the service collects metrics only from a subset of nodes, you might miss critical metrics from parts of your cluster. |
| Use cases | Use this when you need complete, node-level visibility across your whole cluster. | Use this in environments where operational simplicity is a priority, or if your cluster is already simple and has only 1 node. |
Example: Add the MySQL receiver
This example shows how to add the MySQL receiver to your configuration file.
Add the MySQL receiver in the agent section
To use the Collector agent daemonset to collect mysql metrics from every node the agent is deployed to, add this to your configuration:
agent:
  config:
    receivers:
      mysql:
        endpoint: localhost:3306
        ...
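The receiver only produces data once it's referenced in a pipeline. If mysql isn't already part of a metrics pipeline in your configuration, add it there as well. A minimal sketch, following the same service pipeline pattern the RabbitMQ example below uses:

service:
  pipelines:
    metrics:
      receivers:
        - mysql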
Add the MySQL receiver in the clusterReceiver section
To use the Collector cluster receiver deployment to collect mysql metrics from a single endpoint, add this to your configuration:
clusterReceiver:
  config:
    receivers:
      mysql:
        endpoint: mysql-k8s-service:3306
        ...
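As in the agent example, also reference mysql in a metrics pipeline of the cluster receiver configuration so the collected metrics are exported. The RabbitMQ example below shows the service pipeline pattern.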
Example: Add the RabbitMQ monitor
This example shows how to add the RabbitMQ integration to your configuration file.
Add RabbitMQ in the agent section
If you want to activate the RabbitMQ monitor in the Collector agent daemonset, add the smartagent/rabbitmq receiver to the receivers section of the agent section in your configuration file:
agent:
  config:
    receivers:
      smartagent/rabbitmq:
        type: collectd/rabbitmq
        host: localhost
        port: 5672
        username: otel
        password: ${env:RABBITMQ_PASSWORD}
Next, include the receiver in the metrics pipeline of the service section of your configuration file:
service:
  pipelines:
    metrics:
      receivers:
        - smartagent/rabbitmq
Add RabbitMQ in the clusterReceiver section
Similarly, if you want to activate the RabbitMQ monitor in the cluster receiver, add the smartagent/rabbitmq receiver to the receivers section of the cluster receiver section in your configuration file:
clusterReceiver:
  config:
    receivers:
      smartagent/rabbitmq:
        type: collectd/rabbitmq
        host: rabbitmq-service
        port: 5672
        username: otel
        password: ${env:RABBITMQ_PASSWORD}
Next, include the receiver in the metrics pipeline of the service section of your configuration file:
service:
  pipelines:
    metrics:
      receivers:
        - smartagent/rabbitmq
Activate discovery mode on the Collector
Use the discovery mode of the Splunk Distribution of OpenTelemetry Collector to detect metric sources and create a configuration based on the results.
See Deploy the Collector with automatic discovery for instructions on how to activate discovery mode in the Helm chart.
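As a sketch, assuming your chart version exposes discovery mode through the agent.discovery.enabled value (verify the exact option for your chart version on the linked page), you can turn it on in values.yaml:

agent:
  discovery:
    enabled: true   # assumed value name; check your chart version

The same setting can also be passed at install time with a --set flag, for example --set agent.discovery.enabled=true.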
Add additional telemetry sources
Use the autodetect configuration option to activate additional telemetry sources.

Set autodetect.prometheus=true if you want the Collector to scrape Prometheus metrics from pods that have generic Prometheus-style annotations. Add the following annotations to pods for fine control of the scraping process (see the example after this list):
- prometheus.io/scrape: true: The default configuration scrapes all pods. If set to false, this annotation excludes the pod from the scraping process.
- prometheus.io/path: The path to scrape the metrics from. The default value is /metrics.
- prometheus.io/port: The port to scrape the metrics from. The default value is 9090.
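For example, a pod that opts in to scraping on a non-default port might carry annotations like the following (the pod name, image, and port are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: my-app                      # placeholder pod name
  annotations:
    prometheus.io/scrape: "true"    # include this pod in scraping
    prometheus.io/path: "/metrics"  # default path, shown for clarity
    prometheus.io/port: "8080"      # scrape port 8080 instead of the default 9090
spec:
  containers:
    - name: my-app
      image: my-app:latest          # placeholder image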
If the Collector is running in an Istio environment, set autodetect.istio=true to make sure that all traces, metrics, and logs reported by Istio are collected in a unified manner.
For example, use the following configuration to activate automatic detection of both Prometheus and Istio telemetry sources:
splunkObservability:
  accessToken: xxxxxx
  realm: us0
clusterName: my-k8s-cluster
autodetect:
  istio: true
  prometheus: true
Deactivate particular types of telemetry
By default, OpenTelemetry sends only metrics and traces to Splunk Observability Cloud and sends only logs to Splunk Platform. You can activate or deactivate any kind of telemetry data collection for a specific destination.
For example, the following configuration allows the Collector to send all collected telemetry data to Splunk Observability Cloud and the Splunk Platform if you've properly configured them:
splunkObservability:
  metricsEnabled: true
  tracesEnabled: true
  logsEnabled: true
splunkPlatform:
  metricsEnabled: true
  logsEnabled: true
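Conversely, set a flag to false to deactivate that type of telemetry for a destination. For example, a sketch that sends metrics and traces to Splunk Observability Cloud while routing logs only to the Splunk Platform:

splunkObservability:
  metricsEnabled: true
  tracesEnabled: true
  logsEnabled: false    # don't send logs to Splunk Observability Cloud
splunkPlatform:
  logsEnabled: true     # keep sending logs to the Splunk Platform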