Install the Collector for Kubernetes
The Splunk Distribution of OpenTelemetry Collector for Kubernetes is a Helm chart for the Splunk Distribution of OpenTelemetry Collector. Use Helm charts to define, install, and upgrade Kubernetes applications.
Install the chart using one of these methods:
Install the Collector with the Helm chart
Use the Helm chart to do the following:
Create a Kubernetes DaemonSet along with other Kubernetes objects in a Kubernetes cluster.
Receive, process, and export metric, trace, and log data for Splunk Enterprise, Splunk Cloud Platform, and Splunk Observability Cloud.
Supported Kubernetes distributions
The Helm chart works with default configurations of the main Kubernetes distributions. Use actively supported versions:
- Minikube. This distribution is intended for local development and is not meant for production use. Minikube was created to spin up various past versions of Kubernetes, so Minikube versions don't necessarily align with Kubernetes versions. For example, the Minikube v1.27.1 release notes state that the default Kubernetes version was bumped to v1.25.2.
While the chart should work for other Kubernetes distributions, the values.yaml configuration file could require additional updates.
Helm chart components
The Helm chart for the Collector has three components: agent, cluster receiver, and gateway (optional).
For use cases of the different components, see the GitHub documentation Splunk OpenTelemetry Collector Helm Chart Components: Use Cases.
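The components map to toggles in the chart's values.yaml. As a minimal sketch, assuming the parameter names agent.enabled, clusterReceiver.enabled, and gateway.enabled (check the chart's values.yaml for the authoritative names), enabling or disabling them looks like this:
agent:
  enabled: true            # DaemonSet that collects data on every node
clusterReceiver:
  enabled: true            # Deployment that collects cluster-level data
gateway:
  enabled: false           # optional standalone gateway Deployment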
Agent component
The agent component consists of the following config files:
daemonset.yaml
Defines a DaemonSet to ensure that some (or all) nodes in the cluster run a copy of the agent pod.
Collects data from each node in the Kubernetes cluster.
configmap-agent.yaml
Provides configuration data to the agent component.
Contains details about how the agent collects and forwards data.
service-agent.yaml (optional)
Defines a Kubernetes Service for the agent.
Used for internal communication within the cluster or for exposing specific metrics or health endpoints.
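For orientation, a trimmed sketch of the kind of DaemonSet that daemonset.yaml defines follows; the names, labels, and image tag are illustrative placeholders, not the chart's exact rendered output:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-splunk-otel-collector-agent           # illustrative name
spec:
  selector:
    matchLabels:
      component: otel-collector-agent             # illustrative label
  template:
    metadata:
      labels:
        component: otel-collector-agent
    spec:
      containers:
        - name: otel-collector
          image: quay.io/signalfx/splunk-otel-collector:latest   # illustrative image tag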
Cluster receiver component
The cluster receiver component consists of the following config files:
deployment-cluster-receiver.yaml
Defines a Deployment to manage the replicated application for the cluster receiver.
Receives and processes data at the cluster level.
configmap-cluster-receiver.yaml
Provides configuration data to the cluster receiver.
Contains details about how the receiver processes and forwards the data it collects.
pdb-cluster-receiver.yaml
Defines a Pod Disruption Budget (PDB) for the cluster receiver.
Ensures that a certain number or percentage of replicas remain available during operations like node maintenance.
service-cluster-receiver-stateful-set.yaml (optional)
Defines a Kubernetes Service for the cluster receiver.
Associated with a StatefulSet and used for load balancing, internal communication, or exposing specific endpoints.
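As a minimal sketch of what pdb-cluster-receiver.yaml expresses (the name, label, and threshold shown are illustrative):
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-splunk-otel-collector-cluster-receiver   # illustrative name
spec:
  minAvailable: 1                                    # keep at least one replica during voluntary disruptions
  selector:
    matchLabels:
      component: otel-k8s-cluster-receiver           # illustrative label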
Gateway component (optional)
The gateway component consists of the following config files:
deployment-gateway.yaml
Defines a Deployment for the gateway.
Processes and forwards data between the agents/receivers and external destinations.
configmap-gateway.yaml
Provides configuration data to the gateway.
Contains details about how the gateway processes, transforms, and forwards the data it receives.
service.yaml
Defines a Kubernetes Service for the gateway.
Used for internal communication within the cluster for accepting data from the agent or cluster receiver and forwarding it to the Splunk backend endpoint.
pdb-gateway.yaml
Defines a Pod Disruption Budget (PDB) for the gateway.
Ensures that a certain number or percentage of replicas of the gateway remain available during voluntary disruptions.
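A trimmed, illustrative sketch of the kind of Service that service.yaml defines for the gateway follows; the ports shown are the standard OTLP ports, and the chart's rendered Service may expose additional ones:
apiVersion: v1
kind: Service
metadata:
  name: my-splunk-otel-collector-gateway   # illustrative name
spec:
  type: ClusterIP
  selector:
    component: otel-collector-gateway      # illustrative label
  ports:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
    - name: otlp-http
      port: 4318
      targetPort: 4318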
Prerequisites
You need the following resources to use the chart:
Helm 3. Helm 2 is not supported.
Administrator access to your Kubernetes cluster.
Prerequisites: Destination
The Collector for Kubernetes requires a destination: Splunk Enterprise or Splunk Cloud Platform (splunkPlatform) or Splunk Observability Cloud (splunkObservability).
Depending on your destination, you need:
- To send data to splunkPlatform:
  - Splunk Enterprise 8.0 or higher.
  - A minimum of one Splunk platform index ready to collect the log data. This index is used for ingesting logs.
  - An HTTP Event Collector (HEC) token and endpoint. See https://docs.splunk.com/Documentation/Splunk/8.2.0/Data/UsetheHTTPEventCollector and https://docs.splunk.com/Documentation/Splunk/8.2.0/Data/ScaleHTTPEventCollector.
  - splunkPlatform.endpoint: URL to a Splunk instance, for example "http://localhost:8088/services/collector".
  - splunkPlatform.token: Splunk HTTP Event Collector token.
- To send data to splunkObservability:
  - splunkObservability.accessToken: Your Splunk Observability org access token. See Create and manage organization access tokens using Splunk Observability Cloud.
  - splunkObservability.realm: Splunk realm to send telemetry data to. The default is us0. See realms.
Note
The default Splunk platform index used by the Collector for Kubernetes is main.
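Pulling these prerequisites together, a minimal values sketch for the two destinations could look like the following; the endpoint, tokens, index names, and cluster name are placeholders taken from the install commands later on this page, and you only need the block for the destination you use:
clusterName: my-cluster
splunkPlatform:
  endpoint: "https://127.0.0.1:8088/services/collector"   # your HEC endpoint
  token: "xxxxxx"                                          # your HEC token
  index: main                                              # log index
  metricsIndex: k8s-metrics                                # optional metrics index
splunkObservability:
  realm: us0                                               # your Splunk realm
  accessToken: "xxxxxx"                                    # your org access token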
Deploy the Helm chart
Run the following commands to deploy the Helm chart:
Add the Helm repo:
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
Determine your destination and run the corresponding command.
For Splunk Observability Cloud:
helm install my-splunk-otel-collector --set="splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
For Splunk Enterprise or Splunk Cloud Platform:
helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://127.0.0.1:8088/services/collector,splunkPlatform.token=xxxxxx,splunkPlatform.metricsIndex=k8s-metrics,splunkPlatform.index=main,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
For both Splunk Observability Cloud and Splunk Enterprise or Splunk Cloud Platform:
helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://127.0.0.1:8088/services/collector,splunkPlatform.token=xxxxxx,splunkPlatform.metricsIndex=k8s-metrics,splunkPlatform.index=main,splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
Specify a namespace to deploy the chart to with the -n argument:
helm -n otel install my-splunk-otel-collector -f values.yaml splunk-otel-collector-chart/splunk-otel-collector
Caution
The values.yaml file lists all supported configurable parameters for the Helm chart, along with a detailed explanation of each parameter. Review it to understand how to configure this chart.
You can also configure the Helm chart to support different use cases, such as trace sampling and sending data through a proxy server. See Examples of chart configuration for more information.
Configure other parameters
You can also configure additional parameters, such as distribution and cloudProvider. For example:
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm install my-splunk-otel-collector --set="splunkRealm=us0,splunkAccessToken=xxxxxx,clusterName=my-cluster" --set=distribution={value},cloudProvider={value} splunk-otel-collector-chart/splunk-otel-collector
For more information, see Configure Helm for Kubernetes and the advanced Kubernetes config.
See examples of Helm chart configuration for additional chart installation examples or upgrade commands to change the default behavior.
For logs, see Configure logs and events for Kubernetes.
Set Helm using a YAML file
You can also set Helm values as arguments using a YAML file. For example, after creating a YAML file named my_values.yaml, run the following command to deploy the Helm chart:
helm install my-splunk-otel-collector --values my_values.yaml splunk-otel-collector-chart/splunk-otel-collector
See an example of a YAML file in GitHub. Options include:
- Set isWindows to true to apply the configuration to Kubernetes clusters with Windows worker nodes.
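As an illustrative my_values.yaml built from the options above and the earlier --set examples (all values are placeholders):
clusterName: my-cluster
isWindows: true                  # the cluster has Windows worker nodes
splunkObservability:
  realm: us0
  accessToken: "xxxxxx"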
Set Prometheus metrics
Set the Collector to automatically scrape any pod emitting Prometheus metrics by adding this property to the Helm chart's values YAML:
autodetect:
prometheus: true
Add this configuration in the resources file for any pods in the deployment:
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: /metrics
prometheus.io/port: "8080"
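For context, here is a trimmed, illustrative Deployment excerpt showing where those annotations sit inside the pod template; the application name, image, and port are placeholders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                        # placeholder application
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "8080"    # port where the application serves Prometheus metrics
    spec:
      containers:
        - name: my-app
          image: my-app:latest        # placeholder image
          ports:
            - containerPort: 8080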
Verify the deployment
If the chart is deployed successfully, the output displays a message stating that the Splunk Distribution of OpenTelemetry Collector for Kubernetes is being deployed in your Kubernetes cluster, along with the last deployment date and the status.
Install the Collector with resource YAML manifests
Note
To specify the configuration, you need to know at least your Splunk realm and your base64-encoded access token.
A configuration file can contain multiple resource manifests. Each manifest applies a specific state to a Kubernetes object. The manifests must be configured for Splunk Observability Cloud only and come with all telemetry types activated for the agent, which is the default when installing the Helm chart.
Determine which manifest you want to use
Download the necessary manifest files from the examples repository. Refer to the README files for more details on each example.
Determine which Collector deployment mode you want to use: agent or gateway. By default, host monitoring (agent) mode is configured to send data directly to Splunk SaaS endpoints. Host monitoring (agent) mode can be reconfigured to send to a gateway.
Update the manifest
Once you've decided which manifest suits you best, make the following updates:
- In the secret.yaml manifest, update the splunk_observability_access_token data field with your base64-encoded access token (a sketch follows this list).
- Update any configmap-agent.yaml, configmap-gateway.yaml, and configmap-cluster-receiver.yaml manifest files you use. Search for "CHANGEME" to find the values that must be updated to use the rendered manifests directly:
  - Update "CHANGEME" in exporter configurations to the value of the Splunk realm.
  - Update "CHANGEME" in attribute processor configurations to the value of the cluster name.
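As a minimal sketch of the secret.yaml update (the Secret name and surrounding structure are illustrative; only the splunk_observability_access_token field comes from the rendered manifest):
apiVersion: v1
kind: Secret
metadata:
  name: splunk-otel-collector                              # illustrative name
type: Opaque
data:
  splunk_observability_access_token: PASTE_BASE64_TOKEN    # replace with your base64-encoded access token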
Apply the manifest
After you've updated them, apply the manifests using kubectl, as shown in the following examples.
For host monitoring (agent) mode, download the agent-only manifest directory on GitHub for pre-rendered Kubernetes resource manifests that can be applied using the kubectl apply command after being updated with your token, realm information, and cluster name:
kubectl apply -f <agent-manifest-directory> --recursive
For data forwarding (gateway) mode, download the gateway-only manifest directory on GitHub for pre-rendered Kubernetes resource manifests that can be applied using the kubectl apply command after being updated with your token, realm information, and cluster name:
kubectl apply -f <gateway-manifest-directory> --recursive
Use templates
You can create your own manifest YAML files with customized parameters using the helm template command.
helm template --namespace default --set cloudProvider='aws' --set distribution='openshift' --set splunkObservability.accessToken='KUwtoXXXXXXXX' --set clusterName='my-openshift-EKS-dev-cluster' --set splunkObservability.realm='us1' --set gateway.enabled='false' --output-dir <rendered_manifests_dir> --generate-name splunk-otel-collector-chart/splunk-otel-collector
If you prefer, you can update the values.yaml file first.
helm template --namespace default --values values.yaml --output-dir <rendered_manifests_dir> --generate-name splunk-otel-collector-chart/splunk-otel-collector
Manifest files will be created in your specified folder, <rendered_manifests_dir>.
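For reference, a values.yaml sketch equivalent to the --set flags in the first helm template command above might look like this (the values are the same placeholders used in that command):
cloudProvider: aws
distribution: openshift
clusterName: my-openshift-EKS-dev-cluster
splunkObservability:
  accessToken: KUwtoXXXXXXXX
  realm: us1
gateway:
  enabled: false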
Manifest examples
See the following manifest to set security constraints:
---
# Source: splunk-otel-collector/templates/securityContextConstraints.yaml
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: default-splunk-otel-collector
  labels:
    app.kubernetes.io/name: splunk-otel-collector
    helm.sh/chart: splunk-otel-collector-0.88.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: default
    app.kubernetes.io/version: "0.88.0"
    app: splunk-otel-collector
    chart: splunk-otel-collector-0.88.0
    release: default
    heritage: Helm
users:
  - system:serviceaccount:default:default-splunk-otel-collector
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegedContainer: false
allowedCapabilities: []
defaultAddCapabilities: []
fsGroup:
  type: MustRunAs
priority: 10
readOnlyRootFilesystem: true
requiredDropCapabilities:
  - ALL
runAsUser:
  type: RunAsAny
seLinuxContext:
  seLinuxOptions:
    level: s0
    role: system_r
    type: spc_t
    user: system_u
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - hostPath
  - secret
Use the Kubernetes Operator in OpenTelemetry
You can install the Collector with an upstream Kubernetes Operator for Auto Instrumentation. This instance of the Kubernetes Operator is part of the upstream OpenTelemetry Operator project. See the OpenTelemetry GitHub repo for more information.
Note
The upstream Kubernetes Operator is not related to the Splunk Operator for Kubernetes, which is used to deploy and operate Splunk Enterprise deployments in a Kubernetes infrastructure.
Splunk Distribution for the Kubernetes Operator (Alpha)
Caution
This project is Alpha. Do not use in production.
The Splunk Distribution of OpenTelemetry Collector for Kubernetes Operator is the Splunk Observability Cloud implementation of a Kubernetes Operator, and it helps deploy and manage the Splunk Distribution of OpenTelemetry Collector for Kubernetes. See the README file in GitHub for installation instructions.
Next steps
After installing the package, you can: