Install the Collector for Kubernetes
The Splunk Distribution of OpenTelemetry Collector for Kubernetes is a Helm chart for the Splunk Distribution of OpenTelemetry Collector. Use Helm charts to define, install, and upgrade Kubernetes applications.
Install the chart using one of these methods:
Install the Collector with the Helm chart
Use the Helm chart to do the following:
Create a Kubernetes DaemonSet along with other Kubernetes objects in a Kubernetes cluster.
Receive, process, and export metric, trace, and log data for Splunk Enterprise, Splunk Cloud Platform, and Splunk Observability Cloud.
Supported Kubernetes distributions
The Helm chart works with default configurations of the main Kubernetes distributions. Use actively supported versions:
- Minikube. This distribution is intended for local development and is not meant for production use. Minikube was created to spin up various past versions of Kubernetes, and Minikube versions don't necessarily align with Kubernetes versions. For example, the Minikube v1.27.1 release notes state that the default Kubernetes version was bumped to v1.25.2.
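For local testing, you can pin the Kubernetes version that Minikube starts. A minimal sketch, using the version mentioned above only as an illustration:
minikube start --kubernetes-version=v1.25.2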
While the chart should work for other Kubernetes distributions, the values.yaml configuration file could require additional updates.
Prerequisites
You need the following resources to use the chart:
Helm 3. Helm 2 is not supported.
Administrator access to your Kubernetes cluster.
Prerequisites: Destination
The Collector for Kubernetes requires a destination: Splunk Enterprise or Splunk Cloud (splunkPlatform) or Splunk Observability Cloud (splunkObservability).
Depending on your destination, you need the following:

To send data to splunkPlatform:
- Splunk Enterprise 8.0 or later.
- A minimum of one Splunk platform index ready to collect the log data. This index is used for ingesting logs.
- An HTTP Event Collector (HEC) token and endpoint. See https://docs.splunk.com/Documentation/Splunk/8.2.0/Data/UsetheHTTPEventCollector and https://docs.splunk.com/Documentation/Splunk/8.2.0/Data/ScaleHTTPEventCollector.
- splunkPlatform.endpoint. URL to a Splunk instance, for example: "http://localhost:8088/services/collector".
- splunkPlatform.token. Splunk HTTP Event Collector token.

To send data to splunkObservability:
- splunkObservability.accessToken. Your Splunk Observability org access token. See Create and manage organization access tokens using Splunk Observability Cloud.
- splunkObservability.realm. Splunk realm to send telemetry data to. The default is us0. See realms.
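If you want to confirm that your HEC endpoint and token work before installing the chart, you can send a test event with curl. This is a sketch that assumes a local Splunk instance with a self-signed certificate; replace <hec-token> with your token:
curl -k "https://localhost:8088/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "HEC connectivity test"}'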
Deploy the Helm chart
Run the following commands to deploy the Helm chart:
Add the Helm repo:
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
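If you added the repo previously, refresh it so Helm sees the latest chart version:
helm repo update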
Determine your destination.
For Observability Cloud:
helm install my-splunk-otel-collector --set="splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
For Splunk Enterprise or Splunk Cloud:
helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://127.0.0.1:8088/services/collector,splunkPlatform.token=xxxxxx,splunkPlatform.metricsIndex=k8s-metrics,splunkPlatform.index=main,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
For both Splunk Observability Cloud and Splunk Enterprise or Splunk Cloud:
helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://127.0.0.1:8088/services/collector,splunkPlatform.token=xxxxxx,splunkPlatform.metricsIndex=k8s-metrics,splunkPlatform.index=main,splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
Specify a namespace to deploy the chart to with the -n argument:
helm -n otel install my-splunk-otel-collector -f values.yaml splunk-otel-collector-chart/splunk-otel-collector
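Note that helm install does not create the namespace by default. Create it first, or add the --create-namespace flag to the install command:
kubectl create namespace otel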
Caution
The values.yaml file lists all supported configurable parameters for the Helm chart, along with a detailed explanation of each parameter. Review it to understand how to configure this chart.
You can also configure the Helm chart to support different use cases, such as trace sampling and sending data through a proxy server. See Examples of chart configuration for more information.
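One way to review the chart's default values.yaml locally is to export it with helm show values; the output file name here is only an example:
helm show values splunk-otel-collector-chart/splunk-otel-collector > values.yaml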
Configure other parameters
You can configure additional chart parameters, such as the Kubernetes distribution and the cloud provider.
For example:
helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm install my-splunk-otel-collector --set="splunkRealm=us0,splunkAccessToken=xxxxxx,clusterName=my-cluster" --set=distribution={value},cloudProvider={value} splunk-otel-collector-chart/splunk-otel-collector
Read more about Configure Helm for Kubernetes and the advanced Kubernetes config.
See examples of Helm chart configuration for additional chart installation examples or upgrade commands to change the default behavior.
For logs, see Configure logs for Kubernetes.
Set Helm using a YAML file
You can also set Helm values as arguments using a YAML file. For example, after creating a YAML file named my_values.yaml, run the following command to deploy the Helm chart:
helm install my-splunk-otel-collector --values my_values.yaml splunk-otel-collector-chart/splunk-otel-collector
See an example of a YAML file in GitHub. Options include:
- Set isWindows to true to deploy the chart to a Kubernetes cluster with Windows worker nodes.
- Set networkExplorer.enabled to true to use the default values for splunk-otel-network-explorer.
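The following is a minimal my_values.yaml sketch that combines those options with the Splunk Observability Cloud settings used earlier in this topic; all values are placeholders:
# my_values.yaml (placeholder values)
clusterName: my-cluster
splunkObservability:
  realm: us0
  accessToken: xxxxxx
# Optional: deploy to a Kubernetes cluster with Windows worker nodes
#isWindows: true
# Optional: use the default values for splunk-otel-network-explorer
#networkExplorer:
#  enabled: true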
Set Prometheus metrics
Set the Collector to automatically scrape any pod emitting Prometheus metrics by adding this property to the Helm chart's values YAML:
autodetect:
  prometheus: true
Add the following annotations to the resource files of any pods in the deployment:
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /metrics
    prometheus.io/port: "8080"
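For reference, here is a sketch of where those annotations sit in a Deployment's pod template; the application name, image, and port are hypothetical:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                   # hypothetical application
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "8080"
    spec:
      containers:
        - name: my-app
          image: my-app:latest   # hypothetical image exposing /metrics on port 8080
          ports:
            - containerPort: 8080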
Verify the deployment
If the chart is deployed successfully, the output displays a message informing you that the Splunk Distribution of OpenTelemetry Collector for Kubernetes is being deployed in your Kubernetes cluster, along with the last deployment date and the status.
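You can also check the release and pod status directly; this sketch assumes the release name used in the previous examples and the default namespace:
helm status my-splunk-otel-collector
kubectl get pods | grep splunk-otel-collector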
Install the Collector with resource YAML manifests
Note
To specify the configuration, you need at least your Splunk realm and a base64-encoded access token.
A configuration file can contain multiple resource manifests. Each manifest applies a specific state to a Kubernetes object. The manifests must be configured for Splunk Observability Cloud only and come with all telemetry types activated for the agent, which is the default when installing the Helm chart.
Determine which manifest you want to use
Download the necessary manifest files from the examples repository. Refer to the README files for more details on each example.
Determine which Collector deployment mode you want to use: agent or gateway. By default, agent mode is configured to send data directly to Splunk SaaS endpoints. Agent mode can be reconfigured to send to a gateway.
Update the manifest
Once you've decided which manifest suits you best, make the following updates:
- In the secret.yaml manifest, update the splunk_observability_access_token data field with your base64-encoded access token.
- Update any configmap-agent.yaml, configmap-gateway.yaml, and configmap-cluster-receiver.yaml manifest files you're going to use. Search for "CHANGEME" to find the values that must be updated to use the rendered manifests directly:
  - Update "CHANGEME" in exporter configurations to the value of the Splunk realm.
  - Update "CHANGEME" in attribute processor configurations to the value of the cluster name.
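One way to produce the base64-encoded value for secret.yaml is with the standard base64 tool; replace the placeholder with your actual access token:
echo -n '<access-token>' | base64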
Apply the manifest
After you've updated them, apply the manifests using kubectl, as shown in the following examples.
For agent mode, download the agent-only manifest directory on GitHub for pre-rendered Kubernetes resource manifests that can be applied using the kubectl apply command after being updated with your token, realm information, and cluster name:
kubectl apply -f <agent-manifest-directory> --recursive
For gateway mode, download the gateway-only manifest directory on GitHub for pre-rendered Kubernetes resource manifests that can be applied using the kubectl apply command after being updated with your token, realm information, and cluster name:
kubectl apply -f <gateway-manifest-directory> --recursive
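After applying either set of manifests, you can check that the Collector pods started and review their logs; the pod name is a placeholder taken from your own output:
kubectl get pods
kubectl logs <collector-pod-name>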
Use templates
You can create your own manifest YAML files with customized parameters using the helm template command:
helm template --namespace default --set cloudProvider='aws' --set distribution='openshift' --set splunkObservability.accessToken='KUwtoXXXXXXXX' --set clusterName='my-openshift-EKS-dev-cluster' --set splunkObservability.realm='us1' --set gateway.enabled='false' --output-dir <rendered_manifests_dir> --generate-name splunk-otel-collector-chart/splunk-otel-collector
If you prefer, you can update the values.yaml file first.
helm template --namespace default --values values.yaml --output-dir <rendered_manifests_dir> --generate-name splunk-otel-collector-chart/splunk-otel-collector
Manifest files are created in the <rendered_manifests_dir> folder that you specified.
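Once rendered, the manifests can be applied the same way as the pre-rendered examples:
kubectl apply -f <rendered_manifests_dir> --recursive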
Manifest examples
See the following manifest to set security constraints:
---
# Source: splunk-otel-collector/templates/securityContextConstraints.yaml
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: default-splunk-otel-collector
  labels:
    app.kubernetes.io/name: splunk-otel-collector
    helm.sh/chart: splunk-otel-collector-0.77.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: default
    app.kubernetes.io/version: "0.77.0"
    app: splunk-otel-collector
    chart: splunk-otel-collector-0.77.0
    release: default
    heritage: Helm
users:
- system:serviceaccount:default:default-splunk-otel-collector
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegedContainer: false
allowedCapabilities: []
allowedFlexVolumes: []
defaultAddCapabilities: []
fsGroup:
  type: MustRunAs
priority: 10
readOnlyRootFilesystem: true
requiredDropCapabilities:
- ALL
runAsUser:
  type: RunAsAny
seLinuxContext:
  seLinuxOptions:
    level: s0
    role: system_r
    type: spc_t
    user: system_u
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
- configMap
- downwardAPI
- emptyDir
- hostPath
- secret
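On an OpenShift cluster, you can apply this rendered constraint like any other manifest; the file name below assumes you saved the template output to that file:
kubectl apply -f securityContextConstraints.yaml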
Use the Kubernetes Operator in OpenTelemetry
You can install the Collector with an upstream Kubernetes Operator for Auto Instrumentation. This instance of the Kubernetes Operator is part of the upstream OpenTelemetry Operator project. See more at Install the Collector and the upstream Kubernetes Operator for Auto Instrumentation.
Note
The upstream Kubernetes Operator is not related to the Splunk Operator for Kubernetes, which is used to deploy and operate Splunk Enterprise deployments in a Kubernetes infrastructure.
Splunk Distribution for the Kubernetes Operator (Alpha)
Caution
This project is Alpha. Do not use in production.
The Splunk Distribution of OpenTelemetry Collector for Kubernetes Operator is the Observability Cloud implementation of a Kubernetes Operator, and it helps deploy and manage the Splunk Distribution of OpenTelemetry Collector for Kubernetes. See the README file in GitHub for installation instructions.
Next steps
After installing the package, you can: