
Install the Collector for Kubernetes

The Splunk Distribution of OpenTelemetry Collector for Kubernetes is a Helm chart for the Splunk Distribution of OpenTelemetry Collector. Use Helm charts to define, install, and upgrade Kubernetes applications.

Install the chart using one of the methods described on this page: with the Helm chart, with resource YAML manifests, or with the Kubernetes Operator.

Install the Collector with the Helm chart

Use the Helm chart to do the following:

  • Create a Kubernetes DaemonSet along with other Kubernetes objects in a Kubernetes cluster.

  • Receive, process, and export metric, trace, and log data for Splunk Enterprise, Splunk Cloud Platform, and Splunk Observability Cloud.

Supported Kubernetes distributions

The Helm chart works with default configurations of the main Kubernetes distributions. Use actively supported versions of your distribution.

While the chart should work for other Kubernetes distributions, the default values.yaml configuration file could require additional updates.

Helm chart components

The Helm chart for the Collector has three components: agent, cluster receiver, and gateway (optional).

For use cases for the different components, see Splunk OpenTelemetry Collector Helm Chart Components: Use Cases in the GitHub documentation.

Agent component

The agent component is deployed to each node in the Kubernetes cluster as a DaemonSet, and monitors all the data sources within each node.

The agent component consists of the following config files:

  • daemonset.yaml

    • Defines a DaemonSet to ensure that some (or all) nodes in the cluster run a copy of the agent pod.

    • Collects data from each node in the Kubernetes cluster.

  • configmap-agent.yaml

    • Provides configuration data to the agent component.

    • Contains details about how the agent collects and forwards data.

  • service-agent.yaml (optional)

    • Defines a Kubernetes Service for the agent.

    • Used for internal communication within the cluster or for exposing specific metrics or health endpoints.

Cluster receiver component

The cluster receiver component runs as a single pod, created by a Kubernetes Deployment, and collects data from a single location in the cluster. Use this component in scenarios where telemetry data is available from a cluster-wide service or endpoint.

The cluster receiver component consists of the following config files:

  • deployment-cluster-receiver.yaml

    • Defines a Deployment to manage the replicated application for the cluster receiver.

    • Receives and processes data at the cluster level.

  • configmap-cluster-receiver.yaml

    • Provides configuration data to the cluster receiver.

    • Contains details about how the receiver processes and forwards the data it collects.

  • pdb-cluster-receiver.yaml

    • Defines a Pod Disruption Budget (PDB) for the cluster receiver.

    • Ensures that a certain number or percentage of replicas remain available during operations like node maintenance.

  • service-cluster-receiver-stateful-set.yaml (optional)

    • Defines a Kubernetes service for the cluster receiver.

    • Associated with a StatefulSet and used for load balancing, internal communication, or exposing specific endpoints.

Gateway component (optional)

The gateway component serves as an intermediary: it receives, processes, and enriches data before forwarding it to its destination. Use it primarily in larger clusters to scale monitoring capabilities.

The gateway component consists of the following config files:

  • deployment-gateway.yaml

    • Defines a Deployment for the gateway.

    • Processes and forwards data between the agents/receivers and external destinations.

  • configmap-gateway.yaml

    • Provides configuration data to the gateway.

    • Contains details about how the gateway processes, transforms, and forwards the data it receives.

  • service.yaml

    • Defines a Kubernetes Service for the gateway.

    • Used for internal communication within the cluster for accepting data from the agent or cluster receiver and forwarding it to the Splunk backend endpoint.

  • pdb-gateway.yaml

    • Defines a Pod Disruption Budget (PDB) for the gateway.

    • Ensures that a certain number or percentage of replicas of the gateway remain available during voluntary disruptions.
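
You can turn each component on or off through the chart's values file. The following is a minimal sketch: gateway.enabled appears in the helm template example later on this page, while agent.enabled and clusterReceiver.enabled are assumed key names, so verify them against the chart's values.yaml before using them.

agent:
  enabled: true            # DaemonSet that runs on every node
clusterReceiver:
  enabled: true            # single Deployment for cluster-level data
gateway:
  enabled: false           # optional; enable in larger clusters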

Prerequisites

You need the following resources to use the chart:

  • Helm 3. Helm 2 is not supported.

  • Administrator access to your Kubernetes cluster.

Prerequisites: Destination

The Collector for Kubernetes requires a destination: Splunk Enterprise or Splunk Cloud Platform (splunkPlatform) or Splunk Observability Cloud (splunkObservability).

Depending on your destination, you need:

  • To send data to splunkPlatform:

    • Splunk Enterprise 8.0 or higher.

    • A minimum of one Splunk platform index ready to ingest log data.

    • An HTTP Event Collector (HEC) token and endpoint. See Set up and use HTTP Event Collector in Splunk Web and Scale HTTP Event Collector.

    • splunkPlatform.endpoint. URL to a Splunk instance, for example: "http://localhost:8088/services/collector".

    • splunkPlatform.token. Splunk HTTP Event Collector token.

  • To send data to splunkObservability:

    • splunkObservability.realm. The Splunk Observability Cloud realm to send data to, for example: us0.

    • splunkObservability.accessToken. Your Splunk Observability Cloud access token.

Note

The default Splunk platform index used by the Collector for Kubernetes is main.
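
The same destination settings can also be provided through a values file instead of --set flags. The following is a minimal sketch using the parameters described above and in the deployment commands in the next section; the endpoint, tokens, realm, and cluster name are placeholders that you replace with your own values.

clusterName: my-cluster
splunkPlatform:
  endpoint: "http://localhost:8088/services/collector"
  token: "xxxxxx"
  index: main                # default log index
  metricsIndex: k8s-metrics
splunkObservability:
  realm: us0
  accessToken: "xxxxxx"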

Deploy the Helm chart

Run the following commands to deploy the Helm chart:

  1. Add the Helm repo:

    helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
    
  2. Determine your destination.

    For Splunk Observability Cloud:

    helm install my-splunk-otel-collector --set="splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
    

    For Splunk Enterprise or Splunk Cloud Platform:

    helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://127.0.0.1:8088/services/collector,splunkPlatform.token=xxxxxx,splunkPlatform.metricsIndex=k8s-metrics,splunkPlatform.index=main,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
    

    For both Splunk Observability Cloud and Splunk Enterprise or Splunk Cloud Platform:

    helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://127.0.0.1:8088/services/collector,splunkPlatform.token=xxxxxx,splunkPlatform.metricsIndex=k8s-metrics,splunkPlatform.index=main,splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
    
  3. Specify a namespace to deploy the chart to with the -n argument:

    helm -n otel install my-splunk-otel-collector -f values.yaml splunk-otel-collector-chart/splunk-otel-collector
    

Caution

The values.yaml file lists all supported configurable parameters for the Helm chart, along with a detailed explanation of each parameter. Review it to understand how to configure this chart.

You can also configure the Helm chart to support different use cases, such as trace sampling and sending data through a proxy server. See Examples of chart configuration for more information.

Configure other parameters

You can configure additional parameters, such as the Kubernetes distribution (distribution) and the cloud provider (cloudProvider).

For example:

helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm install my-splunk-otel-collector --set="splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-cluster" --set=distribution={value},cloudProvider={value} splunk-otel-collector-chart/splunk-otel-collector
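
If you prefer a values file over --set flags, the same parameters can be expressed in YAML. The values openshift and aws shown here are examples taken from the helm template command later on this page; use the distribution and cloud provider that match your environment.

distribution: openshift
cloudProvider: aws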

Set Helm using a YAML file

You can also set Helm values as arguments using a YAML file. For example, after creating a YAML file named my_values.yaml, run the following command to deploy the Helm chart:

helm install my-splunk-otel-collector --values my_values.yaml splunk-otel-collector-chart/splunk-otel-collector

See an example of a YAML file in GitHub. Options include:

  • Set isWindows to true to support Kubernetes clusters with Windows worker nodes.
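
For example, a minimal my_values.yaml might look like the following sketch. The realm, access token, and cluster name are placeholders, and isWindows is only needed for clusters with Windows worker nodes.

clusterName: my-cluster
splunkObservability:
  realm: us0
  accessToken: xxxxxx
isWindows: true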

Set Prometheus metrics

Set the Collector to automatically scrape any pod emitting Prometheus metrics by adding this property to the Helm chart's values YAML:

autodetect:
   prometheus: true

Then add the following annotations to the resource files of any pods you want the Collector to scrape:

metadata:
   annotations:
      prometheus.io/scrape: "true"
      prometheus.io/path: /metrics
      prometheus.io/port: "8080"
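
The annotations belong on the pods themselves. In a Deployment, for example, they go on the pod template rather than on the Deployment metadata. The following is a minimal sketch in which the Deployment name, image, and port are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "8080"
    spec:
      containers:
        - name: my-app
          image: my-app:1.0   # placeholder image exposing Prometheus metrics on port 8080
          ports:
            - containerPort: 8080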

Verify the deployment

If the chart is deployed successfully, the output displays a message stating that the Splunk Distribution of OpenTelemetry Collector for Kubernetes is being deployed in your Kubernetes cluster, along with the last deployment date and the status.

Install the Collector with resource YAML manifests

Note

To specify the configuration, you need at least your Splunk realm and a base64-encoded access token.

A configuration file can contain multiple resource manifests. Each manifest applies a specific state to a Kubernetes object. The provided manifests are configured for Splunk Observability Cloud only and come with all telemetry types activated for the agent, which is the default when installing the Helm chart.

Determine which manifest you want to use

Download the necessary manifest files from the examples repository. Refer to the README files for more details on each example.

Determine which Collector deployment modes you want to use, agent or gateway. By default, host monitoring (agent) mode is configured to send data directly to Splunk SaaS endpoints. Host monitoring (agent) mode can be reconfigured to send to a gateway.

Update the manifest

Once you've decided which manifest best suits your needs, make the following updates:

  1. In the secret.yaml manifest, update the splunk_observability_access_token data field with your base64-encoded access token. A sketch of this manifest follows this list.

  2. Update any configmap-agent.yaml, configmap-gateway.yaml, and configmap-cluster-receiver.yaml manifest files you use. Search for "CHANGEME" to find the values that must be updated to use the rendered manifests directly.
    1. You need to update "CHANGEME" in exporter configurations to the value of the Splunk realm.

    2. You need to update "CHANGEME" in attribute processor configurations to the value of the cluster name.
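
As an illustration, a secret.yaml manifest with the encoded token might look like the following sketch. The Secret name and namespace are placeholders, so keep whatever names the downloaded manifests use; the splunk_observability_access_token field holds the base64 encoding of your access token.

apiVersion: v1
kind: Secret
metadata:
  name: splunk-otel-collector   # placeholder; keep the name from the downloaded manifest
  namespace: default            # placeholder namespace
type: Opaque
data:
  splunk_observability_access_token: <base64-encoded-access-token>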

Apply the manifest

After you've updated the manifests, apply them using kubectl, as shown in the following examples.

For host monitoring (agent) mode, download the agent-only manifest directory on GitHub for pre-rendered Kubernetes resource manifests that can be applied using the kubectl apply command after being updated with your token, realm information, and cluster name:

kubectl apply -f <agent-manifest-directory> --recursive

For data forwarding (gateway) mode, download the gateway-only manifest directory on GitHub for pre-rendered Kubernetes resource manifests that can be applied using the kubectl apply command after being updated with your token, realm information, and cluster name:

kubectl apply -f <gateway-manifest-directory> --recursive

Use templates

You can create your own manifest YAML files with customized parameters using the helm template command.

helm template --namespace default --set cloudProvider='aws' --set distribution='openshift' --set splunkObservability.accessToken='KUwtoXXXXXXXX' --set clusterName='my-openshift-EKS-dev-cluster' --set splunkObservability.realm='us1' --set gateway.enabled='false' --output-dir <rendered_manifests_dir> --generate-name splunk-otel-collector-chart/splunk-otel-collector

If you prefer, you can update the values.yaml file first.

helm template --namespace default --values values.yaml --output-dir <rendered_manifests_dir> --generate-name splunk-otel-collector-chart/splunk-otel-collector

Manifest files are created in the specified folder, <rendered_manifests_dir>.

Manifest examples

See the following manifest to set security constraints:

---
# Source: splunk-otel-collector/templates/securityContextConstraints.yaml
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: default-splunk-otel-collector
  labels:
    app.kubernetes.io/name: splunk-otel-collector
    helm.sh/chart: splunk-otel-collector-0.96.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: default
    app.kubernetes.io/version: "0.96.1"
    app: splunk-otel-collector
    chart: splunk-otel-collector-0.96.0
    release: default
    heritage: Helm
users:
- system:serviceaccount:default:default-splunk-otel-collector
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegedContainer: false
allowedCapabilities: []
defaultAddCapabilities: []
fsGroup:
  type: MustRunAs
priority: 10
readOnlyRootFilesystem: true
requiredDropCapabilities:
- ALL
runAsUser:
  type: RunAsAny
seLinuxContext:
  seLinuxOptions:
    level: s0
    role: system_r
    type: spc_t
    user: system_u
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
- configMap
- downwardAPI
- emptyDir
- hostPath
- secret

Use the Kubernetes Operator in OpenTelemetry

You can install the Collector with an upstream Kubernetes Operator for Auto Instrumentation. This instance of the Kubernetes Operator is part of the upstream OpenTelemetry Operator project. See the OpenTelemetry GitHub repo for more information.

Note

The upstream Kubernetes Operator is not related to the Splunk Operator for Kubernetes, which is used to deploy and operate Splunk Enterprise deployments in a Kubernetes infrastructure.

Splunk Distribution for the Kubernetes Operator (Alpha)

Caution

This project is Alpha. Do not use in production.

The Splunk Distribution of OpenTelemetry Collector for Kubernetes Operator is the Splunk Observability Cloud implementation of a Kubernetes Operator, and it helps deploy and manage the Splunk Distribution of OpenTelemetry Collector for Kubernetes. See the README file in GitHub for installation instructions.

Next steps

After installing the package, you can: