
Get started: Understand and use the Collector

For a quick overview of the Collector, see Get started with the Splunk Distribution of the OpenTelemetry Collector.

Get started with the available options to install, deploy, and configure the Splunk Distribution of the OpenTelemetry Collector. Next, learn how to use the Collector.

Install the Collector using packages and deployment tools

The Splunk Distribution of the OpenTelemetry Collector is supported on Kubernetes, Linux, Windows, and macOS. Use one of the following packages to gather data for Splunk Observability Cloud:

See also other deployment tools and options.

Verify the Docker image of the Collector

Docker images of the Collector are automatically signed.

If you need to verify and trust your software package, use the following public key to verify the Docker images of the Collector for versions 0.93 or higher:


For older Collector versions, use this public key:


Images are signed using cosign. To verify them:

  1. Save the public key to a file.

  2. Run the following command, replacing the placeholders with the path to your key file, the Collector image, and the Collector version:

cosign verify --insecure-ignore-tlog --key <public-key-file> <collector-image>:<collector-version>

Configure the Collector: Config files, auto-config, and other configuration sources

Use these configurations to change the default settings in each Collector package:


Splunk Observability Cloud offers several options for no-hassle automatic discovery and configuration. Learn more at Discover telemetry sources automatically.

Use multiple configuration files

To define multiple config files simultaneously, use the following command:

./otelcol --config=file:/path/to/first/file --config=file:/path/to/second/file
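When you pass several files, the Collector merges them in the order given, with later files taking precedence where keys conflict. As an illustration, you might keep shared receivers in one file and environment-specific exporters in another. The file names and component values below are hypothetical, not part of any default installation:

```yaml
# base.yaml (hypothetical): shared receiver definitions
receivers:
  otlp:
    protocols:
      grpc:

# override.yaml (hypothetical): environment-specific settings.
# Keys here merge with, and on conflict override, those in base.yaml.
exporters:
  signalfx:
    realm: "eu0"
```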

Additional configuration sources

You can also use these additional configuration sources:

Configure log collection

The Collector can capture logs using Fluentd, but this option is deactivated by default.


If you have a Log Observer entitlement or wish to collect logs for the target host, make sure Fluentd is installed and enabled in your Collector instance.

Configure Fluentd

You can use the Fluentd receiver to collect logs.

Common sources such as filelog, journald, and Windows Event Viewer are included in the installation. The following table describes the artifacts in the Fluentd directory:



fluent.conf or td-agent.conf

These are the main Fluentd configuration files used to forward events to the Collector. The file locations are /etc/otel/collector/fluentd/fluent.conf on Linux and C:\opt\td-agent\etc\td-agent\td-agent.conf on Windows. By default, these files configure Fluentd to include custom Fluentd sources and forward all log events with the @SPLUNK label to the Collector.

conf.d

This directory contains the custom Fluentd configuration files. The location is /etc/otel/collector/fluentd/conf.d on Linux and C:\opt\td-agent\etc\td-agent\conf.d on Windows. All files in this directory ending with the .conf extension are automatically included by Fluentd, including C:\opt\td-agent\etc\td-agent\conf.d\eventlog.conf on Windows.


This is the drop-in file for the Fluentd service on Linux. Use this file to override the default Fluentd configuration path in favor of the custom Fluentd configuration file for Linux (fluent.conf).

The following is a sample configuration to collect custom logs:

<source>
  @type tail
  @label @SPLUNK
  <parse>
    @type none
  </parse>
  path /path/to/my/custom.log
  pos_file /var/log/td-agent/my-custom-logs.pos
  tag my-custom-logs
</source>
To learn more about the Fluentd receiver, see Fluent Forward receiver.

Use the Collector

The OpenTelemetry Collector is a vendor-agnostic way to receive, process, and export telemetry data.

After you’ve installed the Collector on your platform, update your config file to define the Collector components (receivers, processors, and exporters) you want to use. Note that receivers and exporters are not active until they are added to a pipeline, as explained in the next paragraph. You can also add extensions that provide the Collector with additional functionality, such as diagnostics and health checks. Find the available components at Collector components.

Next, configure your service pipelines to determine how your data is processed. In the pipelines section you tie together receivers, processors, and exporters, designing the path your data takes. You can define multiple pipelines, reuse a single receiver or exporter definition across pipelines, and include multiple receivers or exporters in a single pipeline. Learn more at Process your data with pipelines.
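As a minimal sketch, a pipeline ties the three component types together in the service section. The component names below (otlp, batch, signalfx) and the realm value are illustrative assumptions, not your actual configuration:

```yaml
# Hypothetical minimal Collector configuration sketch
receivers:
  otlp:                    # receive OTLP data over gRPC and HTTP
    protocols:
      grpc:
      http:

processors:
  batch:                   # batch telemetry before exporting

exporters:
  signalfx:                # send metrics to Splunk Observability Cloud
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: "us0"           # assumed realm; use your own

service:
  pipelines:
    metrics:               # a pipeline wires receivers -> processors -> exporters
      receivers: [otlp]
      processors: [batch]
      exporters: [signalfx]
```

A receiver or exporter defined at the top level but not referenced in any pipeline is ignored, which is why the service section is required.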

See also the following documents to understand how the Collector works, and how to use it:

Components and services of the Collector

The Splunk Distribution of the OpenTelemetry Collector has the following components and services:

  • Receivers: Determine how you’ll get data into the Collector.

  • Processors: Configure which operations you’ll perform on data before it’s exported. For example, filtering.

  • Exporters: Set up where to send data to. It can be one or more backends or destinations.

  • Extensions: Extend the capabilities of the Collector.

  • Service: Ties the other components together. It consists of two elements:

    • List of the extensions you’ve configured.

    • Pipelines: The path data follows from reception, through processing or modification, to export.

For more information, see Collector components.

Collector variables and internal metrics

The Collector operates using the following environment variables and internal metrics:
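For instance, configuration values can reference environment variables, which the Collector expands at startup. Variables such as SPLUNK_ACCESS_TOKEN and SPLUNK_REALM are commonly set by the installer, but treat the exact names in this sketch as assumptions for your deployment:

```yaml
# Hypothetical exporter fragment using environment variable expansion
exporters:
  signalfx:
    # Expanded from the process environment when the Collector starts
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: "${SPLUNK_REALM}"
```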