Docs » Other data ingestion methods » Send telemetry using the OpenTelemetry Collector Contrib project

Caution

Splunk provides best-effort support for the OpenTelemetry Collector Contrib. Only Splunk OpenTelemetry distributions are in scope for official Splunk support and support-related service-level agreements (SLAs).

Send telemetry using the OpenTelemetry Collector Contrib project

The OpenTelemetry Collector Contrib project, referred to officially as the upstream Collector, is the upstream source of all OpenTelemetry Collector distributions, including the Splunk Distribution of OpenTelemetry Collector. The upstream Collector contains vendor-specific components, such as receivers and exporters for several observability back ends, including Splunk Observability Cloud.

The Splunk Distribution of OpenTelemetry Collector, on the other hand, is configured for Splunk Observability Cloud and can be deployed automatically using a variety of configuration management tools or the installer scripts. The distribution adds functionality to the Collector while preserving all the features of the OpenTelemetry Collector Contrib project. See Get started with the Splunk Distribution of the OpenTelemetry Collector.

If you need to use the upstream Collector due to technical or practical reasons, you can still send traces and metrics to Splunk Observability Cloud. Read on to learn about the differences between the upstream Collector and the Splunk OTel Collector, how to configure the upstream Collector for Splunk Observability Cloud, and how to migrate from the upstream Collector to the Splunk Distribution of OpenTelemetry Collector.

Note

Splunk participates in the OpenTelemetry project and is committed to its growth. Features developed for the Splunk distribution are regularly added to the upstream Collector for the benefit of the entire community. The goal is for all Splunk distributions to eventually become snapshots of the OpenTelemetry Collector Contrib project.

Feature comparison

The following table compares the Splunk Distribution of OpenTelemetry Collector with the Collector from the OpenTelemetry Collector Contrib project.

| Feature | Splunk Distribution of OpenTelemetry Collector | OpenTelemetry Collector Contrib project |
| --- | --- | --- |
| Splunk support | Full support | Best effort |
| Installer scripts for Linux and Windows | Yes, for Windows and Linux | No |
| Configured for Splunk Observability Cloud | Yes, for host monitoring (agent) and data forwarding (gateway) modes | No |
| Zero config automatic instrumentation | Yes | No |
| Discovery mode | Yes | No |
| Recipes for configuration management tools | Yes, for Ansible, Chef, Puppet, and Salt | No |
| Receivers for IMM already included | Yes | No |
| AlwaysOn Profiling | Yes, CPU and memory | No |
| Related content | Yes | Yes, when using Splunk exporters |

Prerequisites

To send data to Splunk Observability Cloud, you can use the Collector from the OpenTelemetry Collector Contrib project. See https://github.com/open-telemetry/opentelemetry-collector-contrib on GitHub for more information.

Note

Before configuring the Collector, make sure that the version of your OpenTelemetry Collector Contrib matches the version of the latest Splunk Distribution of OpenTelemetry Collector. To check the version of the Splunk Distribution of OpenTelemetry Collector, see the Releases page on GitHub.
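
You can check the version of an installed upstream Collector from the command line. The following is a minimal sketch that assumes the Contrib binary is installed as otelcol-contrib and is on your PATH:

# Print the version of the installed upstream Collector binary
otelcol-contrib --version

# Validate a configuration file without starting the Collector
otelcol-contrib validate --config=config.yaml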

Sample configuration for Splunk Observability Cloud

The following example shows how to configure the upstream Collector to send metrics and traces to Splunk Observability Cloud:

# Minimum configuration file for the OpenTelemetry Collector Contrib distribution
# https://github.com/open-telemetry/opentelemetry-collector-contrib
#
# For official documentation, see the following page:
# https://docs.splunk.com/Observability/gdi/other-ingestion-methods/upstream-collector.html

# The following environment variables must be defined manually or
# configured below:
# - SPLUNK_ACCESS_TOKEN: The Splunk access token to authenticate requests
# - SPLUNK_API_URL: The Splunk API URL, e.g. https://api.us0.signalfx.com
# - SPLUNK_BALLAST_SIZE_MIB: The size of the memory ballast, which should be 1/3 to 1/2 of the allocated memory
# - SPLUNK_MEMORY_LIMIT_MIB: The memory limit, which should be 90% of the allocated memory
# - SPLUNK_HEC_TOKEN: The Splunk HEC authentication token
# - SPLUNK_HEC_URL: The Splunk HEC endpoint URL, e.g. https://ingest.us0.signalfx.com/v1/log
# - SPLUNK_INGEST_URL: The Splunk ingest URL, e.g. https://ingest.us0.signalfx.com
# - SPLUNK_TRACE_URL: The Splunk trace endpoint URL, e.g. https://ingest.us0.signalfx.com/v2/trace
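# - SPLUNK_REALM: The Splunk realm, e.g. us0, used by the profiling exporter
# - SPLUNK_GATEWAY_URL: The Splunk gateway URL, only needed when sending data through a Collector gateway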

extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  http_forwarder:
    ingress:
      endpoint: 0.0.0.0:6060
    egress:
      endpoint: "${SPLUNK_API_URL}"
      # Use instead when sending to gateway
      #endpoint: "${SPLUNK_GATEWAY_URL}"
  zpages:
    #endpoint: 0.0.0.0:55679

receivers:
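  # Receives logs over the Fluent Forward protocol, for example from Fluentd or Fluent Bit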
  fluentforward:
    endpoint: 127.0.0.1:8006
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      disk:
      filesystem:
      memory:
      network:
      # System load average metrics https://en.wikipedia.org/wiki/Load_(computing)
      load:
      # Paging/Swap space utilization and I/O metrics
      paging:
      # Aggregated system process count metrics
      processes:
      # System processes metrics, disabled by default
      # process:
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_binary:
        endpoint: 0.0.0.0:6832
      thrift_compact:
        endpoint: 0.0.0.0:6831
      thrift_http:
        endpoint: 0.0.0.0:14268
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  # This section is used to collect the OpenTelemetry Collector's own metrics.
  # These metrics are included even if you only use Splunk APM.
  prometheus/internal:
    config:
      scrape_configs:
      - job_name: 'otel-collector'
        scrape_interval: 10s
        static_configs:
        - targets: ['0.0.0.0:8888']
        metric_relabel_configs:
          - source_labels: [ __name__ ]
            regex: 'otelcol_rpc_.*'
            action: drop
          - source_labels: [ __name__ ]
            regex: 'otelcol_http_.*'
            action: drop
          - source_labels: [ __name__ ]
            regex: 'otelcol_processor_batch_.*'
            action: drop
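  # Receives metrics and events in SignalFx format, for example from a SignalFx Smart Agent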
  signalfx:
    endpoint: 0.0.0.0:9943
  zipkin:
    endpoint: 0.0.0.0:9411

processors:
  batch:
  # Enabling the memory_limiter is strongly recommended for every pipeline.
  # Configuration is based on the amount of memory allocated to the collector.
  # In general, the limit should be 90% of the memory allocated to the Collector. The simplest way to specify the
  # ballast size is to set the value of the SPLUNK_BALLAST_SIZE_MIB environment variable.
  # For more information about memory limiter, see
  # https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiter/README.md
  memory_limiter:
    check_interval: 2s
    limit_mib: ${SPLUNK_MEMORY_LIMIT_MIB}

  # Detect if the collector is running on a cloud system, which is important for creating unique cloud provider dimensions.
  # Detector order is important: the `system` detector goes last so it can't preclude cloud detectors from setting host/os info.
  # Resource detection processor is configured to override all host and cloud attributes because instrumentation
  # libraries can send wrong values from container environments.
  # https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor#ordering
  resourcedetection:
    detectors: [gcp, ecs, ec2, azure, system]
    override: true

  # Optional: The following processor can be used to add a default "deployment.environment" attribute to the logs and
  # traces when it's not populated by instrumentation libraries.
  # If enabled, make sure to enable this processor in the pipeline below.
  #resource/add_environment:
    #attributes:
      #- action: insert
        #value: staging/production/...
        #key: deployment.environment

exporters:
  # Traces
  sapm:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    endpoint: "${SPLUNK_TRACE_URL}"
  # Metrics + Events
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    api_url: "${SPLUNK_API_URL}"
    ingest_url: "${SPLUNK_INGEST_URL}"
    # Use instead when sending to gateway
    #api_url: http://${SPLUNK_GATEWAY_URL}:6060
    #ingest_url: http://${SPLUNK_GATEWAY_URL}:9943
    sync_host_metadata: true
    correlation:
  # Logs for Splunk Cloud Platform and Log Observer Connect
  # See https://docs.splunk.com/Observability/gdi/opentelemetry/components/splunk-hec-exporter.html
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"
    endpoint: "${SPLUNK_HEC_URL}"
    source: "otel"
    sourcetype: "otel"
    profiling_data_enabled: false
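  # Sends AlwaysOn Profiling data to the ingest endpoint over HEC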
  splunk_hec/profiling:
    token: "${SPLUNK_ACCESS_TOKEN}"
    endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v1/log"
    log_data_enabled: false
  # Send to gateway
  otlp:
    endpoint: "${SPLUNK_GATEWAY_URL}:4317"
    tls:
      insecure: true
  # Debug
  debug:
    verbosity: detailed

service:
  extensions: [health_check, http_forwarder, zpages]
  pipelines:
    # Required for Splunk APM
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
      - memory_limiter
      - batch
      - resourcedetection
      #- resource/add_environment
      exporters: [sapm, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp, signalfx]
    # Required for Splunk Infrastructure Monitoring
    metrics:
      receivers: [hostmetrics, otlp, signalfx]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp]
    # Required for Splunk APM and/or Infrastructure Monitoring
    metrics/internal:
      receivers: [prometheus/internal]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp]
    # Required for Splunk Infrastructure Monitoring
    logs/signalfx:
      receivers: [signalfx]
      processors: [memory_limiter, batch]
      exporters: [signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp]
    # Required for profiling
    logs:
      receivers: [otlp]
      processors:
      - memory_limiter
      - batch
      - resourcedetection
      #- resource/add_environment
      exporters: [splunk_hec/profiling]
      # Use instead when sending to gateway
      #exporters: [otlp]
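
To try this configuration, define the required environment variables and pass the file to the upstream Collector on the command line. The following is a minimal sketch, assuming the binary is named otelcol-contrib, the realm is us0, the file is saved as config.yaml, and 512 MiB of memory is allocated to the Collector; substitute your own values:

# Assumptions: binary named otelcol-contrib, realm us0, config saved as config.yaml
export SPLUNK_ACCESS_TOKEN=<access_token>
export SPLUNK_HEC_TOKEN=<hec_token>
export SPLUNK_REALM=us0
export SPLUNK_API_URL="https://api.${SPLUNK_REALM}.signalfx.com"
export SPLUNK_INGEST_URL="https://ingest.${SPLUNK_REALM}.signalfx.com"
export SPLUNK_TRACE_URL="https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace"
export SPLUNK_HEC_URL="https://ingest.${SPLUNK_REALM}.signalfx.com/v1/log"
# Example sizing for 512 MiB of allocated memory: 90% limit, roughly 1/3 ballast
export SPLUNK_MEMORY_LIMIT_MIB=460
export SPLUNK_BALLAST_SIZE_MIB=168

otelcol-contrib --config=config.yaml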

Migrate to the Splunk Distribution of OpenTelemetry Collector

Migrating from an existing upstream Collector to the Splunk Distribution of OpenTelemetry Collector requires fewer steps than migrating from proprietary agents, because the Splunk distribution is based on the OpenTelemetry Collector.

To migrate from the Collector Contrib to the Splunk OTel Collector, follow these steps:

  1. Save a copy of your current upstream Collector configuration.

  2. Stop the Collector Contrib service using sudo systemctl stop otelcol on Linux or net stop otelcol on Windows. If you're running the Collector Contrib in a terminal session, interrupt it by pressing Ctrl+C. A command-line sketch of steps 2 through 4 on Linux follows these steps.

  3. Remove the OpenTelemetry Collector Contrib binary and configuration files, including system service configuration files, or use your system's package manager to remove the upstream Collector.

  4. Install the Splunk OTel Collector. See Get started: Understand and use the Collector. If you've deployed the Collector in Kubernetes, use the Helm chart. See Install the Collector with the Helm chart for more information.

  5. Configure the Splunk OTel Collector, taking into account the settings you saved before removing the upstream Collector as well as the components available in the Splunk Distribution of OpenTelemetry Collector. See Sample configuration for Splunk Observability Cloud and Collector components.
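
The following sketch shows what steps 2 through 4 might look like on a systemd-based Linux host. The service name, binary path, and configuration path are assumptions that depend on how the upstream Collector was installed, so adjust them for your system:

# Step 2: stop the upstream Collector service (service name assumed to be otelcol)
sudo systemctl stop otelcol
sudo systemctl disable otelcol

# Step 3: remove the binary, configuration, and service unit (paths are assumptions)
sudo rm -f /usr/local/bin/otelcol-contrib /etc/systemd/system/otelcol.service
sudo rm -rf /etc/otelcol
sudo systemctl daemon-reload

# Step 4: install the Splunk Distribution of OpenTelemetry Collector in agent mode
curl -sSL https://dl.signalfx.com/splunk-otel-collector.sh -o /tmp/splunk-otel-collector.sh
sudo sh /tmp/splunk-otel-collector.sh --realm <realm> -- <access_token>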

Troubleshooting

If you are a Splunk Observability Cloud customer and can't see your data in Splunk Observability Cloud, you can get help in the following ways.

  • Ask a question and get answers through community support at Splunk Answers.

  • Join the Splunk #observability user group Slack channel to communicate with customers, partners, and Splunk employees worldwide. To join, see Chat groups in the Get Started with Splunk Community manual.

To learn about even more support options, see Splunk Customer Success.