
Collector for Linux default configuration

The Splunk Distribution of the OpenTelemetry Collector has the following components and services:

  • Receivers: Determine how you'll get data into the Collector.

  • Processors: Configure which operations you'll perform on data before it's exported, such as filtering.

  • Exporters: Set up where to send data: one or more backends or destinations.

  • Extensions: Extend the capabilities of the Collector.

  • Service: Consists of two elements:

    • The list of extensions you've configured.

    • Pipelines: The path data follows from reception, through processing or modification, to export.

For more information, see Collector components.

The Collector configuration is stored in a YAML file that specifies the behavior of the different components and services. The following sections give an overview of the elements and pipelines in the default configuration.

See Tutorial: Configure the Splunk Distribution of OpenTelemetry Collector on a Linux host to learn how to configure the Collector.
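
Each component is declared under its own top-level key in the YAML file and then wired together in the service section. The following minimal sketch illustrates that structure; the component names and settings are illustrative examples rather than the shipped defaults shown in the next section:

extensions:
  health_check:             # liveness endpoint for the Collector itself

receivers:
  otlp:                     # accept OTLP data over gRPC and HTTP
    protocols:
      grpc:
      http:

processors:
  batch:                    # group telemetry into batches before export

exporters:
  debug:                    # print received telemetry to the console
    verbosity: basic

service:
  extensions: [health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]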

Default configuration

This is the default configuration file for the Linux (Debian/RPM) Collector installer packages:

# Default configuration file for the Linux (deb/rpm) and Windows MSI collector packages

# If the collector is installed without the Linux/Windows installer script, the following
# environment variables are required to be manually defined or configured below:
# - SPLUNK_ACCESS_TOKEN: The Splunk access token to authenticate requests
# - SPLUNK_API_URL: The Splunk API URL, e.g. https://api.us0.signalfx.com
# - SPLUNK_BUNDLE_DIR: The path to the Smart Agent bundle, e.g. /usr/lib/splunk-otel-collector/agent-bundle
# - SPLUNK_COLLECTD_DIR: The path to the collectd config directory for the Smart Agent, e.g. /usr/lib/splunk-otel-collector/agent-bundle/run/collectd
# - SPLUNK_HEC_TOKEN: The Splunk HEC authentication token
# - SPLUNK_HEC_URL: The Splunk HEC endpoint URL, e.g. https://ingest.us0.signalfx.com/v1/log
# - SPLUNK_INGEST_URL: The Splunk ingest URL, e.g. https://ingest.us0.signalfx.com
# - SPLUNK_LISTEN_INTERFACE: The network interface the agent receivers listen on.

extensions:
  health_check:
    endpoint: "${SPLUNK_LISTEN_INTERFACE}:13133"
  http_forwarder:
    ingress:
      endpoint: "${SPLUNK_LISTEN_INTERFACE}:6060"
    egress:
      endpoint: "${SPLUNK_API_URL}"
      # Use instead when sending to gateway
      #endpoint: "${SPLUNK_GATEWAY_URL}"
  smartagent:
    bundleDir: "${SPLUNK_BUNDLE_DIR}"
    collectd:
      configDir: "${SPLUNK_COLLECTD_DIR}"
  zpages:
    #endpoint: "${SPLUNK_LISTEN_INTERFACE}:55679"

receivers:
  fluentforward:
    endpoint: "${SPLUNK_LISTEN_INTERFACE}:8006"
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      disk:
      filesystem:
      memory:
      network:
      # System load average metrics https://en.wikipedia.org/wiki/Load_(computing)
      load:
      # Paging/Swap space utilization and I/O metrics
      paging:
      # Aggregated system process count metrics
      processes:
      # System processes metrics, disabled by default
      # process:
  jaeger:
    protocols:
      grpc:
        endpoint: "${SPLUNK_LISTEN_INTERFACE}:14250"
      thrift_binary:
        endpoint: "${SPLUNK_LISTEN_INTERFACE}:6832"
      thrift_compact:
        endpoint: "${SPLUNK_LISTEN_INTERFACE}:6831"
      thrift_http:
        endpoint: "${SPLUNK_LISTEN_INTERFACE}:14268"
  otlp:
    protocols:
      grpc:
        endpoint: "${SPLUNK_LISTEN_INTERFACE}:4317"
      http:
        endpoint: "${SPLUNK_LISTEN_INTERFACE}:4318"
  # This section is used to collect the OpenTelemetry Collector metrics
  # Even if just a Splunk APM customer, these metrics are included
  prometheus/internal:
    config:
      scrape_configs:
      - job_name: 'otel-collector'
        scrape_interval: 10s
        static_configs:
        - targets: ["0.0.0.0:8888"]
        metric_relabel_configs:
          - source_labels: [ __name__ ]
            regex: 'otelcol_rpc_.*'
            action: drop
          - source_labels: [ __name__ ]
            regex: 'otelcol_http_.*'
            action: drop
          - source_labels: [ __name__ ]
            regex: 'otelcol_processor_batch_.*'
            action: drop
  smartagent/processlist:
    type: processlist
  signalfx:
    endpoint: "${SPLUNK_LISTEN_INTERFACE}:9943"
    # Whether to preserve incoming access token and use instead of exporter token
    # default = false
    #access_token_passthrough: true
  zipkin:
    endpoint: "${SPLUNK_LISTEN_INTERFACE}:9411"
  nop:

processors:
  batch:
    metadata_keys:
      - X-SF-Token
  # Enabling the memory_limiter is strongly recommended for every pipeline.
  # Configuration is based on the amount of memory allocated to the collector.
  # For more information about memory limiter, see
  # https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiter/README.md
  memory_limiter:
    check_interval: 2s
    limit_mib: ${SPLUNK_MEMORY_LIMIT_MIB}

  # Detect if the collector is running on a cloud system, which is important for creating unique cloud provider dimensions.
  # Detector order is important: the `system` detector goes last so it can't preclude cloud detectors from setting host/os info.
  # Resource detection processor is configured to override all host and cloud attributes because instrumentation
  # libraries can send wrong values from container environments.
  # https://docs.splunk.com/Observability/gdi/opentelemetry/components/resourcedetection-processor.html#ordering-considerations
  resourcedetection:
    detectors: [gcp, ecs, ec2, azure, system]
    override: true

  # Optional: The following processor can be used to add a default "deployment.environment" attribute to the logs and
  # traces when it's not populated by instrumentation libraries.
  # If enabled, make sure to enable this processor in a pipeline.
  # For more information, see https://docs.splunk.com/Observability/gdi/opentelemetry/components/resource-processor.html
  #resource/add_environment:
    #attributes:
      #- action: insert
        #value: staging/production/...
        #key: deployment.environment

  # The following processor is used to add "otelcol.service.mode" attribute to the internal metrics
  resource/add_mode:
    attributes:
      - action: insert
        value: "agent"
        key: otelcol.service.mode

exporters:
  # Traces
  otlphttp:
    traces_endpoint: "${SPLUNK_INGEST_URL}/v2/trace/otlp"
    headers:
      "X-SF-Token": "${SPLUNK_ACCESS_TOKEN}"
  # Metrics + Events
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    api_url: "${SPLUNK_API_URL}"
    ingest_url: "${SPLUNK_INGEST_URL}"
    # Use instead when sending to gateway
    #api_url: http://${SPLUNK_GATEWAY_URL}:6060
    #ingest_url: http://${SPLUNK_GATEWAY_URL}:9943
    sync_host_metadata: true
    correlation:
  # Entities (applicable only if discovery mode is enabled)
  otlphttp/entities:
    logs_endpoint: "${SPLUNK_INGEST_URL}/v3/event"
    headers:
      "X-SF-Token": "${SPLUNK_ACCESS_TOKEN}"
  # Logs
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"
    endpoint: "${SPLUNK_HEC_URL}"
    source: "otel"
    sourcetype: "otel"
    profiling_data_enabled: false
  # Profiling
  splunk_hec/profiling:
    token: "${SPLUNK_ACCESS_TOKEN}"
    endpoint: "${SPLUNK_INGEST_URL}/v1/log"
    log_data_enabled: false
  # Send to gateway
  otlp/gateway:
    endpoint: "${SPLUNK_GATEWAY_URL}:4317"
    tls:
      insecure: true
  # Debug
  debug:
    verbosity: detailed

service:
  extensions: [health_check, http_forwarder, zpages, smartagent]
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
      - memory_limiter
      - batch
      - resourcedetection
      #- resource/add_environment
      exporters: [otlphttp, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway, signalfx]
    metrics:
      receivers: [hostmetrics, otlp, signalfx]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway]
    metrics/internal:
      receivers: [prometheus/internal]
      processors: [memory_limiter, batch, resourcedetection, resource/add_mode]
      # When sending to gateway, at least one metrics pipeline needs
      # to use signalfx exporter so host metadata gets emitted
      exporters: [signalfx]
    logs/signalfx:
      receivers: [signalfx, smartagent/processlist]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
    logs/entities:
      # Receivers are dynamically added if discovery mode is enabled
      receivers: [nop]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [otlphttp/entities]
    logs:
      receivers: [fluentforward, otlp]
      processors:
      - memory_limiter
      - batch
      - resourcedetection
      #- resource/add_environment
      exporters: [splunk_hec, splunk_hec/profiling]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway]
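
The resource/add_environment processor ships commented out. As its inline comment notes, enabling it requires both uncommenting the definition and adding it to the pipelines that should carry the attribute. The following is a minimal sketch of that change for the traces pipeline, assuming an example environment value of production:

processors:
  resource/add_environment:
    attributes:
      - action: insert
        key: deployment.environment
        value: production     # assumed example value

service:
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors: [memory_limiter, batch, resourcedetection, resource/add_environment]
      exporters: [otlphttp, signalfx]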

Default pipelines

By default, ingested data follows these pipelines.

Default pipelines for logs

The following diagram shows the default logs pipeline:

Receivers send logs to the memory limiter processor. The memory limiter processor sends logs to the batch processor, and the batch processor sends logs to the resource detection processor. The resource detection processor sends logs to the exporter. The SignalFx logs pipeline (logs/signalfx) follows the same steps, but uses its own receivers, processors, and exporters to send logs.

Learn more about these receivers:

  • Fluent Forward receiver
  • OTLP receiver
  • SignalFx receiver
  • Smart Agent receiver (processlist monitor)

Learn more about these processors:

  • Memory limiter processor
  • Batch processor
  • Resource detection processor

Learn more about these exporters:

  • Splunk HEC exporter
  • SignalFx exporter
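
Put together, the two logs pipelines in the default configuration wire these components as follows (excerpted from the service section of the configuration file shown earlier):

service:
  pipelines:
    logs:
      receivers: [fluentforward, otlp]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [splunk_hec, splunk_hec/profiling]
    logs/signalfx:
      receivers: [signalfx, smartagent/processlist]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]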

Default pipelines for metrics

The following diagram shows the default metrics pipeline:

Receivers send metrics to the memory limiter processor. The memory limiter processor sends metrics to the batch processor, and the batch processor sends metrics to the resource detection processor. The resource detection processor sends metrics to the exporter. The internal metrics pipeline (metrics/internal) follows the same steps, but uses its own receivers, processors, and exporters to send metrics.

Learn more about these receivers:

  • Host metrics receiver
  • OTLP receiver
  • SignalFx receiver
  • Prometheus receiver (internal Collector metrics)

Learn more about these processors:

  • Memory limiter processor
  • Batch processor
  • Resource detection processor
  • Resource processor (resource/add_mode)

Learn more about these exporters:

  • SignalFx exporter
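
The hostmetrics receiver that feeds this pipeline ships with the per-process scraper commented out. If per-process metrics are needed, one option is to uncomment it, as in the following sketch; note that enabling it can noticeably increase metric volume:

receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      disk:
      filesystem:
      memory:
      network:
      load:
      paging:
      processes:
      # Per-process metrics, disabled by default in the shipped configuration
      process: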

Default pipelines for traces

The following diagram shows the default traces pipeline:

Receivers send traces to the memory limiter processor. The memory limiter processor sends traces to the batch processor, and the batch processor sends traces to the resource detection processor. The resource detection processor sends traces to the OTLP/HTTP exporter and the SignalFx exporter.

Learn more about these receivers:

  • Jaeger receiver
  • OTLP receiver
  • Zipkin receiver

Learn more about these processors:

  • Memory limiter processor
  • Batch processor
  • Resource detection processor

Learn more about these exporters:

  • OTLP/HTTP exporter
  • SignalFx exporter
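
As the commented-out lines in the default configuration indicate, when traces are forwarded through a Collector gateway instead of being sent directly to Splunk Observability Cloud, the traces pipeline swaps the otlphttp exporter for otlp/gateway. The following sketch shows that variant, assuming SPLUNK_GATEWAY_URL is set:

service:
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [otlp/gateway, signalfx]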

This page was last updated on Dec 09, 2024.