
Default configuration of the Collector for Linux

The Splunk Distribution of the OpenTelemetry Collector includes the following components and services:

  • Receivers: Determine how data gets into the Collector.

  • Processors: Configure which operations, such as filtering, are performed on data before it's exported.

  • Exporters: Set where the data is sent. You can specify one or more backends or destinations.

  • Extensions: Extend the capabilities of the Collector.

  • Services: Consist of two elements:

    • A list of the extensions you've configured.

    • Pipelines: The path data follows from reception, through processing or modification, to finally exiting through exporters.

For more information, see Collector components.

The Collector configuration is stored in a YAML file that specifies the behavior of the different components and services. See the following sections for an overview of the elements and pipelines of the default configuration.
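
For orientation only, the following minimal sketch (not the shipped default, which is shown in full below) illustrates how these elements fit together in YAML: components are declared in their own top-level sections, and a component only takes effect once the service section references it in a pipeline or in the extensions list. The endpoint and realm values here are assumptions for illustration.

# Hypothetical minimal example: declare components, then wire them together
# under service.pipelines. A component that is declared but never referenced
# in a pipeline (or in service.extensions) is not used.
extensions:
  health_check:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"   # assumed listen address, for illustration

processors:
  batch:

exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: us0                     # assumption: replace with your own realm

service:
  extensions: [health_check]
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [signalfx]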

To learn how to configure the Collector, see Tutorial: Configure the Splunk Distribution of OpenTelemetry Collector on a Linux host.

Default configuration

This is the default configuration file for the Linux (Debian/RPM) installer Collector package:

# Default configuration file for the Linux (deb/rpm) and Windows MSI collector packages

# If the collector is installed without the Linux/Windows installer script, the following
# environment variables are required to be manually defined or configured below:
# - SPLUNK_ACCESS_TOKEN: The Splunk access token to authenticate requests
# - SPLUNK_API_URL: The Splunk API URL, e.g. https://api.us0.signalfx.com
# - SPLUNK_BUNDLE_DIR: The path to the Smart Agent bundle, e.g. /usr/lib/splunk-otel-collector/agent-bundle
# - SPLUNK_COLLECTD_DIR: The path to the collectd config directory for the Smart Agent, e.g. /usr/lib/splunk-otel-collector/agent-bundle/run/collectd
# - SPLUNK_HEC_TOKEN: The Splunk HEC authentication token
# - SPLUNK_HEC_URL: The Splunk HEC endpoint URL, e.g. https://ingest.us0.signalfx.com/v1/log
# - SPLUNK_INGEST_URL: The Splunk ingest URL, e.g. https://ingest.us0.signalfx.com
# - SPLUNK_LISTEN_INTERFACE: The network interface the agent receivers listen on.
# - SPLUNK_TRACE_URL: The Splunk trace endpoint URL, e.g. https://ingest.us0.signalfx.com/v2/trace

extensions:
  health_check:
    endpoint: "${SPLUNK_LISTEN_INTERFACE}:13133"
  http_forwarder:
    ingress:
      endpoint: "${SPLUNK_LISTEN_INTERFACE}:6060"
    egress:
      endpoint: "${SPLUNK_API_URL}"
      # Use instead when sending to gateway
      #endpoint: "${SPLUNK_GATEWAY_URL}"
  smartagent:
    bundleDir: "${SPLUNK_BUNDLE_DIR}"
    collectd:
      configDir: "${SPLUNK_COLLECTD_DIR}"
  zpages:
    #endpoint: "${SPLUNK_LISTEN_INTERFACE}:55679"

receivers:
  fluentforward:
    endpoint: "${SPLUNK_LISTEN_INTERFACE}:8006"
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      disk:
      filesystem:
      memory:
      network:
      # System load average metrics https://en.wikipedia.org/wiki/Load_(computing)
      load:
      # Paging/Swap space utilization and I/O metrics
      paging:
      # Aggregated system process count metrics
      processes:
      # System processes metrics, disabled by default
      # process:
  jaeger:
    protocols:
      grpc:
        endpoint: "${SPLUNK_LISTEN_INTERFACE}:14250"
      thrift_binary:
        endpoint: "${SPLUNK_LISTEN_INTERFACE}:6832"
      thrift_compact:
        endpoint: "${SPLUNK_LISTEN_INTERFACE}:6831"
      thrift_http:
        endpoint: "${SPLUNK_LISTEN_INTERFACE}:14268"
  otlp:
    protocols:
      grpc:
        endpoint: "${SPLUNK_LISTEN_INTERFACE}:4317"
      http:
        endpoint: "${SPLUNK_LISTEN_INTERFACE}:4318"
  # This section is used to collect the OpenTelemetry Collector metrics
  # Even if just a Splunk APM customer, these metrics are included
  prometheus/internal:
    config:
      scrape_configs:
      - job_name: 'otel-collector'
        scrape_interval: 10s
        static_configs:
        - targets: ["${SPLUNK_LISTEN_INTERFACE}:8888"]
        metric_relabel_configs:
          - source_labels: [ __name__ ]
            regex: 'otelcol_rpc_.*'
            action: drop
          - source_labels: [ __name__ ]
            regex: 'otelcol_http_.*'
            action: drop
          - source_labels: [ __name__ ]
            regex: 'otelcol_processor_batch_.*'
            action: drop
  smartagent/processlist:
    type: processlist
  signalfx:
    endpoint: "${SPLUNK_LISTEN_INTERFACE}:9943"
    # Whether to preserve incoming access token and use instead of exporter token
    # default = false
    #access_token_passthrough: true
  zipkin:
    endpoint: "${SPLUNK_LISTEN_INTERFACE}:9411"

processors:
  batch:
  # Enabling the memory_limiter is strongly recommended for every pipeline.
  # Configuration is based on the amount of memory allocated to the collector.
  # For more information about memory limiter, see
  # https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/memorylimiter/README.md
  memory_limiter:
    check_interval: 2s
    limit_mib: ${SPLUNK_MEMORY_LIMIT_MIB}

  # Detect if the collector is running on a cloud system, which is important for creating unique cloud provider dimensions.
  # Detector order is important: the `system` detector goes last so it can't preclude cloud detectors from setting host/os info.
  # Resource detection processor is configured to override all host and cloud attributes because instrumentation
  # libraries can send wrong values from container environments.
  # https://docs.splunk.com/Observability/gdi/opentelemetry/components/resourcedetection-processor.html#ordering-considerations
  resourcedetection:
    detectors: [gcp, ecs, ec2, azure, system]
    override: true

  # Optional: The following processor can be used to add a default "deployment.environment" attribute to the logs and
  # traces when it's not populated by instrumentation libraries.
  # If enabled, make sure to enable this processor in a pipeline.
  # For more information, see https://docs.splunk.com/Observability/gdi/opentelemetry/components/resource-processor.html
  #resource/add_environment:
    #attributes:
      #- action: insert
        #value: staging/production/...
        #key: deployment.environment

exporters:
  # Traces
  sapm:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    endpoint: "${SPLUNK_TRACE_URL}"
  # Metrics + Events
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    api_url: "${SPLUNK_API_URL}"
    ingest_url: "${SPLUNK_INGEST_URL}"
    # Use instead when sending to gateway
    #api_url: http://${SPLUNK_GATEWAY_URL}:6060
    #ingest_url: http://${SPLUNK_GATEWAY_URL}:9943
    sync_host_metadata: true
    correlation:
  # Logs
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"
    endpoint: "${SPLUNK_HEC_URL}"
    source: "otel"
    sourcetype: "otel"
    profiling_data_enabled: false
  # Profiling
  splunk_hec/profiling:
    token: "${SPLUNK_ACCESS_TOKEN}"
    endpoint: "${SPLUNK_INGEST_URL}/v1/log"
    log_data_enabled: false
  # Send to gateway
  otlp:
    endpoint: "${SPLUNK_GATEWAY_URL}:4317"
    tls:
      insecure: true
  # Debug
  debug:
    verbosity: detailed

service:
  telemetry:
    metrics:
      address: "${SPLUNK_LISTEN_INTERFACE}:8888"
  extensions: [health_check, http_forwarder, zpages, smartagent]
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
      - memory_limiter
      - batch
      - resourcedetection
      #- resource/add_environment
      exporters: [sapm, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp, signalfx]
    metrics:
      receivers: [hostmetrics, otlp, signalfx]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp]
    metrics/internal:
      receivers: [prometheus/internal]
      processors: [memory_limiter, batch, resourcedetection]
      # When sending to gateway, at least one metrics pipeline needs
      # to use signalfx exporter so host metadata gets emitted
      exporters: [signalfx]
    logs/signalfx:
      receivers: [signalfx, smartagent/processlist]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
    logs:
      receivers: [fluentforward, otlp]
      processors:
      - memory_limiter
      - batch
      - resourcedetection
      #- resource/add_environment
      exporters: [splunk_hec, splunk_hec/profiling]
      # Use instead when sending to gateway
      #exporters: [otlp]
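
As an example of customizing this file, you can enable the optional resource/add_environment processor that is commented out above to stamp a default deployment.environment attribute on traces and logs. The following sketch shows only the affected portions of the file, assuming a hypothetical environment named staging; uncomment or add the processor definition and reference it in the traces and logs pipelines.

processors:
  resource/add_environment:
    attributes:
      - action: insert
        key: deployment.environment
        value: staging              # assumption: use your own environment name

service:
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors: [memory_limiter, batch, resourcedetection, resource/add_environment]
      exporters: [sapm, signalfx]
    logs:
      receivers: [fluentforward, otlp]
      processors: [memory_limiter, batch, resourcedetection, resource/add_environment]
      exporters: [splunk_hec, splunk_hec/profiling]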

Default pipelines

By default, ingested data follows these pipelines.

Default pipeline for logs

The following diagram shows the default logs pipeline:

flowchart LR
    accTitle: Default logs pipeline diagram
    accDescr: Receivers send logs to the logs/memory_limiter processor. The logs/memory_limiter processor sends logs to the batch processor, and the batch processor sends logs to the resource detection processor. The resource detection processor sends logs to the exporter. The SignalFx logs pipeline follows the same steps, but uses internal receivers, processors, and exporters to send logs.

    %% LR indicates the direction (left-to-right)
    %% You can define classes to style nodes and other elements
    classDef receiver fill:#00FF00
    classDef processor fill:#FF9900
    classDef exporter fill:#FF33FF

    %% Each subgraph determines what's in each category
    subgraph Receivers
        direction LR
        logs/signalfx/signalfx/in:::receiver
        logs/signalfx/smartagent/processlist:::receiver
        logs/fluentforward:::receiver
        logs/otlp:::receiver
    end
    subgraph Processor
        direction LR
        logs/signalfx/memory_limiter:::processor --> logs/signalfx/batch:::processor --> logs/signalfx/resourcedetection:::processor
        logs/memory_limiter:::processor --> logs/batch:::processor --> logs/resourcedetection:::processor
    end
    subgraph Exporters
        direction LR
        logs/signalfx/signalfx/out:::exporter
        logs/splunk_hec:::exporter
    end

    %% Connections beyond categories are added later
    logs/signalfx/signalfx/in --> logs/signalfx/memory_limiter
    logs/signalfx/resourcedetection --> logs/signalfx/signalfx/out
    logs/signalfx/smartagent/processlist --> logs/signalfx/memory_limiter
    logs/fluentforward --> logs/memory_limiter
    logs/resourcedetection --> logs/splunk_hec
    logs/otlp --> logs/memory_limiter

Learn more about these receivers:

Learn more about these processors:

Learn more about these exporters:

Default pipeline for metrics

The following diagram shows the default metrics pipeline:

flowchart LR
    accTitle: Default metrics pipeline diagram
    accDescr: Receivers send metrics to the metrics/memory_limiter processor. The metrics/memory_limiter processor sends metrics to the batch processor, and the batch processor sends metrics to the resource detection processor. The resource detection processor sends metrics to the exporter. The internal metrics pipeline follows the same steps, but uses internal receivers, processors, and exporters to send metrics.

    %% LR indicates the direction (left-to-right)
    %% You can define classes to style nodes and other elements
    classDef receiver fill:#00FF00
    classDef processor fill:#FF9900
    classDef exporter fill:#FF33FF

    %% Each subgraph determines what's in each category
    subgraph Receivers
        direction LR
        metrics/hostmetrics:::receiver
        metrics/otlp:::receiver
        metrics/signalfx/in:::receiver
        metrics/internal/prometheus/internal:::receiver
    end
    subgraph Processor
        direction LR
        metrics/memory_limiter:::processor --> metrics/batch:::processor --> metrics/resourcedetection:::processor
        metrics/internal/memory_limiter:::processor --> metrics/internal/batch:::processor --> metrics/internal/resourcedetection:::processor
    end
    subgraph Exporters
        direction LR
        metrics/signalfx/out:::exporter
        metrics/internal/signalfx/out:::exporter
    end

    %% Connections beyond categories are added later
    metrics/hostmetrics --> metrics/memory_limiter
    metrics/resourcedetection --> metrics/signalfx/out
    metrics/otlp --> metrics/memory_limiter
    metrics/signalfx/in --> metrics/memory_limiter
    metrics/internal/prometheus/internal --> metrics/internal/memory_limiter
    metrics/internal/resourcedetection --> metrics/internal/signalfx/out

Learn more about these receivers:

Learn more about these processors:

Learn more about these exporters:

Default pipeline for traces

The following diagram shows the default traces pipeline:

flowchart LR
    accTitle: Default traces pipeline diagram
    accDescr: Receivers send traces to the traces/memory_limiter processor. The traces/memory_limiter processor sends traces to the batch processor, and the batch processor sends traces to the resource detection processor. The resource detection processor sends traces to the Splunk APM exporter and the SignalFx exporter.

    %% LR indicates the direction (left-to-right)
    %% You can define classes to style nodes and other elements
    classDef receiver fill:#00FF00
    classDef processor fill:#FF9900
    classDef exporter fill:#FF33FF

    %% Each subgraph determines what's in each category
    subgraph Receivers
        direction LR
        traces/jaeger:::receiver
        traces/otlp:::receiver
        traces/zipkin:::receiver
    end
    subgraph Processor
        direction LR
        traces/memory_limiter:::processor --> traces/batch:::processor --> traces/resourcedetection:::processor
    end
    subgraph Exporters
        direction LR
        traces/sapm:::exporter
        traces/signalfx/out:::exporter
    end

    %% Connections beyond categories are added later
    traces/jaeger --> traces/memory_limiter
    traces/otlp --> traces/memory_limiter
    traces/zipkin --> traces/memory_limiter
    traces/resourcedetection --> traces/sapm
    traces/resourcedetection --> traces/signalfx/out

Learn more about these receivers:

Learn more about these processors:

Learn more about these exporters:

Learn more

See also the following documents:

This page was last updated on February 27, 2024.