Part 2: Configure the Collector and Splunk Enterprise instance

Now that you've configured your services using Docker Compose, create the Splunk Distribution of the OpenTelemetry Collector configuration that assembles all the Collector components, and then create the Splunk Enterprise index configuration. For an overview of the tutorial, see Tutorial: Use the Collector to send container logs to Splunk Enterprise.

Configure the Collector

The Collector gathers the container logs and sends them to the Splunk Enterprise service. Follow these steps to configure the Collector:

  1. Create a file in the log-collection directory called otel-collector-config.yml.

  2. In the otel-collector-config.yml file, define the receivers used to collect the logs from the two logging services:

    receivers:
      # Each filelog receiver requires a unique name that follows the slash.
      filelog/output1:
        # The include field specifies the path from which the receiver collects the container logs.
        include: [ /output1/file.log ]
      filelog/output2:
        include: [ /output2/file.log ]
    
  3. After the receivers in the otel-collector-config.yml file, define the processors used to transform the collected log data for use with Splunk Enterprise:

    # ...
    processors:
      # The batch processor helps regulate the data flow from the receivers.
      batch:
      # The transform processor is configured to set the `com.splunk.index` attribute to `index2`
      # for the logs with a `logging2` message, and `index1` for all other logs.
      transform:
        log_statements:
          - context: log
            statements:
              - set(attributes["com.splunk.index"], "index1")
              - set(attributes["com.splunk.index"], "index2") where ParseJSON(body)["message"] == "logging2"
      # The groupbyattrs processor groups the logs by their `com.splunk.index` attribute,
      # which is either `index1` or `index2`.
      groupbyattrs:
        keys:
          - com.splunk.index
    
  4. After the processors in the otel-collector-config.yml file, define the exporter used to send the logs to the Splunk server’s HTTP Event Collector (HEC):

    # ...
    exporters:
      splunk_hec/logs:
        # Splunk HTTP Event Collector token.
        token: "00000000-0000-0000-0000-0000000000000"
        # Splunk instance URL where the exporter sends the log data.
        endpoint: "https://splunk:8088/services/collector"
        tls:
          # Skips checking the certificate of the HEC endpoint when sending data over HTTPS.
          insecure_skip_verify: true
    
  5. After the exporter in the otel-collector-config.yml file, define the service, which consists of a logs pipeline that organizes the flow of logging data through the three component types (the fully assembled file appears after this list):

    # ...
    service:
      pipelines:
        logs:
          receivers: [ filelog/output1, filelog/output2 ]
          processors: [ transform, groupbyattrs, batch ]
          exporters: [ splunk_hec/logs ]
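
For reference, here is the complete otel-collector-config.yml assembled from the previous steps:

    receivers:
      filelog/output1:
        include: [ /output1/file.log ]
      filelog/output2:
        include: [ /output2/file.log ]

    processors:
      batch:
      transform:
        log_statements:
          - context: log
            statements:
              - set(attributes["com.splunk.index"], "index1")
              - set(attributes["com.splunk.index"], "index2") where ParseJSON(body)["message"] == "logging2"
      groupbyattrs:
        keys:
          - com.splunk.index

    exporters:
      splunk_hec/logs:
        token: "00000000-0000-0000-0000-0000000000000"
        endpoint: "https://splunk:8088/services/collector"
        tls:
          insecure_skip_verify: true

    service:
      pipelines:
        logs:
          receivers: [ filelog/output1, filelog/output2 ]
          processors: [ transform, groupbyattrs, batch ]
          exporters: [ splunk_hec/logs ]

Note that the order of the processors entry matters: transform sets the com.splunk.index attribute first, groupbyattrs then groups the log records by that attribute, and batch regulates the flow before the data reaches the exporter.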
    

Configure the Splunk Enterprise indexes

Splunk Enterprise indexes store the data that the Collector sends to the Splunk Enterprise service. Follow these steps to configure the indexes:

  1. Create a file in the log-collection directory called splunk.yml.

  2. In the splunk.yml file, define the index1 and index2 indexes (a sketch of the generated indexes.conf appears after this list):

    splunk:
      conf:
        indexes:
          directory: /opt/splunk/etc/apps/search/local
          content:
            index1:
              coldPath: $SPLUNK_DB/index1/colddb
              datatype: event
              homePath: $SPLUNK_DB/index1/db
              maxTotalDataSizeMB: 512000
              thawedPath: $SPLUNK_DB/index1/thaweddb
            index2:
              coldPath: $SPLUNK_DB/index2/colddb
              datatype: event
              homePath: $SPLUNK_DB/index2/db
              maxTotalDataSizeMB: 512000
              thawedPath: $SPLUNK_DB/index2/thaweddb
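
When the Splunk Enterprise container starts, it renders this YAML into a standard indexes.conf file in the location given by the directory field. As a rough sketch (the exact generated file may differ), the result is equivalent to:

    # /opt/splunk/etc/apps/search/local/indexes.conf (sketch of the generated file)
    [index1]
    coldPath = $SPLUNK_DB/index1/colddb
    datatype = event
    homePath = $SPLUNK_DB/index1/db
    maxTotalDataSizeMB = 512000
    thawedPath = $SPLUNK_DB/index1/thaweddb

    [index2]
    coldPath = $SPLUNK_DB/index2/colddb
    datatype = event
    homePath = $SPLUNK_DB/index2/db
    maxTotalDataSizeMB = 512000
    thawedPath = $SPLUNK_DB/index2/thaweddb

These index names match the values the transform processor assigns through the com.splunk.index attribute, so the splunk_hec exporter can route each log record to the correct index.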
    

Next step

You’ve now defined the components for collecting, processing, and exporting the container logs using the Collector, and defined the Splunk Enterprise indexes for storing the logs. Next, deploy the services using Docker Compose and verify that everything works as expected. To continue, see Part 3: Deploy and verify the environment.
