Set up Splunk Log Observer

What type of data is supported?

Splunk Log Observer supports unstructured log data at ingest.

Prerequisites

Before setting up Log Observer, you must meet the following criteria:

  • Your Observability Cloud organization must be provisioned with an entitlement for Log Observer.

  • You must be an administrator in an Observability Cloud organization to set up integrations.

Start using Log Observer

You can use Observability Cloud integration wizards to send logs to Log Observer from your hosts, containers, and cloud providers. Use the Splunk Distribution of OpenTelemetry Collector at https://github.com/signalfx/splunk-otel-collector to capture logs from your resources and applications. Decide whether you want to see logs from every data source, only one, or any combination of data sources. The more complete your log collection in Log Observer, the more effectively you can troubleshoot your entire environment using logs. You can complete task 1, task 2, or both in the list below, depending on which logs you want to see.
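
For orientation, the following is a minimal sketch of a collector configuration that tails a log file and forwards it to Log Observer. It assumes the filelog receiver and the splunk_hec exporter available in the Splunk distribution; the file path, realm, and token are placeholders, and the configuration that the integration wizard generates for your platform is authoritative.

    receivers:
      filelog:
        include: [ /var/log/myapp/*.log ]   # placeholder path to tail

    exporters:
      splunk_hec:
        token: "<SIGNALFX_TOKEN>"
        endpoint: "https://ingest.<SIGNALFX_REALM>.signalfx.com/v1/log"

    service:
      pipelines:
        logs:
          receivers: [filelog]
          exporters: [splunk_hec]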

To start using Log Observer, complete the following tasks:

  1. Collect logs from your hosts and containers

  2. Collect logs from your cloud providers

  3. Filter and aggregate your data in Log Observer

  4. Ensure the severity key is correctly mapped

Collect logs from your hosts and containers

To send logs from your hosts and containers to Log Observer, follow these instructions:

  1. In the Observability Cloud main menu, click Data Setup.

  2. In the CATEGORIES menu, select Platforms to display only platform-related data setup options. Select the platform you want to import logs from:

    • Windows

    • Kubernetes

    • Linux

  3. Follow the instructions in the integration wizard, then see Filter and aggregate your data in Log Observer.

After you see data coming into Log Observer from your data source, you can send logs from another data source or continue analyzing logs from the platform you have just set up.
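
As an illustration of what the wizard produces for Kubernetes, a Helm-based install of the Splunk OpenTelemetry Collector chart looks roughly like the following. The chart location is real, but value names change across chart versions, so treat this as a sketch and copy the exact command from the wizard.

    helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
    helm install splunk-otel-collector \
      --set="splunkObservability.realm=<SIGNALFX_REALM>" \
      --set="splunkObservability.accessToken=<SIGNALFX_TOKEN>" \
      --set="clusterName=<CLUSTER_NAME>" \
      splunk-otel-collector-chart/splunk-otel-collector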

Collect logs from your cloud providers

To send logs from your cloud providers to Log Observer, follow these instructions:

  1. In the Observability Cloud main menu, click Data Setup.

  2. At the top of the CATEGORIES menu, select one of the following cloud providers from the FEATURED category:

    • Amazon Web Services

    • Google Cloud Platform

    • Microsoft Azure

  3. Follow the instructions in the integration wizard, then see Filter and aggregate your data in Log Observer.

After you see data coming into Log Observer from your data source, you can send logs from another data source or continue analyzing logs from the cloud provider you have just set up.

Note

If you have existing Fluentd or Fluent Bit deployments, you can configure them to send logs to Log Observer. However, note the following when using Fluentd or Fluent Bit:

  • Logs captured by your own Fluentd or Fluent Bit agents do not include the resource metadata that automatically links log data to other related sources in APM and Infrastructure Monitoring.

  • Although there are multiple ways to send log data to Log Observer, Splunk provides direct support only for the Splunk Distribution of OpenTelemetry Collector.

If you still want to use Fluentd to send logs to Log Observer, see Configure Fluentd to send logs.

Filter and aggregate your data in Log Observer

After you have collected some logs, use filters and aggregation to navigate your logs efficiently in Log Observer. Filtering and aggregating your log data also verifies that Log Observer is correctly processing and indexing your logs.

You can use the Log Observer interface to filter your logs based on keywords or fields. To filter your data, follow these steps:

  1. Click Add Filter.

  2. To find logs containing a keyword, click the Keyword tab and enter a keyword.

  3. To find logs containing a specific field, click the Fields tab, enter a field name in Find a field, then select the field from the list. Optionally, enter a value for the selected field.

  4. To display only results that include the keywords, fields, or field values you entered, click the equal sign (=) next to the appropriate entry. To display only results that exclude the keywords, fields, or field values you entered, click the not equal sign (!=) next to the appropriate entry.

The resulting logs appear in the Raw Logs table. You can add more filters, enable and disable existing filters, and click individual logs to learn more.

Perform aggregations on logs to visualize problems in a histogram that shows averages, sums, and other statistics related to your logs. Aggregations group related data by one field and then perform statistical calculations on other fields. Find the aggregation controls in the control bar at the top of the Log Observer UI. The default aggregation shows all logs grouped by severity.

See Identify problem areas using log aggregation to learn how to perform more aggregations.

Ensure the severity key is correctly mapped

The severity key is a field that all logs contain. It has the values Debug, Error, Info, Unknown, and Warning. Because the severity field in many logs is called level, Log Observer automatically remaps the log field level to severity.

If your logs call the severity key by a different name, that’s okay. To ensure that Log Observer can read your field, transform your field name to severity using a Field Copy Processor. See Field Copy Processors to learn how.
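
If you collect logs with Fluentd, an alternative is to copy the field at the source with a record_transformer filter before the logs are sent. The following is a minimal sketch, not the Field Copy Processor itself; loglevel is a hypothetical field name, so substitute the name your logs actually use.

    <filter **>
      @type record_transformer
      <record>
        # Copy the custom field into the severity key that Log Observer expects
        severity ${record["loglevel"]}
      </record>
    </filter>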

Configure Fluentd to send logs

If you already have Fluentd running in your environment, you can reconfigure it to send logs to an additional output. To send logs to Splunk Observability Cloud in addition to your current system, follow these steps:

  1. Make sure that you have the HEC plugin for Fluentd installed.

    Option A
    Install the plugin and rebuild Fluentd using the instructions in fluent-plugin-splunk-hec.

    Option B
    Use an existing Fluentd Docker image with the HEC plugin included. To get this image, enter
    docker pull splunk/fluentd-hec
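
    For Option A, the plugin is distributed as a Ruby gem. Installing it typically looks like the following sketch; the exact steps depend on how your Fluentd is packaged, so follow fluent-plugin-splunk-hec for authoritative instructions.

        fluent-gem install fluent-plugin-splunk-hec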
  2. Add HEC output. Change your Fluentd configuration by adding another output section. The new HEC output section points to the Splunk Observability Cloud ingest endpoint.

    For example, if you have one output to elasticsearch, follow these steps:

    • Change @type from elasticsearch to copy in the match section.

    • Move the elasticsearch output into a <store> block.

    • Add another <store> block for HEC output.

    The following is a sample elasticsearch output:

    <match **>
       @type elasticsearch
       ...
       <buffer>
       ...
       </buffer>
    </match>
    
  3. Change the elasticsearch output to the following:

    <match **>
       @type copy
       <store>
         @type elasticsearch
         ...
         <buffer>
         ...
         </buffer>
       </store>
       <store>
         @type splunk_hec
         hec_host "ingest.<SIGNALFX_REALM>.signalfx.com"
         hec_port 443
         hec_token "<SIGNALFX_TOKEN>"
         ...
         <buffer>
         ...
         </buffer>
       </store>
    </match>
    
  4. In the new <store> section for splunk_hec, provide at least the following fields:

    • hec_host - Set to the HEC ingest host, for example, ingest.us1.signalfx.com.

    • hec_port - Set to 443.

    • hec_token - Provide your SignalFx access token.

  5. Specify the following parameters, as in the sketch after this list:

    • sourcetype_key or sourcetype - Defines the source type of the logs by using a particular log field or a static value

    • source_key or source - Defines the source of the logs by using a particular log field or a static value
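
    For example, a splunk_hec <store> section might set a static source type and take the source from a log field. The value my_app and the field name file_path are placeholders:

    <store>
      @type splunk_hec
      ...
      # Static source type for all events
      sourcetype "my_app"
      # Read the source from the (hypothetical) file_path log field
      source_key "file_path"
      ...
    </store>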

  6. Set up a buffer configuration for the HEC output. The following is an example using a memory buffer:

    <buffer>
      @type memory
      chunk_limit_records 100000
      chunk_limit_size 200k
      flush_interval 2s
      flush_thread_count 1
      overflow_action block
      retry_max_times 10
      total_limit_size 600m
    </buffer>
    

For more details on buffer configuration, see About buffer.

See the HEC exporter documentation to learn about other optional fields.