Set up Log Observer πŸ”—

Note

Customers with a Splunk Log Observer entitlement in Splunk Observability Cloud must transition from Log Observer to Log Observer Connect by December 2023. With Log Observer Connect, you can ingest more logs from a wider variety of data sources, enjoy a more advanced logs pipeline, and expand into security logging. See Splunk Log Observer transition to learn how.

Complete the instructions on this page if you have a Log Observer entitlement in Observability Cloud. If you don’t have a Log Observer entitlement in Observability Cloud, see Introduction to Splunk Log Observer Connect to set up the integration and begin using Log Observer to query your Splunk platform logs.

By default, Log Observer indexes and stores all logs data that you send to Observability Cloud unless you choose to archive some of your logs data in Amazon S3 buckets. See Archive your logs with infinite logging rules to learn how to archive logs until you want to index and analyze them in Log Observer. If you use Log Observer Connect, your logs data remains in your Splunk platform instance and is never stored in Log Observer or Observability Cloud.

What type of data is supported? πŸ”—

Splunk Log Observer supports unstructured log data at ingest.

Prerequisites πŸ”—

Before setting up Log Observer, you must meet the following criteria:

  • Your Observability Cloud organization must be provisioned with an entitlement for Log Observer.

  • You must be an administrator in an Observability Cloud organization to set up integrations.

Start using Log Observer πŸ”—

You can use Observability Cloud guided setups to send logs to Log Observer from your hosts, containers, and cloud providers. Use the Splunk Distribution of OpenTelemetry Collector to capture logs from your resources and applications. Decide whether you want to collect logs from all of your data sources, from only one, or from any combination of them. The more complete your log collection in Log Observer, the more effectively you can troubleshoot your entire environment using logs. You can complete step 1, step 2, or both in the following list, depending on which logs you want to see.

To start using Log Observer, complete the following tasks:

  1. Collect logs from your hosts and containers

  2. Collect logs from your cloud providers

  3. Filter and aggregate your data in Log Observer

  4. Ensure the severity key is correctly mapped

Collect logs from your hosts and containers πŸ”—

To send logs from your hosts and containers to Log Observer, follow these instructions:

  1. Log in to Splunk Observability Cloud.

  2. In the left navigation menu, select Data Management to open the Integrate Your Data page.

  3. On the Integrate Your Data page in Observability Cloud, select the tile for the platform you want to import logs from. You can select Windows, Kubernetes, or Linux. The guided setup for your platform appears.

  4. Follow the instructions in the guided setup, then see Filter and aggregate your data in Log Observer.

After you see data coming into Log Observer from your data source, you can send logs from another data source or continue analyzing logs from the platform you have just set up.
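
The guided setup generates the collector configuration for you, so you don't need to write it by hand. For orientation only, the following is a minimal sketch of a Splunk Distribution of OpenTelemetry Collector logs pipeline that tails a local log file and forwards it to Log Observer. The file path, realm, and access token are placeholders, and the configuration that the guided setup produces for your environment will differ:

# Minimal sketch of a Collector logs pipeline (placeholder values only)
receivers:
  filelog:
    include:
      - /var/log/myapp/*.log        # hypothetical application log path

exporters:
  splunk_hec:
    token: "<SIGNALFX_TOKEN>"        # your access token
    endpoint: "https://ingest.<SIGNALFX_REALM>.signalfx.com/v1/log"

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [splunk_hec]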

Collect logs from your cloud providers πŸ”—

Amazon Web Services πŸ”—

To send logs from Amazon Web Services to Log Observer, follow these instructions:

  1. Log in to Splunk Observability Cloud.

  2. In the left navigation menu, select Data Management to display the Integrate Your Data page.

  3. Select Add Integration.

  4. In the Cloud Integrations section, select the Amazon Web Services tile.

  5. Follow the instructions in the guided setup, then see Filter and aggregate your data in Log Observer.

For more information about setting up an AWS connection, see Send AWS logs to Splunk Platform and Available CloudFormation templates.

Google Cloud Platform πŸ”—

To send logs from Google Cloud Platform to Log Observer, follow the instructions in Send GCP logs to Splunk Platform, then see Filter and aggregate your data in Log Observer.

Microsoft Azure πŸ”—

To send logs from Microsoft Azure to Log Observer, follow the instructions in Send Azure logs to Splunk Platform, then see Filter and aggregate your data in Log Observer.

After you see data coming into Log Observer from your data source, you can send logs from another data source or continue analyzing logs from the cloud provider you have just set up.

Note

If you already have Fluentd or Fluent Bit deployments, you can configure them to send logs to Log Observer. However, note that the following limitations apply when you use Fluentd or Fluent Bit:

  • Logs captured by your own Fluentd or Fluent Bit agents do not include the resource metadata that automatically links log data to other related sources available within APM and Infrastructure Monitoring.

  • Although there are multiple ways to send log data to Log Observer, Splunk provides direct support only for the Splunk Distribution of OpenTelemetry Collector.

If you still want to use Fluentd to send logs to Log Observer, see Configure Fluentd to send logs.

Filter and aggregate your data in Log Observer πŸ”—

After you have collected some logs, use filters and aggregation to efficiently navigate your logs in Log Observer. You can verify that Log Observer is correctly processing and indexing your logs by filtering and aggregating your log data.

You can use the Log Observer interface to filter your logs based on keywords or fields. To filter your data, follow these steps:

  1. Select Add Filter.

  2. To find logs containing a keyword, select the Keyword tab and enter a keyword.

  3. To find logs containing a specific field, select the Fields tab and enter a field name in Find a field, then select the field from the list. Optionally, enter a value for the selected field.

  4. To display only results that include the keywords, fields, or field values you entered, select the equal sign (=) next to the appropriate entry. To display only results that exclude the keywords, fields, or field values you entered, select the not equal sign (!=) next to the appropriate entry.

The resulting logs appear in the Logs table. You can add more filters, enable and disable existing filters, and select individual logs to learn more.

Perform aggregations on logs to visualize problems in a histogram that shows averages, sums, and other statistics related to your logs. Aggregations group related data by one field and then perform statistical calculations on other fields. Find the aggregation controls in the control bar at the top of the Log Observer UI. The default aggregation shows all logs grouped by severity.

See Group logs by fields using log aggregation to learn how to perform more aggregations.

Ensure the severity key is correctly mapped πŸ”—

The severity key is a field that all logs contain. It has the values DEBUG, ERROR, INFO, UNKNOWN, and WARNING. Because the severity field in many logs is called level, Log Observer automatically remaps the log field level to severity.

If your logs call the severity key by a different name, that’s okay. To ensure that Log Observer can read your field, transform your field name to severity using a Field Copy Processor. See Field copy processors to learn how.
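
For example, suppose an application emits the following JSON log line (a hypothetical event). Because Log Observer remaps the level field to severity, the event surfaces with severity ERROR:

{"level": "error", "message": "connection to database timed out", "service": "checkout"}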

Configure Fluentd to send logs πŸ”—

If you already have Fluentd running in your environment, you can reconfigure it to send logs to an additional output. To send logs to Splunk Observability Cloud in addition to your current system, follow these steps:

  1. Make sure that you have the HEC plugin for Fluentd installed.

    Option A
    Install the plugin and rebuild Fluentd by following the instructions in fluent-plugin-splunk-hec.
    Option B
    Use an existing Fluentd Docker image that includes the HEC plugin. To get this image, enter
    docker pull splunk/fluentd-hec.
  2. Add HEC output. Change your Fluentd configuration by adding another output section. The new HEC output section points to the Splunk Observability Cloud ingest endpoint.

    For example, if you have one output to elasticsearch, follow these steps:

    • Change @type from elasticsearch to copy in the match section.

    • Put the elasticsearch output into a <store> block.

    • Add another <store> block for the HEC output.

    The following is a sample of the existing elasticsearch output:

    <match **>
       @type elasticsearch
       ...
       <buffer>
       ...
       </buffer>
    </match>
    
  3. Change the elasticsearch output to the following:

    <match **>
       @type copy
       <store>
         @type elasticsearch
         ...
         <buffer>
         ...
         </buffer>
       </store>
       <store>
         @type splunk_hec
         hec_host "ingest.<SIGNALFX_REALM>.signalfx.com"
         hec_port 443
         hec_token "<SIGNALFX_TOKEN>"
         ...
         <buffer>
         ...
         </buffer>
       </store>
    </match>
    
  4. In the new <store> section for splunk_hec, provide at least the following fields:

    • hec_host - Set to the HEC ingest host for your realm, for example, ingest.us1.signalfx.com.

    • hec_port - Set to 443.

    • hec_token - Provide your SignalFx access token.

  5. Specify the following parameters:

    • sourcetype_key or sourcetype - Defines the source type of logs by using a particular log field (sourcetype_key) or a static value (sourcetype).

    • source_key or source - Defines the source of logs by using a particular log field (source_key) or a static value (source).

    A combined example that sets these parameters appears at the end of this section.

  6. Set up a buffer configuration for the HEC output. The following is an example using a memory buffer:

    <buffer>
      @type memory
      chunk_limit_records 100000
      chunk_limit_size 200k
      flush_interval 2s
      flush_thread_count 1
      overflow_action block
      retry_max_times 10
      total_limit_size 600m
    </buffer>
    

For more details on buffer configuration, see About buffer.

See HEC exporter documentation to learn about other optional fields.
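
Putting steps 4 through 6 together, a complete splunk_hec <store> block might look like the following sketch. The realm and token placeholders follow the earlier example, and the static source and sourcetype values are hypothetical; adjust them for your environment or use source_key and sourcetype_key to read them from log fields:

<store>
  @type splunk_hec
  hec_host "ingest.<SIGNALFX_REALM>.signalfx.com"
  hec_port 443
  hec_token "<SIGNALFX_TOKEN>"
  # Static values shown for illustration; use source_key or sourcetype_key
  # to take these from a log field instead.
  source "fluentd"
  sourcetype "app_logs"
  <buffer>
    @type memory
    chunk_limit_records 100000
    chunk_limit_size 200k
    flush_interval 2s
    flush_thread_count 1
    overflow_action block
    retry_max_times 10
    total_limit_size 600m
  </buffer>
</store>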