Splunk® Data Stream Processor

Install and administer the Data Stream Processor

On April 3, 2023, Splunk Data Stream Processor reached its end of sale, and will reach its end of life on February 28, 2025. If you are an existing DSP customer, please reach out to your account team for more information.

All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator, which has been announced end-of-life. We have replaced Gravity with an alternative component in DSP 1.4.0. Therefore, we will no longer provide support for versions of DSP prior to DSP 1.4.0 after July 1, 2023. We advise all of our customers to upgrade to DSP 1.4.0 in order to continue to receive full product support from Splunk.

Configure your Splunk environment to monitor DSP metrics

To use metrics to analyze the health of your DSP deployment with Splunk software, you must configure DSP to send metrics data to the Splunk platform using the HTTP Event Collector (HEC), and you must configure your Splunk environment to properly receive the metrics data.

Configure Splunk Enterprise or Splunk Cloud to receive DSP metrics

You must configure your Splunk environment to properly receive the metrics data from your DSP deployment. The default index for the DSP metrics data is _dsp_metrics. It is best practice for Splunk Enterprise to use the default index, but depending on your needs and local configuration, you can define a custom index in the indexes.conf file. If you are using Splunk Cloud, you must define a custom index.

See Create custom indexes for information about creating custom indexes in Splunk Enterprise. See Manage Splunk Cloud Platform indexes for information about creating indexes in Splunk Cloud.

If you define a custom index, you must edit the macros.conf file in the Splunk App for DSP and update the definition for DSP metrics index in the following stanza.

[dsp_metrics_index]
definition = index=_dsp_metrics
iseval = 0

The index set in macros.conf must match the target index that you define in your Splunk Enterprise or Splunk Cloud configuration.
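For example, if you created a custom metrics index named dsp_custom_metrics (a hypothetical name used here for illustration), the indexes.conf stanza on your indexers and the updated macros.conf stanza in the Splunk App for DSP might look like the following sketch:

```ini
# indexes.conf (on the indexer) -- hypothetical custom metrics index
[dsp_custom_metrics]
datatype = metric
homePath   = $SPLUNK_DB/dsp_custom_metrics/db
coldPath   = $SPLUNK_DB/dsp_custom_metrics/colddb
thawedPath = $SPLUNK_DB/dsp_custom_metrics/thaweddb

# macros.conf (in the Splunk App for DSP) -- point the macro at the same index
[dsp_metrics_index]
definition = index=dsp_custom_metrics
iseval = 0
```

The key point is that the index name in the macro definition and the stanza name in indexes.conf are identical, so the app's dashboards search the index that DSP actually writes to.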

To learn more about Splunk Enterprise configuration files, see About configuration files in the Splunk Enterprise Admin Manual.

Configure DSP to send metrics to the Splunk platform using HEC

You must configure DSP to send data to the Splunk platform using the HTTP Event Collector (HEC).

Prerequisites

  • A Splunk instance with HEC enabled and a valid HEC token. Your HEC token must be configured to send data to the _dsp_metrics index. For information about how to enable HEC and create a HEC token, see Use the HTTP Event Collector.
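Before configuring DSP, you can optionally confirm that HEC is reachable and that your token accepts metric events by sending a test event directly. This is a sketch with placeholder host and token values; the /services/collector endpoint, the Authorization header format, and the "event": "metric" payload shape are standard HEC usage for metrics:

```shell
# Placeholder host and token -- replace with your own values.
# Sends a single test metric event to the _dsp_metrics index over HEC.
curl -k "https://<your-splunk-host>:8088/services/collector" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "metric", "index": "_dsp_metrics", "fields": {"metric_name:dsp.test": 1}}'
```

A successful request returns a JSON acknowledgment such as {"text":"Success","code":0}; an authorization or index error here indicates the token or index configuration needs attention before you proceed.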

Steps

Follow these steps to configure DSP to send data to the Splunk platform using the HTTP Event Collector (HEC).

  1. From the working directory on a DSP node, run the following commands.
    1. Provide DSP with the HEC token to use to send DSP metrics data to the Splunk platform.
      dsp config set prometheus-writer hec_token=<your token>
    2. Provide DSP with the HEC URL. For load balancing, you can specify multiple HEC URLs, separated by commas.
      dsp config set prometheus-writer hec_url=https://<your IP>:8088
    3. Set the metrics index to send data to. If you are using a custom metrics index, enter the name of your custom metrics index instead. You must change the application's knowledge objects if you are using a custom metrics index.
      dsp config set prometheus-writer metrics_index=_dsp_metrics
    4. (Optional; skip this step if you already named the DSP cluster during installation.) Give the DSP cluster a name. This name is shown in the dashboards in the Splunk App for DSP.
      dsp config set prometheus-writer cluster_name=<cluster_name> 
    5. (Optional) Set the scrape interval for uaa, flink, or seaweedfs to define how frequently metrics data is collected. The default is 30 seconds. Specify the amount of time as a number followed by the "s" time unit. For example, for a 10-second scrape interval, use 10s in your command.
      dsp config set <SERVICE> PROMETHEUS_SCRAPE_INTERVAL=<number_of_seconds>
      dsp deploy <SERVICE>
    6. Finally, enable the metrics to be sent.
      dsp config set prometheus-writer enable_remote_write=true
  2. After setting the configurations, deploy your changes.
    dsp deploy prometheus-writer
  3. (Optional) Confirm that the deployment was successful by checking that a prometheus-writer pod is now running.
    kubectl -n monitoring get pods
  4. Wait for DSP to start sending data to your Splunk environment. This may take up to 10 minutes.
  5. To confirm that DSP is sending the metrics data to the Splunk platform, open the Search & Reporting app in your Splunk instance and search for your data. Use the following search criteria:

    | mstats count(*) WHERE index="_dsp_metrics"
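Taken together, the steps above can be batched into a single shell session on a DSP node. This sketch uses hypothetical values for the token, URLs, and cluster name; replace them with your own:

```shell
# Hypothetical values -- substitute your own HEC token, URLs, and cluster name.
dsp config set prometheus-writer hec_token=00000000-0000-0000-0000-000000000000
dsp config set prometheus-writer hec_url=https://10.0.0.5:8088,https://10.0.0.6:8088
dsp config set prometheus-writer metrics_index=_dsp_metrics
dsp config set prometheus-writer cluster_name=dsp-prod
dsp config set prometheus-writer enable_remote_write=true

# Deploy the changes, then confirm the prometheus-writer pod is running.
dsp deploy prometheus-writer
kubectl -n monitoring get pods | grep prometheus-writer
```

Because enable_remote_write is set before the deploy, a single dsp deploy prometheus-writer picks up all of the settings at once.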

Last modified on 16 November, 2023

This documentation applies to the following versions of Splunk® Data Stream Processor: 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6

