Splunk® Data Stream Processor

Use the Data Stream Processor



On April 3, 2023, Splunk Data Stream Processor reached its end of sale, and will reach its end of life on February 28, 2025. If you are an existing DSP customer, please reach out to your account team for more information.

All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator that has reached its announced end of life. We have replaced Gravity with an alternative component in DSP 1.4.0, and we will no longer provide support for versions of DSP prior to 1.4.0 after July 1, 2023. We advise all of our customers to upgrade to DSP 1.4.0 to continue receiving full product support from Splunk.

Send data from a pipeline to multiple destinations

When creating a data pipeline in the Splunk Data Stream Processor, you can choose to connect the pipeline to multiple data destinations. For example, you can create a single pipeline that sends data to a Splunk index, an Amazon S3 bucket, and Microsoft Azure Event Hubs concurrently. You can send the same set of data to multiple data destinations at once, or filter and route the data to different destinations depending on whether the data meets certain criteria. See Filtering and routing data in the Splunk Data Stream Processor for more information about the latter use case.

To send data from your pipeline to multiple destinations, branch the pipeline and end each branch with a different sink function as needed. The following steps assume that you've already started building the pipeline and now want to specify the data destinations, and that you are using the Canvas View. See Branch a pipeline using SPL2 for information about using the SPL2 View and the SPL2 Pipeline Builder to branch a pipeline.
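
In the SPL2 View, a branched pipeline corresponds to a named statement that holds the shared part of the pipeline, followed by one from ... into statement per branch. The following minimal sketch follows the same pattern as the complete SPL2 statement shown later in this topic; the <connection-id> value is a placeholder for a real connection ID, and the source, processing, and sink functions stand in for whichever functions your pipeline uses:

$statement_1 = | from forwarders("forwarders:all") | apply_line_breaking linebreak_type="auto";
| from $statement_1 | into index("", "main");
| from $statement_1 | into splunk_enterprise_indexes("<connection-id>", "", "main");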

  1. From the Canvas View of your pipeline, click the + icon beside the last function in your pipeline and add the sink function for one of your desired data destinations.
  2. To branch the pipeline and send the data to an additional data destination, click the pipeline branching icon beside the function that you want to branch from, and then add another sink function.
  3. (Optional) To continue adding transformation functions to the pipeline, do either of the following, depending on the location in the pipeline where you want to add the function (the sketch after these steps shows how each option maps to SPL2):
     • Between two functions in the pipeline, or after the branching point: Click the + icon at the pipeline location where you want to add the function, and then select your desired function.
     • Immediately before the branching point: Click the pipeline branching icon, select Insert function before branch, and then select your desired function.
  4. (Optional) Repeat steps 2 and 3 as needed to finish building your pipeline and creating additional pipeline branches that point to other data destinations.
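
Continuing the sketch above: selecting Insert function before branch extends the shared named statement, so the new function applies to every branch. For example, inserting a timestamp extraction function before the branching point would produce something like the following line, whereas clicking the + icon on one branch would add the function to only that branch's from ... into statement:

$statement_1 = | from forwarders("forwarders:all") | apply_line_breaking linebreak_type="auto" | apply_timestamp_extraction fallback_to_auto=false extraction_type="auto";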

Create a pipeline that sends data to multiple destinations

In this example, we create a pipeline that does the following:

  • Ingests data from a Splunk universal forwarder.
  • Processes the data to ensure that events are correctly grouped into records.
  • Sends identical copies of this data to two different Splunk instances.

Prerequisites

  • A connection to the Splunk HTTP Event Collector of the non-default Splunk instance that you want to send data to. Step 2 of this example assumes that this connection already exists.

Steps

  1. From the Templates page, select the Splunk universal forwarder template.
    This template creates a pipeline that reads data from Splunk forwarders, completes the processing required for data that comes from universal forwarders, and then sends the data to the main index of the default Splunk instance associated with the Splunk Data Stream Processor.
  2. Branch the pipeline so that it also sends data to the main index of a different Splunk instance:
    1. Click the pipeline branching icon beside the Apply Timestamp Extraction function, and then select the Send to Splunk HTTP Event Collector sink function.
    2. On the View Configurations tab, set connection_id to the connection for your non-default Splunk instance.
    3. In the index field, enter "" (two quotation marks). Leaving this field empty tells the sink function to use the index that is specified in each record, if any.
    4. In the default_index field, enter "main" (including the quotation marks). Records that don't specify an index are sent to the main index.
  3. Validate your pipeline to confirm that all of the functions are configured correctly. Click the More Options button located beside the Activate Pipeline button, and then select Validate.
  4. Click Save, enter a name for your pipeline, and then click Save again.
  5. (Optional) Click Activate to activate your pipeline. If this is the first time that you're activating your pipeline, do not enable any of the optional Activate settings.

You now have a pipeline that receives data from a universal forwarder, processes the data to ensure that events are grouped into records correctly, and then sends all the records to two different Splunk instances.

Screenshot: the resulting pipeline splits into two branches, and each branch ends in a different sink function.

The following is the complete SPL2 statement for this pipeline:

$statement_2 = | from forwarders("forwarders:all") | apply_line_breaking linebreak_type="auto" | apply_timestamp_extraction fallback_to_auto=false extraction_type="auto";
| from $statement_2 | into index("", "main");
| from $statement_2 | into splunk_enterprise_indexes("2f1ce641-baeb-4695-82cc-8f16ae64eb71", "", "main");
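
As mentioned at the start of this topic, you can also filter and route data instead of sending identical copies down each branch. The following is an illustrative variant of the statement above that adds a where function to each branch; the source_type values are hypothetical, and the supported filtering patterns are described in Filtering and routing data in the Splunk Data Stream Processor:

| from $statement_2 | where source_type="cisco:asa" | into index("", "main");
| from $statement_2 | where source_type!="cisco:asa" | into splunk_enterprise_indexes("2f1ce641-baeb-4695-82cc-8f16ae64eb71", "", "main");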

See also

Functions
Send data to a Splunk Index (Default for Environment)
Send data to Splunk HTTP Event Collector
Related topics
Create a pipeline using a template
Process data from a universal forwarder in the Splunk Data Stream Processor
Filtering and routing data in the Splunk Data Stream Processor
Last modified on 22 March, 2022

This documentation applies to the following versions of Splunk® Data Stream Processor: 1.3.0, 1.3.1, 1.4.0, 1.4.1, 1.4.2, 1.4.3

