Splunk® Data Stream Processor

Use the Data Stream Processor

On April 3, 2023, Splunk Data Stream Processor will reach its end of sale, and will reach its end of life on February 28, 2025. If you are an existing DSP customer, please reach out to your account team for more information.
This documentation does not apply to the most recent version of Splunk® Data Stream Processor. For documentation on the most recent version, go to the latest release.

Send data to multiple destinations in a pipeline

With the Data Stream Processor, you can send your data to multiple destinations in a single pipeline.

For example, in the following pipeline, all of the data from the Splunk Firehose data source passes to both branches of the pipeline for different types of processing. Data moves from the Splunk Firehose data source, is normalized to match the Kafka schema, and is then sent to a Kafka destination. Data also moves from the Splunk Firehose data source directly into a Splunk Enterprise index.
[Image: data from the Splunk Firehose data source being sent to both Splunk Enterprise and Apache Kafka.]


The following steps assume that you want to add a branch to a pre-existing pipeline.

  1. From the Data Pipelines Canvas view, click the + icon to the immediate right of the function where you want to branch from.
  2. Select a new function to add to your pipeline. This function is added on a separate branch of your pipeline.
    When you branch a pipeline, the same data flows from the upstream function to all of its downstream functions. For example, in the image above, the same data flows from the upstream Splunk Firehose function to both the Normalize Kafka function and the Write to Index function.
  3. Fill out the desired configurations for your function.
  4. Click Start Preview to verify that your pipeline is valid and data is flowing as expected.
  5. Save and activate your pipeline. If this is the first time you are activating your pipeline, do not select any of the activation options.
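
For reference, a branched pipeline like the one above can also be expressed in SPL2, the pipeline language used by the Data Stream Processor. The following is an illustrative sketch only: the statement structure reflects how a named stream can feed multiple branches, but the function names and arguments shown here (such as the connection IDs, topic name, and the normalization fields) are placeholder assumptions and may not match the functions available in your DSP version.

```spl2
/* Sketch only: function names, connection IDs, and fields are illustrative assumptions. */

/* Read once from the Splunk Firehose source and give the stream a name. */
$source = | from splunk_firehose();

/* Branch 1: normalize events to the Kafka schema, then write to Kafka. */
| from $source
| eval key=cast(source_type, "bytes"), value=cast(body, "bytes")  /* hypothetical normalization */
| into kafka("my-kafka-connection-id", "my-topic");

/* Branch 2: send the same events directly to a Splunk Enterprise index. */
| from $source
| into index("my-splunk-connection-id", "main");
```

Because both branches read from the same named stream, every event from the Splunk Firehose source flows through both the Kafka branch and the Splunk Enterprise branch, matching the behavior described in step 2.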
Last modified on 31 August, 2020

This documentation applies to the following versions of Splunk® Data Stream Processor: 1.1.0
