Splunk® Data Stream Processor

Use the Data Stream Processor

On April 3, 2023, Splunk Data Stream Processor reached its end of sale, and will reach its end of life on February 28, 2025. If you are an existing DSP customer, please reach out to your account team for more information.

All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator, which has been announced end-of-life. We have replaced Gravity with an alternative component in DSP 1.4.0. Therefore, we will no longer provide support for versions of DSP prior to DSP 1.4.0 after July 1, 2023. We advise all of our customers to upgrade to DSP 1.4.0 in order to continue to receive full product support from Splunk.

Create a pipeline with multiple data sources

When creating a data pipeline in the Splunk Data Stream Processor, you can choose to connect multiple data sources to the pipeline. For example, you can create a single pipeline that gets data from a Splunk forwarder, an Apache Kafka broker, and Microsoft Azure Event Hubs concurrently. You can apply transformations to the data from all three data sources as the data passes through the pipeline, and then send the transformed data out from the pipeline to a destination of your choosing.

If you want to create a pipeline with multiple data sources, in most cases you can use the Splunk DSP Firehose source function. See Data sources supported by Splunk DSP Firehose in the Connect to Data Sources and Destinations with the Data Stream Processor manual.

However, if you want to use multiple data sources that are not supported by the Splunk DSP Firehose function, or if you want to apply specific transformations to the data streams before combining them, then complete the following tasks. A sketch of the general shape of the resulting pipeline follows this list.

  1. From the Pipelines page, select a data source.
  2. (Optional) From the Canvas View of your pipeline, click the + icon and add any desired transformation functions to the pipeline.
  3. Once you have added all the desired transformation functions to your pipeline, click the + icon and add a union function to your pipeline.
  4. Click the + icon on the immediate left of the Union function, and then add a second source function to your pipeline. You can union additional data sources in the same way if needed.
  5. (Optional) To be unioned, all of your data streams must have the same schema. If your data streams don't have the same schema, use the select streaming function to make the schemas match.
  6. After unioning your data streams, continue building your pipeline by clicking the + icon to the immediate right of the union function.
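
For orientation, a pipeline built this way has roughly the following shape when viewed as SPL2 in the SPL View. This is a minimal sketch only: the second source function, its arguments, the destination function, and the exact union syntax shown here are placeholders and assumptions that vary by DSP version and connection, so treat the SPL View of your own pipeline as the authoritative SPL2.

    | from splunk_firehose()
    | eval ...
    | union (
        | from your_second_source_function(...)
        | eval ...
      )
    | into your_destination_function(...);

The important point is that each branch can have its own transformation functions before the union, but every branch must produce records with the same schema by the time the streams are unioned.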

Create a pipeline with two data sources: Kafka and Splunk DSP Firehose

In this example, you create a pipeline with two data sources, Kafka and Splunk DSP Firehose, and union the two data streams by normalizing them to fit the expected Kafka schema.

The following screenshot shows two data streams from two different data sources being unioned together into one data stream in a pipeline.

Prerequisites

  * A connection to your Kafka broker. You need the ID of this connection to configure the Kafka source function. For instructions, see the Connect to Data Sources and Destinations with the Data Stream Processor manual.

Steps

  1. From the Pipelines page, select the Splunk DSP Firehose data source.
  2. From the Canvas View of your pipeline, add a union function to your pipeline.
  3. Click the + icon on the immediate left of the Union function, and then select the Kafka source function.
  4. With the Kafka source function selected, on the View Configurations tab, provide your connection ID and topic name.
  5. Normalize the schemas to match. Hover over the circle between the Splunk DSP Firehose and Union functions, click the + icon, and add an Eval function.
  6. In the Eval function, type the following SPL2. This SPL2 converts the event schema to the default Kafka schema.
    value=to_bytes(cast(body, "string")),
    topic=source_type,
    key=to_bytes(time())
    
  7. Hover over the circle between the Eval and Union functions, click the + icon, and add a Fields function.
  8. To modify the records from the Splunk DSP Firehose so that the schema matches the Kafka record schema, drop all the fields from these records except for the value, topic, and key fields. In the fields_list parameter of the Fields function, do the following:
    1. Type value.
    2. Click + Add, and then type topic.
    3. Click + Add, and then type key.
  9. Next, normalize the other data stream. Hover over the circle between the Kafka and Union functions, click the + icon, and add another Fields function.
  10. In the fields_list parameter of the Fields function, do the following:
    1. Type value.
    2. Click + Add, and then type topic.
    3. Click + Add, and then type key.
  11. Validate your pipeline.

You now have a pipeline that reads from two data sources, Kafka and Splunk DSP Firehose, and merges the data from both sources into one data stream.
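
If you prefer to review the result in the SPL View, the finished example corresponds roughly to the following SPL2. This is a hedged sketch rather than the exact statement that DSP generates: the Kafka source function name (read_kafka), the "my-kafka-connection" and "my-topic" values, and the union syntax are assumptions that depend on your DSP version and connections, so use the SPL View of your own pipeline as the reference.

    | from splunk_firehose()
    | eval value=to_bytes(cast(body, "string")),
           topic=source_type,
           key=to_bytes(time())
    | fields value, topic, key
    | union (
        | from read_kafka("my-kafka-connection", "my-topic")
        | fields value, topic, key
      );

The two Fields functions are what make the union possible: both branches emit records that contain only the value, topic, and key fields, so the schemas match. From here you can continue the pipeline with additional transformation functions and a destination function.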

Last modified on 02 November, 2022

This documentation applies to the following versions of Splunk® Data Stream Processor: 1.3.0, 1.3.1, 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6

