Splunk® Data Stream Processor

Connect to Data Sources and Destinations with DSP

On April 3, 2023, Splunk Data Stream Processor reached its end of sale, and will reach its end of life on February 28, 2025. If you are an existing DSP customer, please reach out to your account team for more information.

All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator that has been declared end-of-life. DSP 1.4.0 replaces Gravity with an alternative component. As a result, we will no longer support versions of DSP prior to 1.4.0 after July 1, 2023. We advise all customers to upgrade to DSP 1.4.0 to continue receiving full product support from Splunk.

Connecting Kafka to your DSP pipeline as a data source

When creating a data pipeline in Splunk Data Stream Processor, you can connect to an Apache Kafka or Confluent Kafka broker and use it as a data source. You can get data from Kafka into a pipeline, transform the data as needed, and then send the transformed data out from the pipeline to a destination of your choosing.

If you have a Universal license, you can also use Kafka as a data destination. See Connecting Kafka to your DSP pipeline as a data destination for information about this use case. See Licensing for the Splunk Data Stream Processor for information about licensing.
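As a minimal sketch of the destination use case, the following SPL2 pipeline reads records from one Kafka topic and writes them back out to another. The connection IDs and topic names are placeholders for your own values, and the exact arguments accepted by the write_kafka sink function are described in Send data to Kafka in the Function Reference manual:

    | from read_kafka("my-source-connection-id", "my-input-topic")
    | into write_kafka("my-sink-connection-id", "my-output-topic");

Because read_kafka emits the key and value fields as bytes, a pass-through sketch like this one typically needs no conversion functions between the source and the Kafka sink.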

DSP supports three types of connections for accessing Kafka brokers:

SASL-authenticated: Username and password authentication is used. You can choose to protect your credentials using SCRAM (Salted Challenge Response Authentication Mechanism) or leave them in plaintext. The connection is encrypted using SSL. This type of connection is suitable for use in production environments.

SSL-authenticated: Two-way SSL authentication is used, so that DSP and the Kafka broker authenticate each other using the SSL protocol. Additionally, the connection is encrypted using SSL. This type of connection is suitable for use in production environments.

Unauthenticated: No authentication takes place between DSP and the Kafka broker, and the connection is not encrypted. This type of connection should only be used for testing purposes in a secure internal environment.

To connect to Kafka as a data source, you must complete the following tasks:

  1. Create a connection that allows DSP to access your Kafka data.
  2. Create a pipeline that starts with the Kafka source function. See the Building a pipeline chapter in the Use the Data Stream Processor manual for instructions on how to build a data pipeline.
  3. Configure the Kafka source function to use your Kafka connection. See Get data from Kafka in the Function Reference manual.
  4. (Optional) Convert the value field in the Kafka records from bytes to a more commonly supported data type such as strings. This conversion makes the field human-readable during data preview and compatible with a wider range of streaming functions. See Deserialize and preview data from Kafka in DSP, and the combined example after this list.
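Putting these steps together, the following SPL2 sketch reads records from a Kafka topic and converts the value field from bytes to a string. The connection ID and topic name are placeholders for your own values, and the to_string conversion is one of the options covered in Deserialize and preview data from Kafka in DSP (deserialize_json_object is an alternative for JSON payloads):

    | from read_kafka("my-kafka-connection-id", "my-topic")
    | eval value = to_string(value);

From here, you can add further transforming functions and end the pipeline with a sink function to send the converted records to a destination.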

When you activate the pipeline, the source function starts collecting data from Kafka.

If your data fails to get into DSP, check the connection settings to make sure that you have the correct broker, and that your credentials and certificates are correct if you are using an authenticated connection. DSP doesn't check whether the credentials that you enter are valid.
