All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator, which has been announced as end-of-life. In DSP 1.4.0, we replaced Gravity with an alternative component. Therefore, after July 1, 2023, we will no longer support versions of DSP prior to DSP 1.4.0. We advise all customers to upgrade to DSP 1.4.0 to continue receiving full product support from Splunk.
Connecting Apache Pulsar to your DSP pipeline as a data source
When creating a data pipeline in Splunk Data Stream Processor, you can connect to an Apache Pulsar cluster and use it as a data source. You can get data from Pulsar into a pipeline, transform the data as needed, and then send the transformed data out from the pipeline to a destination of your choosing.
To connect to Pulsar as a data source, you must complete the following tasks:
- Create a connection that allows DSP to access your Pulsar data. See Create a DSP connection to Apache Pulsar.
- Create a pipeline that starts with the Apache Pulsar source function. See the Building a pipeline chapter in the Use the Data Stream Processor manual for instructions on how to build a data pipeline.
- Configure the Apache Pulsar source function to use your Pulsar connection. See Get data from Apache Pulsar in the Function Reference manual.
When you activate the pipeline, the source function starts collecting data from Pulsar.
If your data fails to get into DSP, check the connection settings to make sure you have the correct service URL, SSL certificates, and client private key for your Pulsar cluster. DSP doesn't validate these credentials when you create the connection.
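Because DSP doesn't validate the connection settings for you, a quick local sanity check can catch obvious mistakes before you activate the pipeline. The following sketch is illustrative only and is not part of DSP: the helper name `check_pulsar_connection_settings` and the exact checks are assumptions, and passing these checks does not guarantee that Pulsar will accept the credentials.

```python
from urllib.parse import urlparse

def check_pulsar_connection_settings(service_url, cert_paths):
    """Return a list of problems found in Pulsar connection settings.

    service_url: the Pulsar service URL, e.g. "pulsar+ssl://broker.example.com:6651".
    cert_paths: a dict mapping a label (e.g. "CA certificate", "client private key")
                to a local file path for each PEM file the connection uses.
    """
    problems = []

    # The service URL must use a Pulsar binary-protocol scheme.
    parsed = urlparse(service_url)
    if parsed.scheme not in ("pulsar", "pulsar+ssl"):
        problems.append(
            f"unexpected scheme {parsed.scheme!r}; expected pulsar:// or pulsar+ssl://"
        )
    if parsed.port is None:
        problems.append(
            "service URL is missing a port (Pulsar commonly uses 6650 for "
            "pulsar:// and 6651 for pulsar+ssl://)"
        )

    # SSL certificates and the client private key should be readable PEM files.
    for label, path in cert_paths.items():
        try:
            with open(path) as f:
                if "-----BEGIN" not in f.read():
                    problems.append(f"{label} at {path} does not look like a PEM file")
        except OSError as err:
            problems.append(f"cannot read {label}: {err}")

    return problems
```

For example, `check_pulsar_connection_settings("pulsar+ssl://broker.example.com:6651", {"CA certificate": "/etc/ssl/pulsar/ca.pem"})` returns an empty list when the URL is well-formed and the file contains a PEM header, and a list of human-readable problems otherwise.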
This documentation applies to the following versions of Splunk® Data Stream Processor: 1.2.0, 1.2.1-patch02, 1.2.1, 1.2.2-patch02, 1.2.4, 1.2.5, 1.3.0, 1.3.1, 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6