Deserialize and preview data from Kafka
If you are creating a pipeline that ingests data from Kafka using the Read from Kafka source function, follow these steps to deserialize and preview your data:
Prerequisites
- A properly configured Kafka system that includes at least one broker and one defined Kafka topic that you want to ingest. For details, see the Kafka documentation.
- A DSP Kafka connection. See Create a connection for the DSP Kafka SSL Connector and Create a connection for the DSP Apache Kafka Connector without authentication in the Getting Data In manual.
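If you want to confirm that your broker is reachable and that your topic exists before you build the pipeline, you can use the kafka-topics tool that ships with Apache Kafka. This is a minimal sketch that assumes a broker listening at localhost:9092, Kafka 2.2 or later (earlier releases use --zookeeper instead of --bootstrap-server), and a topic named my-kafka-topic; substitute your own host and topic name.

    # List all topics on the broker to confirm connectivity
    bin/kafka-topics.sh --bootstrap-server localhost:9092 --list

    # Describe the topic you plan to ingest to confirm that it exists
    bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic my-kafka-topic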
Steps
Once you satisfy the prerequisites, you can ingest data from Kafka.
- From the Data Stream Processor home page, go to the Build Pipeline tab.
- Select Read from Apache Kafka as your source function.
- On the next page, complete the following fields:
| Field | Description | Example |
| --- | --- | --- |
| Connection ID | The ID associated with your Kafka connection. | 461b1915-131e-4daf-a144-0630307436d0 |
| Topic | You must enter one Kafka topic. | my-kafka-topic |
| Consumer Properties | Optional. Enter any Kafka consumer properties that you want to set on the Kafka consumer that the Splunk Data Stream Processor creates. See the Apache or Confluent Kafka documentation for the consumer properties that Kafka consumers accept. To enter more than one property, click Add input for each additional property. For sample properties, see the sketch after these steps. | key = value |
- Click the + icon to add a new function.
- Click the Eval function, and convert the value field from bytes into a string: value=to_string(value). An SPL2 sketch of the resulting pipeline follows these steps.
- Click Start Preview, then click the Eval function to confirm that your data is now in a format that you can apply additional transformations to.
- (Optional) Click Stop Preview and continue building your pipeline by adding new functions to it.
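The Consumer Properties field accepts standard Apache Kafka consumer configuration as key = value pairs, one per input. The following values are illustrative assumptions rather than settings that DSP requires; see the Apache or Confluent consumer configuration reference for the full list of properties and their defaults.

    group.id = dsp-kafka-preview
    auto.offset.reset = earliest
    max.poll.records = 500

Here group.id names the consumer group (dsp-kafka-preview is a hypothetical name), auto.offset.reset = earliest starts reading from the oldest available records when no committed offset exists, and max.poll.records caps how many records each poll returns.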
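For reference, the canvas steps above correspond to a short pipeline in the SPL2 view. The sketch below is an illustration built on assumptions, not DSP's documented syntax: it assumes the Read from Apache Kafka function appears in SPL2 as read_kafka taking the connection ID and topic as arguments, and that Kafka records arrive with a value field of type bytes. Check the DSP Function Reference for the exact function name and signature, and replace the example IDs with your own.

    | from read_kafka("461b1915-131e-4daf-a144-0630307436d0", "my-kafka-topic")
    | eval value=to_string(value)

If your payloads are JSON, you can also parse them after the conversion, for example eval body=from_json_object(to_string(value)), assuming the from_json_object scalar function is available in your DSP version; the field name body is hypothetical. The pipeline then continues into whatever sink function your destination requires.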