Splunk® Data Stream Processor

Use the Data Stream Processor



On April 3, 2023, Splunk Data Stream Processor will reach its end of sale, and will reach its end of life on February 28, 2025. If you are an existing DSP customer, please reach out to your account team for more information.
This documentation does not apply to the most recent version of Splunk® Data Stream Processor. For documentation on the most recent version, go to the latest release.

Deserialize and preview data from Kafka

If you are creating a pipeline that ingests data from Kafka using the Read from Kafka source function, follow these steps to deserialize and preview your data.

Prerequisites

You must have a Data Stream Processor connection to your Kafka broker. You supply the ID of this connection in step 3.

Steps
Once you satisfy the prerequisite, you can ingest data from Kafka.

  1. From the Data Stream Processor home page, go to the Build Pipeline tab.
  2. Select Read from Apache Kafka as your source function.
  3. On the next page, complete the following fields:
    Connection ID: The ID associated with your Kafka connection. Example: 461b1915-131e-4daf-a144-0630307436d0
    Topic: The name of the Kafka topic to read from. You must enter exactly one topic. Example: my-kafka-topic
    Consumer Properties: (Optional) Any Kafka consumer properties that you want to set on the Kafka consumer that the Splunk Data Stream Processor creates. See the Apache or Confluent Kafka documentation for details on which consumer properties Kafka consumers accept. To enter more than one property, click Add input for each additional property. Properties take the form key = value; see the example after these steps.
  4. Click the + icon to add a new function.
  5. Click the Eval function, and convert the value field from bytes into a string:
    value=to_string(value)
  6. Click Start Preview, then click the Eval function to confirm that your data is now in a format where you can perform additional transformations on it.
  7. (Optional) Click Stop Preview and continue building your pipeline by adding new functions to it.
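
For the Consumer Properties field in step 3, each property is a standard Kafka consumer setting entered as a key = value pair. The following is a minimal sketch of properties you might set; these particular settings and values are illustrative assumptions, not requirements, so see the Apache or Confluent Kafka documentation for the authoritative list:

    auto.offset.reset = latest
    max.poll.records = 500
    session.timeout.ms = 30000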
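
Taken together, the steps above build a pipeline equivalent to SPL2 along the following lines. This is a minimal sketch: it assumes the source function is exposed in SPL2 as read_kafka and reuses the example connection ID and topic from step 3, so verify the exact function name and signature in the function reference for your DSP release:

    | from read_kafka("461b1915-131e-4daf-a144-0630307436d0", "my-kafka-topic")
    | eval value=to_string(value);

After previewing, you would extend this pipeline with additional transformation functions and a sink function before activating it.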
Last modified on 31 August, 2020

This documentation applies to the following versions of Splunk® Data Stream Processor: 1.1.0

