Splunk® Data Stream Processor

Function Reference


Send data to Kafka

Send data to an Apache Kafka or Confluent Kafka topic.

Before you can use this function, you must do the following:

  • Create a Kafka connection. See Create an SSL-authenticated connection to Kafka and Create an unauthenticated connection to Kafka. When configuring this sink function, use the ID of that connection for the connection_id argument.
  • Create the destination topic in your Apache Kafka or Confluent Kafka broker.
    • For information about creating a topic in Apache Kafka, search for "Apache Kafka Quickstart" in the Apache Kafka documentation.
    • For information about creating a topic in Confluent Kafka, search for "Quick Start for Apache Kafka" in the Confluent documentation.

    If you activate your pipeline before creating the topic specified in the topic argument, the pipeline fails to send data to Kafka and restarts indefinitely.

To send data to a Kafka topic, you must provide the topic, key, and value (payload) to the Send to Kafka function. You can specify only one topic per Send to Kafka function. The key and value fields are dynamic: you specify them on a per-record basis. The key and value expressions passed into this function must return bytes; otherwise, your pipeline fails to validate.
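
For example, the following pipeline builds the key and value from fields of each record. This is a minimal sketch: my_key_field and my_payload_field are hypothetical string fields standing in for whatever your records actually contain, and the connection ID is a placeholder.

...| into kafka("879837b0-cabf-4bc2-8589-fcc4dad753e7", topic1, to_bytes(my_key_field), to_bytes(my_payload_field));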

Function input schema

collection<record<R>>
This function takes in collections of records with schema R.

Required arguments

connection_id
Syntax: string
Description: The ID of your Kafka connection.
Example: "879837b0-cabf-4bc2-8589-fcc4dad753e7"
topic
Syntax: string
Description: The name of the Kafka topic to send data to.
Example: my-topic

Make sure that the destination topic exists in your Kafka broker. If you activate your pipeline before the specified topic is created, the pipeline fails to send data to Kafka and restarts indefinitely.

key
Syntax: expression<bytes>
Description: Your Kafka key, in bytes. Kafka keys are used for partition assignment. To use Kafka's default partition assignment mechanism, set this to null, as shown in the sketch after these argument descriptions.
Example: to_bytes("")
value
Syntax: expression<bytes>
Description: The data payload, in bytes, for each event.
Example: to_bytes("")
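
If you don't need to control partition assignment, pass null as the key so that Kafka uses its default partition assignment mechanism. A minimal sketch, using a placeholder connection ID and topic:

...| into kafka(connection_id: "879837b0-cabf-4bc2-8589-fcc4dad753e7", topic: topic1, key: null, value: to_bytes(""));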

Optional arguments

producer_properties
Syntax: map<string, string>
Description: Add optional producer properties here. For a list of valid producer properties, see the "Producer Configs" section in the Apache Kafka documentation.
Example: {"reconnect.backoff.max.ms": 1500}

SPL2 example

You can write the function by providing the arguments in this exact order.

...| into kafka("879837b0-cabf-4bc2-8589-fcc4dad753e7", topic1, to_bytes(""), to_bytes(""));

Alternatively, you can use named arguments to declare the arguments in any order and leave out optional arguments you don't want to declare. All unprovided arguments use their default values. See SPL2 syntax for more details. The following example provides the arguments in an arbitrary order.

...| into kafka(topic: topic1, connection_id: "879837b0-cabf-4bc2-8589-fcc4dad753e7", key: to_bytes(""), value: to_bytes(""));
This documentation applies to the following versions of Splunk® Data Stream Processor: 1.2.0

