Plan your deployment
Use one of the following connector deployment options to deploy Splunk Connect for Kafka:
- Splunk Connect for Kafka in a dedicated Kafka Connect cluster (recommended).
- Splunk Connect for Kafka in an existing Kafka Connect cluster.
Splunk Connect for Kafka can run in containers, in virtual machines, or on physical machines. You can use any automation tool to deploy it.
See the Plan a deployment section of the Splunk Enterprise manual for more information on planning your Splunk platform deployment.
If you are using Splunk Cloud, use the Splunk Support Portal to request that Splunk Connect for Kafka be installed on your deployment. Splunk Support will set up and provide a URL for your HTTP Event Collector endpoint. If you are ingesting Kinesis Firehose events, you can reuse the HTTP Event Collector (HEC) endpoint setting you configured for the Splunk Add-on for Amazon Kinesis Firehose.
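For reference, the sketch below registers the connector with a Kafka Connect cluster through Kafka Connect's REST API and points it at an HEC endpoint. This is a minimal illustration only: the hostnames, topic, connector name, and token are placeholder values, not settings from this manual.

```python
# Minimal sketch: register Splunk Connect for Kafka with a Kafka Connect
# cluster over its REST API. Hostnames, the topic, the connector name, and
# the HEC token below are placeholders; substitute your own values.
import json
import urllib.request

connector = {
    "name": "splunk-sink-example",  # placeholder connector name
    "config": {
        "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
        "tasks.max": "10",
        "topics": "example-topic",  # placeholder Kafka topic
        "splunk.hec.uri": "https://hec.example.com:8088",  # placeholder HEC endpoint
        "splunk.hec.token": "00000000-0000-0000-0000-000000000000",  # placeholder token
    },
}

req = urllib.request.Request(
    "http://connect.example.com:8083/connectors",  # placeholder Kafka Connect host
    data=json.dumps(connector).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode("utf-8"))
```

On success, Kafka Connect responds with the created connector definition.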
Sizing guidelines
Do not create more tasks than the number of partitions in your deployment. A safe estimate is two tasks per CPU core on each Kafka Connect instance.
For example, if you have the following deployment:
- 5 Kafka Connect instances running Splunk Connect for Kafka.
- Each host has 8 CPUs and 16 GB of memory.
- There are 200 partitions to collect data from.

Then set `tasks.max` = 2 * CPUs per host * Kafka Connect instances = 2 * 8 * 5 = 80 tasks. Alternatively, if there are only 60 partitions to consume from, set `tasks.max` to 60.
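As a quick check of the arithmetic above, here is a small Python sketch; the `recommended_tasks_max` helper is illustrative, not part of the connector.

```python
# Cap tasks at the partition count; otherwise use 2 tasks per CPU core
# per Kafka Connect instance, per the sizing guideline above.
def recommended_tasks_max(partitions: int, cpus_per_host: int, instances: int) -> int:
    return min(partitions, 2 * cpus_per_host * instances)

# The worked example from this section:
print(recommended_tasks_max(200, 8, 5))  # 80 tasks (200 partitions available)
print(recommended_tasks_max(60, 8, 5))   # 60 tasks (capped by 60 partitions)
```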
Determine how many Kafka Connect instances to deploy by calculating how much volume per day Splunk Connect for Kafka needs to index in your Splunk platform deployment. For example, an 8 CPU, 16 GB memory machine can potentially achieve 50-60 MB per second of throughput from Kafka Connect into your Splunk platform deployment if your Splunk platform deployment is sized correctly.
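To turn that throughput figure into an instance count, a rough back-of-the-envelope calculation might look like the following. The 50 MB/s default and the `instances_needed` helper are assumptions for illustration; actual throughput depends on your hardware, event sizes, and Splunk platform sizing.

```python
import math

# Rough sizing arithmetic using the per-instance throughput figure above.
def instances_needed(daily_volume_gb: float, per_instance_mb_s: float = 50.0) -> int:
    seconds_per_day = 86_400
    # At 50 MB/s, one instance moves roughly 4,219 GB per day.
    capacity_gb_per_day = per_instance_mb_s * seconds_per_day / 1024
    return math.ceil(daily_volume_gb / capacity_gb_per_day)

print(instances_needed(10_000))  # e.g. 10 TB/day needs 3 instances at 50 MB/s
```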