Troubleshoot the Splunk Add-on for Kafka
You can search the add-on's internal logs in the Splunk platform's _internal index. The log files themselves are located in the $SPLUNK_HOME/var/log/splunk directory.
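The original search string is not reproduced here. As a hedged starting point, a search along these lines finds Kafka-related internal events; the source=*kafka* filter is an assumption, so adjust it to match the actual log file names you see in $SPLUNK_HOME/var/log/splunk:

```
index=_internal source=*kafka*
```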
Adjust log levels
By default, the add-on logs at the INFO level. You can change this to DEBUG or ERROR in local/kafka.conf or on the add-on setup page.
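For example, a minimal local/kafka.conf fragment might look like the following. The log_level key name is an assumption; check the key that ships in the add-on's default/kafka.conf before using it:

```ini
# $SPLUNK_HOME/etc/apps/Splunk_TA_kafka/local/kafka.conf
[global_settings]
# Hypothetical key name; verify against default/kafka.conf
log_level = DEBUG
```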
Cannot launch add-on
This add-on does not have views and is not intended to be visible in Splunk Web. If you are trying to launch or load views for this add-on and you are experiencing results you do not expect, turn off visibility for the add-on.
For more details about add-on visibility and instructions for turning visibility off, see Check if the add-on is intended to be visible or not in the Splunk Add-ons Troubleshooting topic.
Problems with the modular input
Optimize performance for very large topics by using the Partition IDs field (local/kafka_credentials.conf). For example, if you have a topic containing a high volume of data spread across 40 partitions, configure multiple different input configurations, each with a subset of the partitions in that topic. This method spreads the load over multiple separate processes to improve performance.
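As a sketch, two input stanzas in local/kafka_credentials.conf could split a 40-partition topic in half. The stanza names and the kafka_topic and partition_ids field names here are illustrative assumptions, not confirmed settings; check the add-on's spec file for the real field names:

```ini
# local/kafka_credentials.conf -- stanza and field names are hypothetical
[kafka_input_orders_a]
kafka_topic = orders
partition_ids = 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19

[kafka_input_orders_b]
kafka_topic = orders
partition_ids = 20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39
```

Each stanza becomes a separate collection process, so the load on any one process is roughly halved.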
Optimize performance when collecting multiple small topics by using the Topic Group field (local/kafka_credentials.conf). For example, if you have six Kafka topics that are all very small in size, pick a group name and apply it to all of those topics. The add-on collects data from these groups using a single collection task and TCP/IP connection to your Kafka brokers, resulting in more efficient resource usage.
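For example, three small topics could share one group so they are collected over a single connection. As above, the stanza and field names are illustrative assumptions, not confirmed settings:

```ini
# local/kafka_credentials.conf -- stanza and field names are hypothetical
[kafka_input_audit]
kafka_topic = audit
topic_group = small_topics

[kafka_input_heartbeat]
kafka_topic = heartbeat
topic_group = small_topics

[kafka_input_alerts]
kafka_topic = alerts
topic_group = small_topics
```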
Kafka messages exceed 1MB
By default, the modular input collects Kafka messages that are 1 MB or smaller. If you have messages larger than 1 MB:
- Edit or create the add-on's local configuration file.
- Under the [global_settings] stanza, add the line fetch_message_max_bytes = <bytes> and set a value large enough for your use case.
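For example, to accept messages up to 10 MB, assuming the global settings live in local/kafka.conf as referenced earlier in this topic (10485760 bytes is an illustrative value; size it to your largest expected message):

```ini
# $SPLUNK_HOME/etc/apps/Splunk_TA_kafka/local/kafka.conf
[global_settings]
# 10 MB in bytes; choose a value large enough for your largest message
fetch_message_max_bytes = 10485760
```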
Kafka data injected with Snappy compression enabled
If you have the Snappy compression method enabled when injecting data into Kafka, install a Snappy binding to allow this add-on to support Snappy Kafka messages. You might also need to install other dependent software libraries if they do not already exist on your system. The console produces errors to inform you which dependencies you need.
The add-on includes a Snappy build for CentOS 7/RHEL 7 x86_64 in $SPLUNK_HOME/etc/apps/Splunk_TA_kafka/bin/snappy_libs. If you are using CentOS 7/RHEL 7, copy snappy.py to the $SPLUNK_HOME/etc/apps/Splunk_TA_kafka/bin directory and restart the Splunk platform.
For other operating systems, obtain and install the appropriate Snappy build. For example, on a CentOS/RHEL platform:
- Search for available Snappy packages: yum search snappy
- Install the development package that matches your OS architecture and version. For example: sudo yum install snappy-devel.x86_64
- Install the python-snappy binding into the add-on's bin directory: easy_install --install-dir /opt/splunk/etc/apps/Splunk_TA_kafka/bin python-snappy
- Restart the Splunk platform and proceed with configuring data collection.
Problems connecting to the JMX server remotely
If you have followed the directions to enable the JMX server but still cannot connect to the Kafka JMX server remotely, revise the following line in $KAFKA_HOME/bin/kafka-run-class.sh and restart the Kafka cluster. In some cases, adding the hostname to the JMX server's startup parameters resolves the remote JMX connection problem.
Revise this line:
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
to this:
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=<your_kafka_hostname>"