Troubleshoot the Splunk Add-on for Kafka

General troubleshooting

For helpful troubleshooting tips that you can apply to all add-ons, see Troubleshoot add-ons in Splunk Add-ons. For additional resources, see Support and resource links for add-ons in Splunk Add-ons.

Access logs

Search the add-on internal logs with this search:

index=_internal sourcetype=kafka:*log*

The add-on logs are located in the $SPLUNK_HOME/var/log/splunk directory. The log files are splunk_ta_kafka_main.log, splunk_ta_kafka_setup.log, and splunk_ta_kafka_util.log.
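To surface problems quickly, you can narrow the same search to error and warning events, and optionally to one of the log files listed above. For example:

index=_internal sourcetype=kafka:*log* source=*splunk_ta_kafka_main.log (ERROR OR WARN)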

Adjust log levels

By default, the add-on logs are set to INFO. You can change this to DEBUG or ERROR in local/kafka.conf or on the add-on setup page.
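For example, a minimal local/kafka.conf sketch that raises verbosity to DEBUG might look like the following. The [global_settings] stanza also appears in the message-size setting later in this topic, but the exact log level key name here is an assumption; confirm it against the kafka.conf.spec that ships with the add-on:

[global_settings]
log_level = DEBUG

Restart the Splunk platform after editing the file so the change takes effect.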

Cannot launch add-on

This add-on does not have views and is not intended to be visible in Splunk Web. If you are trying to launch or load views for this add-on and you are experiencing results you do not expect, turn off visibility for the add-on.

For more details about add-on visibility and instructions for turning visibility off, see Check if the add-on is intended to be visible or not in the Splunk Add-ons Troubleshooting topic.
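If you prefer to turn off visibility directly in configuration instead of through Splunk Web, you can use the standard app.conf setting. This is generic Splunk platform behavior rather than anything specific to this add-on:

# $SPLUNK_HOME/etc/apps/Splunk_TA_kafka/local/app.conf
[ui]
is_visible = false

Restart the Splunk platform after editing app.conf.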

Problems with the modular input

Performance issues

Optimize performance for very large topics by using the Partition IDs field (kafka_partition in local/kafka_credentials.conf). For example, if you have a topic with a high volume of data spread across 40 partitions, configure multiple inputs, each with a subset of the partitions in that topic. This spreads the load over multiple separate processes to improve performance.
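As a sketch, splitting a topic across two inputs in local/kafka_credentials.conf might look like the following. Only the kafka_partition key name comes from this topic; the stanza naming, the kafka_topic key, and the comma-separated partition list are assumptions, so confirm the exact syntax against the kafka_credentials.conf.spec that ships with the add-on. For readability this example uses four partitions; with 40 you would list larger subsets in the same way:

# Hypothetical sketch: one topic, two inputs, each owning half the partitions
[kafka_credential://mytopic_partitions_0_1]
kafka_topic = mytopic
kafka_partition = 0,1

[kafka_credential://mytopic_partitions_2_3]
kafka_topic = mytopic
kafka_partition = 2,3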

Optimize performance when collecting many small topics by using the Topic Group field (kafka_topic_group in local/kafka_credentials.conf). For example, if you have six low-volume Kafka topics, pick a group name and apply it to all of them. The add-on then collects data from the whole group using a single collection task and a single TCP/IP connection to your Kafka brokers, resulting in more efficient resource usage.
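A similar sketch for grouping small topics, with the same caveat: only the kafka_topic_group key name comes from this topic, and the rest of the stanza layout is illustrative:

# Hypothetical sketch: three small topics sharing one collection task
[kafka_credential://orders]
kafka_topic = orders
kafka_topic_group = small_topics

[kafka_credential://audit]
kafka_topic = audit
kafka_topic_group = small_topics

[kafka_credential://heartbeat]
kafka_topic = heartbeat
kafka_topic_group = small_topics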

Kafka messages exceed 1 MB

By default, the modular input collects Kafka messages that are 1 MB or smaller. If you have messages larger than 1 MB:

  1. Edit or create $SPLUNK_HOME/etc/apps/Splunk_TA_kafka/local/kafka.conf.
  2. Under the [global_settings] stanza, add fetch_message_max_bytes = <value in bytes>, setting a value large enough for your use case, as in the example below.
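For example, to allow messages up to 10 MB (the value is in bytes, so 10 MB = 10485760):

[global_settings]
fetch_message_max_bytes = 10485760

Restart the Splunk platform so the new limit takes effect.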

Kafka data injected with Snappy compression enabled

If you have the Snappy compression method enabled when injecting data into Kafka, install a Snappy binding so that this add-on can support Snappy-compressed Kafka messages. You might also need to install other dependent software libraries if they are not already on your system. The console produces errors that tell you which dependencies are missing.

The add-on includes a prebuilt Snappy binding for CentOS 7/RHEL 7 x86_64 in $SPLUNK_HOME/etc/apps/Splunk_TA_kafka/bin/snappy_libs. If you are using CentOS 7 or RHEL 7, copy snappy.so and snappy.py to the $SPLUNK_HOME/etc/apps/Splunk_TA_kafka/bin directory and restart the Splunk platform.
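Assuming $SPLUNK_HOME is set in your shell, the copy and restart look like this:

cp $SPLUNK_HOME/etc/apps/Splunk_TA_kafka/bin/snappy_libs/snappy.so $SPLUNK_HOME/etc/apps/Splunk_TA_kafka/bin/
cp $SPLUNK_HOME/etc/apps/Splunk_TA_kafka/bin/snappy_libs/snappy.py $SPLUNK_HOME/etc/apps/Splunk_TA_kafka/bin/
$SPLUNK_HOME/bin/splunk restart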

For other operating systems, obtain and install the appropriate Snappy build. For example, on a CentOS/RHEL platform:

  1. Run the command:
    yum search snappy
    
  2. Look for the development package that matches your OS architecture and version. For example, snappy-devel.x86_64.
  3. Run:
    sudo yum install snappy-devel.x86_64
    
  4. Run:
    export PYTHONPATH=/opt/splunk/etc/apps/Splunk_TA_kafka/bin
    
  5. Run:
    easy_install --install-dir /opt/splunk/etc/apps/Splunk_TA_kafka/bin python-snappy
    
  6. Go to /opt/splunk/etc/apps/Splunk_TA_kafka/bin and unzip the egg that easy_install downloaded. The exact file name depends on your Python version and architecture, for example:
    unzip python_snappy-0.5-py2.x-xxx.egg
    
  7. Restart the Splunk platform and proceed with configuring data collection.
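Before configuring data collection, you can sanity-check the binding by importing it with the add-on's bin directory on the Python path. This assumes the system python command is the Python 2 interpreter that matches the egg you installed; if the import fails, the error message names the missing dependency:

export PYTHONPATH=/opt/splunk/etc/apps/Splunk_TA_kafka/bin
python -c "import snappy; print(snappy.decompress(snappy.compress('hello')))"

If this prints hello, the Snappy binding is working.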

Problems connecting to the JMX server remotely

If you have followed the directions to enable the JMX server but are still having problems connecting to the Kafka JMX server remotely, revise the following line in $KAFKA_HOME/bin/kafka-run-class.sh and restart the Kafka cluster. In some cases, adding the hostname to the JMX server startup parameters resolves the remote JMX connection problem.

Revise this line:

KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"

To this:

KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=<your_kafka_hostname>"
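After restarting the brokers, confirm that the JMX port is reachable from the host running the add-on. This assumes you started Kafka with the JMX_PORT environment variable set; the port 9999 below is only an example:

nc -vz <your_kafka_hostname> 9999

If the connection is refused, check that the broker process picked up the revised KAFKA_JMX_OPTS and that no firewall blocks the port.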