Splunk® Connect for Kafka

Install and Administer Splunk Connect for Kafka


Security configurations for Splunk Connect for Kafka

Splunk Connect for Kafka supports the following security protocols and authentication mechanisms:

  • Secure Sockets Layer (SSL)
  • Simple Authentication and Security Layer (SASL)/Generic Security Service Application Program Interface (GSSAPI) (Kerberos)
  • SASL/PLAIN
  • SASL/Salted Challenge Response Authentication Mechanism (SCRAM)-SHA-256
  • SASL/SCRAM-SHA-512

Configure SSL security for Splunk Connect for Kafka

Use the following information to configure SSL security for Splunk Connect for Kafka.

Configure a certificate for Kafka connector with Splunk

  1. Create a cert directory.
    mkdir ~/cert
    cd ~/cert
    
  2. Generate a self-signed certificate.
    openssl req -newkey rsa:2048 -nodes -keyout kafka_connect.key \
    -x509 -days 365 -out kafka_connect.crt
    
  3. Generate a certificate .pem file from .crt. The Splunk HTTP Event Collector (HEC) requires .pem format.
    openssl x509 -in kafka_connect.crt -out kafka_connect.pem -outform PEM
  4. Generate a new keystore. You will need to create a password when generating the new keystore.
    keytool -genkeypair -keyalg RSA -keystore keystore.jks
  5. Import the signed, root, or intermediate certificate into the keystore.
    keytool -importcert -trustcacerts -file kafka_connect.crt \
    -alias localhost -keystore keystore.jks

  6. Configure HEC using your certificate.
    1. Copy the certificate and key to Splunk
      cp ~/cert/kafka_connect.key ~/splunk/etc/auth
      cp ~/cert/kafka_connect.pem ~/splunk/etc/auth
      
    2. Navigate to $SPLUNK_HOME/etc/apps/splunk_httpinput/local/.
    3. Open inputs.conf with a text editor and add the following lines under the [http] stanza:
      [http]
      disabled = 0
      enableSSL = 1
      serverCert = ~/splunk/etc/auth/kafka_connect.pem <path to your certificate>
      privKeyPath = ~/splunk/etc/auth/kafka_connect.key
      sslPassword = <password for the certificate's private key>
      
    4. Restart your Splunk platform instance.
      cd ~/splunk/bin
      ./splunk restart
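
After the restart, you can optionally verify that HEC accepts events over HTTPS. A minimal check, assuming the default HEC port 8088; replace <YOUR_HEC_TOKEN> with a valid HEC token, and note that -k skips certificate validation because the certificate is self-signed:

    curl -k https://localhost:8088/services/collector/event \
      -H "Authorization: Splunk <YOUR_HEC_TOKEN>" \
      -d '{"event": "SSL connectivity test"}'

A successful request returns {"text":"Success","code":0}.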
      



Configure a certificate authority for your Kafka broker and Splunk Connect for Kafka

  1. Generate your own certificate authority (CA) certificate, and add the same CA certificate to each client and broker's truststore. The following bash script generates the keystore and truststore for brokers (kafka.server.keystore.jks and kafka.server.truststore.jks) and clients (kafka.client.keystore.jks and kafka.client.truststore.jks):
    #!/bin/bash
    PASSWORD=test1234
    VALIDITY=365
    keytool -keystore kafka.server.keystore.jks -alias localhost -validity $VALIDITY -genkey
    openssl req -new -x509 -keyout ca-key -out ca-cert -days $VALIDITY
    keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert
    keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert
    keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-file
    openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days $VALIDITY -CAcreateserial -passin pass:$PASSWORD
    keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert
    keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed
    keytool -keystore kafka.client.keystore.jks -alias localhost -validity $VALIDITY -genkey
    keytool -keystore kafka.client.keystore.jks -alias localhost -certreq -file cert-file
    openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days $VALIDITY -CAcreateserial -passin pass:$PASSWORD
    keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert
    keytool -keystore kafka.client.keystore.jks -alias localhost -import -file cert-signed
    
  2. Configure Kafka brokers.
    1. Navigate to config/server.properties, and add the following lines:
      listeners=SSL://localhost:9092
      
      security.inter.broker.protocol=SSL
      ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
      ssl.client.auth=none
      ssl.keystore.type=JKS
      ssl.keystore.location=~/cert/kafka.server.keystore.jks
      ssl.keystore.password=test1234
      ssl.key.password=test1234
      ssl.truststore.type=JKS
      ssl.truststore.location=~/cert/kafka.server.truststore.jks
      ssl.truststore.password=test1234
      
    2. Save your changes.

  3. Configure Kafka Connect.
    1. Navigate to config/connect-distributed.properties, and add the following lines:
      bootstrap.servers=localhost:9092
      
      security.protocol=SSL
      
      ssl.key.password=test1234
      ssl.keystore.location=~/cert/kafka.client.keystore.jks
      ssl.keystore.password=test1234
      ssl.truststore.location=~/cert/kafka.client.truststore.jks
      ssl.truststore.password=test1234
      ssl.enabled.protocols=TLSv1.2,TLSv1.1
      ssl.truststore.type=JKS
      
      # Authentication settings for Connect consumers used with sink connectors
      consumer.security.protocol=SSL
      consumer.ssl.key.password=test1234
      consumer.ssl.keystore.location=~/cert/kafka.client.keystore.jks
      consumer.ssl.keystore.password=test1234
      consumer.ssl.truststore.location=~/cert/kafka.client.truststore.jks
      consumer.ssl.truststore.password=test1234
      consumer.ssl.enabled.protocols=TLSv1.2,TLSv1.1
      consumer.ssl.truststore.type=JKS
      
    2. Save your changes.
  4. Start ZooKeeper, your Kafka server, and Kafka Connect.
    cd $KAFKA_HOME
    bin/zookeeper-server-start.sh config/zookeeper.properties
    bin/kafka-server-start.sh config/server.properties
    ./bin/connect-distributed.sh config/connect-distributed.properties
    
  5. Create a Kafka topic.
    1. Create a new properties file named client.properties. This file is referenced when you use the command line tools to open a connection. Configure it to use your keystore and truststore JKS files.
      security.protocol=SSL
      ssl.keystore.location=~/cert/kafka.client.keystore.jks
      ssl.keystore.password=test1234 <keystore password>
      ssl.key.password=test1234 <key password>
      ssl.truststore.location=~/cert/kafka.client.truststore.jks
      ssl.truststore.password=test1234 <truststore password>
      
    2. Restrict the file's permissions so that other users cannot read the passwords it contains.
      chmod 0600 client.properties
    3. Use your client.properties file to make connections to a broker from the Kafka command line tools.
      $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytopic --consumer.config client.properties
      $ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic --producer.config client.properties
      
  6. Run the following command to create the connector tasks. Use the following table as a reference to adjust the command to fit your deployment. For example, topics might be set to mytopic (the topic used in the previous step), and splunk.hec.ssl.trust.store.path to the keystore that you generated earlier (~/cert/keystore.jks).
    Parameter                         Description
    topics                            The Kafka topics to ingest.
    splunk.indexes                    The destination Splunk indexes.
    splunk.hec.token                  Your HTTP Event Collector (HEC) token.
    splunk.hec.uri                    The URI of your destination Splunk HEC endpoint.
    curl localhost:8083/connectors -X POST -H "Content-Type: application/json" -d '{
      "name": "ssl_validate_certs_true",
      "config": {
        "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
        "tasks.max": "1",
        "topics": "<YOUR_TOPIC>",
        "splunk.indexes": "<SPLUNK_INDEXES>",
        "splunk.hec.uri": "<SPLUNK_HEC_URI:SPLUNK_HEC_PORT>",
        "splunk.hec.token": "<YOUR_TOKEN>",
        "splunk.hec.ssl.trust.store.path": "<KEYSTORE_LOCATION>",
        "splunk.hec.ssl.trust.store.password": "<PASSWORD_KEYSTORE>",
        "splunk.hec.ack.enabled": "false",
        "splunk.hec.ssl.validate.certs": "true"
      }
    }'
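
After you create the connector, you can optionally confirm that it and its task are running by querying the Kafka Connect REST API with the connector name used above:

    curl localhost:8083/connectors/ssl_validate_certs_true/status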
    

Configure workers and SinkTasks to work with your SSL-secured cluster

  1. Navigate to $KAFKA_HOME/config/connect-distributed.properties to configure the Kafka Connect worker and consumer settings to use SSL.
  2. Adjust the ssl.truststore.location and ssl.truststore.password settings, and their consumer.-prefixed counterparts, to reflect your setup.
    # Worker security settings are located at the top level
    security.protocol=SSL
    ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
    ssl.truststore.password=test1234
    
    # Sink security settings are prefixed with "consumer."
    consumer.security.protocol=SSL
    consumer.ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
    consumer.ssl.truststore.password=test1234
    


    There is currently no way to change the configuration for connectors individually, but if your server supports client authentication over SSL, use a separate principal for the worker and the connectors. See Confluent's documentation on configuring workers and connectors with security for more information.

  3. Start Kafka Connect.
    ./bin/connect-distributed.sh config/connect-distributed.properties
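
You can also verify from the command line that the broker is serving SSL on its listener. A minimal check with openssl, assuming the broker listens on localhost:9092 as in the earlier examples:

    openssl s_client -connect localhost:9092 -tls1_2 </dev/null

A successful handshake prints the broker's certificate chain; a failed connection or an empty certificate chain indicates that the listener is not configured for SSL.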
    

SASL/GSSAPI (Kerberos)

Configure Kafka Connect when your Kafka cluster is secured using Kerberos.

  1. Configure the Kafka Connect worker and consumer settings to use Kerberos in $KAFKA_HOME/config/connect-distributed.properties.
    # Worker security settings are located at the top level
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=GSSAPI
    sasl.kerberos.service.name=kafka
    
    # Sink security settings are prefixed with "consumer."
    consumer.security.protocol=SASL_PLAINTEXT
    consumer.sasl.mechanism=GSSAPI
    consumer.sasl.kerberos.service.name=kafka
    
  2. Modify bin/connect-distributed.sh by editing the EXTRA_ARGS environment variable.
  3. Pass in the location of the JAAS conf file. Optionally, you can specify the path to your Kerberos configuration file and set Kerberos debugging to true for troubleshooting connection issues.
    EXTRA_ARGS=${EXTRA_ARGS-'-name connectDistributed -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/root/kafka_connect_jaas.conf -Dsun.security.krb5.debug=true'}
    

    For example, a Kafka Client JAAS file using the principal connect:

    KafkaClient {
    	com.sun.security.auth.module.Krb5LoginModule required
    	useKeyTab=true
    	storeKey=true
    	keyTab="/etc/security/keytabs/connect.keytab"
    	principal="connect/_HOST@REALM";
    };
    
  4. Modify the keytab and principal settings to reflect your environment.

  5. Start Kafka Connect.
    ./bin/connect-distributed.sh config/connect-distributed.properties
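
Optionally, before starting Kafka Connect, you can confirm on the Connect host that the keytab and principal from the JAAS file are valid. A minimal check using the example values (substitute your actual keytab path, host name, and realm):

    kinit -kt /etc/security/keytabs/connect.keytab connect/_HOST@REALM
    klist

If kinit succeeds, klist shows a ticket-granting ticket for the principal.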
    

See Confluent's documentation for more information on configuring Kafka Connect using JAAS.

SASL/PLAIN

Do not run SASL/PLAIN in production without SSL.

Configure Kafka Connect worker and consumer settings to use SASL/PLAIN:

  1. Configure the Kafka Connect worker and consumer settings to use SASL/PLAIN in $KAFKA_HOME/config/connect-distributed.properties.
    # Worker security settings are located at the top level
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    
    # Sink security settings are prefixed with "consumer."
    consumer.security.protocol=SASL_SSL
    consumer.sasl.mechanism=PLAIN
    
  2. Modify bin/connect-distributed.sh by editing the EXTRA_ARGS environment variable.
  3. Pass in the location of the JAAS conf file.
    EXTRA_ARGS=${EXTRA_ARGS-'-name connectDistributed -Djava.security.auth.login.config=/root/kafka_connect_jaas.conf'}
    

    For example, a Kafka Client JAAS file for SASL/PLAIN:

    KafkaClient {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="alice"
      password="alice-secret";
    };
    
  4. Start Kafka Connect.
    ./bin/connect-distributed.sh config/connect-distributed.properties
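
To test the SASL/PLAIN credentials outside of Kafka Connect, you can optionally create a small client properties file for the Kafka console tools. The following is a sketch that reuses the user name, password, and truststore from the earlier examples; client-sasl-plain.properties is a hypothetical file name, and you should adjust the paths, credentials, and broker address for your environment:

    # client-sasl-plain.properties
    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="alice" password="alice-secret";
    ssl.truststore.location=~/cert/kafka.client.truststore.jks
    ssl.truststore.password=test1234

    bin/kafka-console-consumer.sh --bootstrap-server <BROKER>:<SASL_SSL_PORT> --topic mytopic --consumer.config client-sasl-plain.properties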
    

See Confluent's documentation for more information on configuring Kafka Connect using SASL/PLAIN.

SASL/SCRAM-SHA-256 and SASL/SCRAM-SHA-512

Configure the Kafka Connect worker and consumer settings to use SASL/SCRAM:

  1. Navigate to $KAFKA_HOME/config/connect-distributed.properties and make the following adjustments:
    # Worker security settings are located at the top level
    security.protocol=SASL_SSL
    sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)
    
    # Sink security settings are prefixed with "consumer."
    consumer.security.protocol=SASL_SSL
    consumer.sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)
    
  2. Modify bin/connect-distributed.sh by editing the EXTRA_ARGS environment variable. Pass in the location of the JAAS configuration file.
    EXTRA_ARGS=${EXTRA_ARGS-'-name connectDistributed -Djava.security.auth.login.config=/root/kafka_connect_jaas.conf'}
    

    For example, a Kafka Client JAAS file for SASL/SCRAM:

    KafkaClient {
      org.apache.kafka.common.security.scram.ScramLoginModule required
      username="alice"
      password="alice-secret";
    };
    
  3. Start Kafka Connect.
    ./bin/connect-distributed.sh config/connect-distributed.properties
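
The SCRAM credentials named in the JAAS file must already exist in your Kafka cluster. If they do not, you can create them with the kafka-configs tool. The following is a sketch that assumes a ZooKeeper-based cluster at localhost:2181 and the same user name and password as the JAAS example:

    bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
      --add-config 'SCRAM-SHA-256=[password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' \
      --entity-type users --entity-name alice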

Workers and SinkTasks now work with your SASL/SCRAM-secured cluster. See Confluent's documentation for more information on configuring Kafka Connect using JAAS.
