Splunk UBA Kafka Ingestion App

Configure two-way SSL communication for Kafka data ingestion

Perform the following steps in your Splunk Enterprise deployment. If you are using Splunk Cloud Platform, work with your Splunk account team, who can perform these steps for you.

You can change from Simple Authentication and Security Layer (SASL) authentication to two-way Secure Sockets Layer (SSL) authentication for communication between the indexers on your Splunk platform and Kafka in Splunk UBA.

Complete the steps in the following sections:

Using the keytool command

Several of the configuration steps use the keytool command, which might cause the following warning message to appear:

The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /opt/caspida/conf/kafka/auth/server.keystore.jks -destkeystore /opt/caspida/conf/kafka/auth/server.keystore.jks -deststoretype pkcs12".


You can safely ignore this warning message.
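
If you prefer to eliminate the warning, you can optionally migrate the keystore to the PKCS12 format by running the command the message suggests, adjusting the keystore path as needed. Migration is not required for any of the steps in this topic:

    keytool -importkeystore -srckeystore /opt/caspida/conf/kafka/auth/server.keystore.jks -destkeystore /opt/caspida/conf/kafka/auth/server.keystore.jks -deststoretype pkcs12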

Obtain a root certificate authority

If you don't already have a certificate from a root certificate authority (CA), you can create a self-signed root certificate using OpenSSL. Include the number of days the certificate is considered valid, and store the certificate key in a secure location.

If you already have a root CA certificate, you can skip step 1.

Perform the following steps to create a self-signed CA using OpenSSL:

  1. Create a self-signed root certificate using OpenSSL:
    openssl req -new -x509 -keyout new-ca-key -out new-ca-cert -days <number of valid days>
  2. Retrieve the keystore location and keystore password from the /opt/caspida/conf/kafka/kafka.properties file:
    ssl.keystore.location=<keystore location>
    ssl.keystore.password=<keystore password>
  3. Move the current CA certificate out of the way. You will need to provide the keystore location and password you obtained in the previous step:
    keytool -keystore <keystore location> -storepass <keystore password> -alias CARoot -changealias -destalias "original-caroot"
  4. Import the new root certificate into the key store:
    keytool -keystore <keystore location> -storepass <keystore password> -alias CARoot -importcert -file new-ca-cert
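
To confirm that the new root certificate was imported, you can optionally list its keystore entry, using the same keystore location and password from step 2:

    keytool -keystore <keystore location> -storepass <keystore password> -list -alias CARoot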

Update certificate configuration on each Kafka node

For each Kafka node in UBA, perform the following steps:

  1. Log in to the node as the caspida user.
  2. Retrieve the keystore location, keystore password, and key password from the /opt/caspida/conf/kafka/kafka.properties file:
    ssl.keystore.location=<keystore location>
    ssl.keystore.password=<keystore password>
    ssl.key.password=<key password>
  3. Move the current server certificate out of the way in the keystore:
    keytool -keystore <keystore location> -storepass <keystore password> -alias localhost -changealias -destalias "original-localhost"
  4. Create a new server certificate using the same hostname of the UBA node that was used when setting up UBA. Include the number of days this certificate is considered valid:

    The hostname must match the name used to set up the node.

    keytool -keystore <keystore location> -storepass <keystore password> -alias localhost -genkey -keyalg RSA -validity <number of valid days> -keypass <key password> -dname "CN=<hostname of the node>"
    
  5. Generate a certificate request for signing the server certificate:
    keytool -keystore <keystore location> -storepass <keystore password> -alias localhost -certreq -file cert-file
    
  6. Sign the new server certificate with your root certificate. Include the number of days this certificate is considered valid:
    openssl x509 -req -CA new-ca-cert -CAkey new-ca-key -in cert-file -out cert-signed -days <number of valid days> -CAcreateserial
  7. Import the new signed certificate back into the keystore:
    keytool -keystore <keystore location> -storepass <keystore password> -alias localhost -importcert -file cert-signed
  8. Import the root CA into a new truststore:
    keytool -keystore /opt/caspida/conf/kafka/auth/server.truststore.jks -storepass <keystore password> -alias CARoot -importcert -file new-ca-cert
  9. Modify /opt/caspida/conf/kafka/kafka.properties as follows:
    1. Change the following line:
      listeners=PLAINTEXT://:9092,SASL_SSL://:9093

      To:

      listeners=PLAINTEXT://:9092,SSL://:9093
    2. Add the following lines:
      ssl.truststore.location=/opt/caspida/conf/kafka/auth/server.truststore.jks
      ssl.truststore.password=<keystore password>
      ssl.client.auth=required
  10. Restart Kafka:
    /opt/caspida/bin/Caspida stop-kafka
    /opt/caspida/bin/Caspida start-kafka
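
After Kafka restarts, you can optionally confirm that the broker presents the new server certificate on the SSL listener. The following sketch uses the standard openssl s_client tool; because ssl.client.auth=required is set, the handshake might be rejected without a client certificate, but the server certificate chain is still displayed:

    openssl s_client -connect <hostname of the node>:9093 -CAfile new-ca-cert </dev/null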

Configure your Splunk search heads

Create and configure the local/ubakafka.conf file on your search heads to override the default settings in the ubakafka.conf file.
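
For example, on an instance where the app is installed, you can create the local directory alongside the app's default directory. The app directory name shown here is a placeholder for the actual installation directory of the UBA Kafka Ingestion App:

    cd $SPLUNK_HOME/etc/apps/<UBA Kafka Ingestion App directory>
    mkdir -p local
    # Put only the settings you want to override in local/ubakafka.conf;
    # the defaults remain in default/ubakafka.conf.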

Configure a single search head

Perform the following steps on your Splunk search head:

  1. Create a new client key:
    openssl genrsa -out client-key 4096
  2. Create a certificate request:
    openssl req -new -key client-key -out client-csr
  3. Create a client certificate with your root CA and certificate request. Include the number of days this certificate is considered valid.
    Store the root CA certificate, client key, and client certificate in the bin/auth directory of the UBA Kafka Ingestion App.
    openssl x509 -req -CA new-ca-cert -CAkey new-ca-key -in client-csr -out client-cert -days <number of valid days> -CAcreateserial
  4. Create or update the local/ubakafka.conf file under the UBA Kafka Ingestion App root directory:
    [kafka]
    security_protocol = SSL
    ca_cert_file = new-ca-cert
    client_cert_file = client-cert
    client_key_file = client-key
    
  5. Restart Splunk. Run the following command from the $SPLUNK_HOME/bin directory:
    ./splunk restart
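
To confirm that the client certificate chains back to your root CA, you can run an optional OpenSSL check from the directory that contains the certificate files:

    openssl verify -CAfile new-ca-cert client-cert

The command prints client-cert: OK when the chain is valid.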

Configure a search head cluster

Perform the following steps to configure a search head cluster:

  1. Create a new client key:
    openssl genrsa -out client-key 4096
  2. Create a certificate request:
    openssl req -new -key client-key -out client-csr
  3. Create a client certificate with your root CA and certificate request. Include the number of days this certificate is considered valid.
    openssl x509 -req -CA new-ca-cert -CAkey new-ca-key -in client-csr -out client-cert -days <number of valid days> -CAcreateserial
  4. Copy the root CA certificate, client key, and client certificate to the bin/auth directory of the UBA Kafka Ingestion App on each search head.
  5. On the search head cluster deployer, create or update the local/ubakafka.conf file under the UBA Kafka Ingestion App root directory:
    [kafka]
    security_protocol = SSL
    ca_cert_file = new-ca-cert
    client_cert_file = client-cert
    client_key_file = client-key
  6. Push the bundle to the search head cluster.
    See Use the deployer to distribute apps and configuration updates in the Splunk Enterprise Distributed Search manual for more information about using the deployer to push configuration changes to search head cluster members.
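
For example, a typical push from the deployer looks like the following, where the target URI and credentials are placeholders for your environment:

    splunk apply shcluster-bundle -target https://<search head member>:8089 -auth admin:<password>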