Splunk® User Behavior Analytics Kafka Ingestion App

Enable hostname verification for Kafka data ingestion

Splunk indexers that send data to Splunk UBA with Kafka ingestion can be configured to verify the hostnames of the Splunk UBA nodes they send data to. By default, hostname verification is disabled, and the bundled certificate provided with the Splunk UBA Kafka Ingestion App is used. The relevant settings in ubakafka.conf are:

[kafka]
verify_hostname = true|false
* Whether to verify the Kafka server certificate hostname.
* Default: false

ca_cert_file = <caCertFileName>
* Filename of the CA certificate, in Privacy-Enhanced Mail (PEM) format, used to sign the Kafka server certificate(s).
* Put the file in the "bin/auth" directory of the app.
* The built-in certificate, ca-cert, is auto-generated; replace it with your own.
* Default: ca-cert

To enable hostname verification, you must use or create your own root certificate authority (CA) and configure Kafka ingestion to use it, as described in the following steps.

You can perform the following procedures in your Splunk Enterprise deployment. If you are using Splunk Cloud Platform, work with your Splunk account team to have these procedures performed for you.

Obtain a root CA

If you don't already have a certificate from a root certificate authority (CA), you can create a self-signed root certificate using OpenSSL. Include the number of days this certificate is considered valid and store the certificate key in a secure location.

Splunk Cloud Platform users must open a Support case to obtain a root CA.

Perform the following steps to create a self-signed CA using OpenSSL:

  1. Create a self-signed root certificate using OpenSSL:
    openssl req -new -x509 -keyout new-ca-key -out new-ca-cert -days <number of valid days>

    You can skip step 1 if you already have a root certificate authority (CA).

  2. Retrieve the keystore location and keystore password from the /opt/caspida/conf/kafka/kafka.properties file:
    ssl.keystore.location=<keystore location>
    ssl.keystore.password=<keystore password>
  3. Move the current CA certificate out of the way. You will need to provide the keystore location and password you obtained in the previous step:
    keytool -keystore <keystore location> -storepass <keystore password> -alias caroot -changealias -destalias "original-caroot"
  4. Import the new root certificate into the key store:
    keytool -keystore <keystore location> -storepass <keystore password> -alias caroot -importcert -file new-ca-cert
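
As a worked example, the commands below create a self-signed CA valid for 3650 days and import it into the keystore. The keystore path, password, and subject name shown are hypothetical placeholders for illustration; substitute the values from your own kafka.properties file:

    # Create a self-signed CA valid for 3650 days (example subject name)
    openssl req -new -x509 -keyout new-ca-key -out new-ca-cert -days 3650 -subj "/CN=example-uba-ca"
    # Inspect the new CA certificate before importing it
    openssl x509 -in new-ca-cert -noout -subject -dates
    # Rename the bundled CA entry, then import the new CA (hypothetical keystore path and password)
    keytool -keystore /opt/caspida/conf/kafka/keystore/kafka.server.keystore.jks -storepass changeit -alias caroot -changealias -destalias "original-caroot"
    keytool -keystore /opt/caspida/conf/kafka/keystore/kafka.server.keystore.jks -storepass changeit -alias caroot -importcert -file new-ca-cert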

Update certification configuration on each Kafka node

For each Kafka node in UBA, perform the following steps:

  1. Log in to the node as the caspida user.
  2. Retrieve the keystore location, keystore password, and key password from the /opt/caspida/conf/kafka/kafka.properties file:
    ssl.keystore.location=<keystore location>
    ssl.keystore.password=<keystore password>
    ssl.key.password=<key password>
  3. Move the current server cert out of the way in the keystore:
    keytool -keystore <keystore location> -storepass <keystore password> -alias localhost -changealias -destalias "original-localhost"
  4. Create a new server certificate with the same hostname as the UBA node used when setting up UBA. Include the number of days this certificate should be considered valid:

    The hostname must match the name used to set up the node.

    keytool -keystore <keystore location> -storepass <keystore password> -alias localhost -genkey -keyalg RSA -validity <number of valid days> -keypass <key password> -dname "CN=<hostname of the node>"
    
  5. Generate a certificate request for signing the server certificate:
    keytool -keystore <keystore location> -storepass <keystore password> -alias localhost -certreq -file cert-file
    
  6. Sign the new server certificate with your root certificate. Include the number of days this certificate is considered valid:
    openssl x509 -req -CA <new-ca-cert> -CAkey <new-ca-key> -in cert-file -out cert-signed -days <number of valid days> -CAcreateserial
  7. Import the new signed certificate back into the keystore:
    keytool -keystore <keystore location> -storepass <keystore password> -alias localhost -importcert -file cert-signed
  8. Restart Kafka:
    /opt/caspida/bin/Caspida stop-kafka
    /opt/caspida/bin/Caspida start-kafka
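
As an optional sanity check, which is not part of the documented procedure, you can list the keystore entries and confirm that the node's served certificate verifies against the new CA and matches its hostname. The SSL port 9093 is an assumption and may differ in your deployment; the -verify_hostname option requires OpenSSL 1.1.0 or later:

    # List keystore entries; expect the caroot, localhost, and original-* aliases
    keytool -keystore <keystore location> -storepass <keystore password> -list
    # Connect to the broker and verify the certificate chain and hostname
    openssl s_client -connect <hostname of the node>:9093 -CAfile new-ca-cert -verify_hostname <hostname of the node> </dev/null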

Configure your Splunk search heads

Create and configure a local/ubakafka.conf file on your search heads to override the default settings in ubakafka.conf.

Configure a single search head

Perform the following steps on your Splunk search head:

  1. Copy the root CA to the auth directory under the UBA Kafka Ingestion App root directory as shown in the following example:
    $SPLUNK_HOME/etc/apps/Splunk-UBA-SA-Kafka/bin/auth/new-ca-cert
  2. Create a new local/ubakafka.conf file under the UBA Kafka Ingestion App root directory. Set the verify_hostname parameter to true, and the ca_cert_file to the new CA file name:
    [kafka]
    verify_hostname = true
    ca_cert_file = new-ca-cert
  3. Restart Splunk. Run the following command from the $SPLUNK_HOME/bin directory:
    ./splunk restart
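
For reference, a minimal shell sequence for these steps, assuming the app directory shown above, might look like the following:

    # Copy the new CA into the app's auth directory
    cp new-ca-cert $SPLUNK_HOME/etc/apps/Splunk-UBA-SA-Kafka/bin/auth/
    # Create the local override, creating the local directory if needed
    mkdir -p $SPLUNK_HOME/etc/apps/Splunk-UBA-SA-Kafka/local
    printf '[kafka]\nverify_hostname = true\nca_cert_file = new-ca-cert\n' > $SPLUNK_HOME/etc/apps/Splunk-UBA-SA-Kafka/local/ubakafka.conf
    # Restart Splunk
    $SPLUNK_HOME/bin/splunk restart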

Configure a search head cluster

Perform the following steps to configure the local/ubakafka.conf file in a search head cluster:

  1. On each search head, copy the root CA to the auth directory under the UBA Kafka Ingestion App root directory as shown in the following example:
    $SPLUNK_HOME/etc/apps/Splunk-UBA-SA-Kafka/bin/auth/new-ca-cert
  2. On the search head cluster deployer, create the $SPLUNK_HOME/etc/shcluster/apps/<app>/local/ubakafka.conf file and set verify_hostname to true and ca_cert_file to the new CA file name:
    [kafka]
    verify_hostname = true
    ca_cert_file = new-ca-cert
  3. Push the bundle to the search head cluster.
    See Use the deployer to distribute apps and configuration updates in the Splunk Enterprise Distributed Search manual for more information about using the deployer to push configuration changes to search head cluster members.
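
For example, you can push the bundle from the deployer with the apply shcluster-bundle command. The target member URI and credentials below are placeholders; run the command from the $SPLUNK_HOME/bin directory on the deployer:

    ./splunk apply shcluster-bundle -target https://<any cluster member>:8089 -auth admin:<password>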
This documentation applies to the following versions of Splunk® User Behavior Analytics Kafka Ingestion App: 1.4.3

