Enable hostname verification for Kafka data ingestion
You can perform the following steps in your Splunk Enterprise deployment. If you are using Splunk Cloud Platform, you must work with your Splunk account team to perform these steps for you.
Splunk indexers that send data to Splunk UBA with Kafka ingestion can be configured to verify the hostnames of the Splunk UBA nodes they send data to. To enable hostname verification, use an existing root certificate authority (CA) or create your own, then configure Kafka ingestion to use that CA with the following steps:
- Obtain a root certificate authority
- Update certificate configuration on each Kafka node
- Configure your Splunk search heads
By default, hostname verification is not configured and the bundled certificate provided with the Splunk UBA Kafka Ingestion App is used. The default settings in the ubakafka.conf file are as follows:
[kafka]
verify_hostname = true|false
* Default: false
ca_cert_file = <caCertFileName>
* File name of the CA used to sign the Kafka server certificate(s), in Privacy-Enhanced Mail (PEM) format.
* Put the file in the "bin/auth" directory of the app.
* The built-in certificate, ca-cert, is auto-generated. Replace it with your own.
* Default: ca-cert
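For example, you can confirm that the bundled ca-cert file is present in the app's auth directory on a search head. This is a quick check only; the path assumes the default app installation location shown later in this topic:
# List the bundled CA certificate that ships with the UBA Kafka Ingestion App.
ls -l $SPLUNK_HOME/etc/apps/Splunk-UBA-SA-Kafka/bin/auth/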
After you enable hostname verification for Kafka data ingestion, you can complete Kafka SSL ingestion. See Configure two-way SSL communication for Kafka data ingestion.
Using the keytool command
Several of the configuration steps use the keytool command. Running this command might cause the following warning message to appear:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /opt/caspida/conf/kafka/auth/server.keystore.jks -destkeystore /opt/caspida/conf/kafka/auth/server.keystore.jks -deststoretype pkcs12".
You can safely ignore this warning message.
Obtain a root certificate authority
If you don't already have a certificate from a root certificate authority (CA), you can create a self-signed root certificate using OpenSSL. Include the number of days this certificate is considered valid and store the certificate key in a secure location.
Perform the following steps to create a self-signed CA with OpenSSL:
- Create a self-signed root certificate using OpenSSL:
You can skip step 1 if you already have a root certificate authority (CA).
If you have a distributed installation of Splunk UBA, create the certificate once, and then copy it to each of your UBA nodes.
openssl req -new -x509 -keyout new-ca-key -out new-ca-cert -days <number of valid days>
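For example, the following sketch creates a CA that is valid for 365 days. The validity period and the subject name are illustrative values, not requirements:
# Create a self-signed root CA valid for one year.
# -subj skips the interactive identity prompts; the CN is an example value.
# You are prompted to set a passphrase that protects the new-ca-key file.
openssl req -new -x509 -keyout new-ca-key -out new-ca-cert -days 365 -subj "/CN=Splunk-UBA-Example-CA"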
- Retrieve the keystore location and keystore password from the /opt/caspida/conf/kafka/kafka.properties file:
If you have a distributed installation of Splunk UBA, repeat step 2 for each of your UBA nodes.
ssl.keystore.location=<keystore location>
ssl.keystore.password=<keystore password>
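For example, you can pull both values without opening the file in an editor:
# Print the keystore location and password entries from kafka.properties.
grep -E '^ssl\.keystore\.(location|password)=' /opt/caspida/conf/kafka/kafka.properties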
- Move the current CA certificate out of the way. You will need to provide the keystore location and password you obtained in the previous step:
If you have a distributed installation of Splunk UBA, repeat step 3 for each of your UBA nodes.
keytool -keystore <keystore location> -storepass <keystore password> -alias CARoot -changealias -destalias "original-caroot"
- Import the new root certificate into the key store:
If you have a distributed installation of Splunk UBA, repeat step 4 for each of your UBA nodes.
keytool -keystore <keystore location> -storepass <keystore password> -alias CARoot -importcert -file new-ca-cert
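You can optionally list the keystore entries to confirm that both the renamed original CA and the new CARoot entry are present. This uses the standard keytool -list option:
# List all entries; expect to see both the "original-caroot" and "caroot" (CARoot) aliases.
keytool -keystore <keystore location> -storepass <keystore password> -list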
Update certificate configuration on each Kafka node
For each Kafka node in UBA, perform the following steps:
- Log in to the node as the caspida user.
- Retrieve the keystore location, keystore password, and key password from the /opt/caspida/conf/kafka/kafka.properties file:
ssl.keystore.location=<keystore location>
ssl.keystore.password=<keystore password>
ssl.key.password=<key password>
- Move the current server certificate out of the way in the keystore:
keytool -keystore <keystore location> -storepass <keystore password> -alias localhost -changealias -destalias "original-localhost"
- Create a new server certificate with the same hostname that you used when setting up the UBA node. Include the number of days this certificate is considered valid:
The hostname must match the name used to set up the node.
keytool -keystore <keystore location> -storepass <keystore password> -alias localhost -genkey -keyalg RSA -validity <number of valid days> -keypass <key password> -dname CN=<hostname of the node>
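Because hostname verification depends on the certificate's common name, you can optionally inspect the new entry to confirm that the CN matches the node's hostname:
# Show the new entry in detail; the Owner line should read CN=<hostname of the node>.
keytool -keystore <keystore location> -storepass <keystore password> -list -v -alias localhost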
- Generate a certificate request for signing the server certificate:
keytool -keystore <keystore location> -storepass <keystore password> -alias localhost -certreq -file cert-file
- Sign the new server certificate with your root certificate. Include the number of days this certificate is considered valid:
openssl x509 -req -CA <new-ca-cert> -CAkey <new-ca-key> -in cert-file -out cert-signed -days <number of valid days> -CAcreateserial
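Before you import the signed certificate, you can optionally verify that it chains back to your root CA. The file names follow the examples in this topic:
# Verify the signed server certificate against the root CA.
# Expected output: cert-signed: OK
openssl verify -CAfile new-ca-cert cert-signed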
- Import the new signed certificate back into the keystore:
keytool -keystore <keystore location> -storepass <keystore password> -alias localhost -importcert -file cert-signed
- Restart the Kafka services:
/opt/caspida/bin/Caspida stop-kafka
/opt/caspida/bin/Caspida start-kafka
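After the restart, you can optionally confirm that the broker presents the new certificate. This sketch assumes the Kafka SSL listener is on port 9093, which is a common default; check your listener configuration if your port differs:
# Show the certificate the broker presents and verify it against the new root CA.
# The subject CN should be the node's hostname and verification should succeed.
openssl s_client -connect <hostname of the node>:9093 -CAfile new-ca-cert </dev/null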
Configure your Splunk search heads
Create and configure the local/ubakafka.conf file on your search heads to override the default settings in the ubakafka.conf file.
Configure a single search head
Perform the following steps on your Splunk search head:
- Copy the root CA to the auth directory under the UBA Kafka Ingestion App root directory as shown in the following example:
$SPLUNK_HOME/etc/apps/Splunk-UBA-SA-Kafka/bin/auth/new-ca-cert
- Create a new local/ubakafka.conf file under the UBA Kafka Ingestion App root directory. Set the verify_hostname parameter to true, and set the ca_cert_file parameter to the new CA file name:
[kafka]
verify_hostname = true
ca_cert_file = new-ca-cert
- Restart Splunk. Run the following command from the $SPLUNK_HOME/bin directory:
./splunk restart
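You can optionally confirm that your local settings override the defaults by running btool from the $SPLUNK_HOME/bin directory. The ubakafka argument is the base name of the .conf file:
# Show the effective ubakafka.conf settings and the file each value comes from.
./splunk btool ubakafka list --debug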
Configure a search head cluster
Perform the following steps to configure the local/ubakafka.conf file in a search head cluster:
- On each search head, copy the root CA to the auth directory under the UBA Kafka Ingestion App root directory as shown in the following example:
$SPLUNK_HOME/etc/apps/Splunk-UBA-SA-Kafka/bin/auth/new-ca-cert
- On the search head cluster deployer, create the $SPLUNK_HOME/etc/shcluster/apps/<app>/local/ubakafka.conf file. Set verify_hostname to true and ca_cert_file to the new CA file name:
[kafka]
verify_hostname = true
ca_cert_file = new-ca-cert
- Push the bundle to the search head cluster, as shown in the example after this list.
See Use the deployer to distribute apps and configuration updates in the Splunk Enterprise Distributed Search manual for more information about using the deployer to push configuration changes to search head cluster members.
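The following sketch shows the push from the deployer. The target URI and credentials are placeholder values; adjust them for your environment:
# Run on the deployer to distribute the updated app to all search head cluster members.
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<any cluster member>:8089 -auth admin:<password>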
This documentation applies to the following versions of Splunk® User Behavior Analytics Kafka Ingestion App: 1.4.3, 1.4.4, 1.4.5