Troubleshoot your Splunk UBA installation
The following are issues you might experience when installing Splunk UBA and how to resolve them.
Chrome browser reports ERR_CERT_COMMON_NAME_INVALID
You might see the "ERR_CERT_COMMON_NAME_INVALID" message in the Chrome browser.
The Subject Alternative Names are missing from the certificate, and the OpenSSL configuration does not prompt for them.
Generate the CSR with the Subject Alternative Names included, then have the CSR signed by the root CA.
- Create the /opt/caspida/conf/deployment/templates/local_conf/ssl directory if it does not already exist.
- Create a file named openssl-altname.cnf in this directory. The file must contain the following:
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
attributes = req_attributes

[ req_distinguished_name ]
countryName = Country Name (2 letter code)
stateOrProvinceName = State or Province Name (full name)
localityName = Locality Name (eg, city)
organizationName = Organization Name (eg, company)
organizationalUnitName = Organizational Unit Name (eg, section)
commonName = Common Name (e.g. server FQDN or YOUR name)

[ req_attributes ]
subjectAltName = Alternative DNS names, Email addresses or IPs (comma separated, e.g. DNS:test.example.com, DNS:test, IP:192.168.0.1)
- Generate the CSR:
openssl req -sha256 -new -key my-server.key.pem -out myCACertificate.csr -config /opt/caspida/conf/deployment/templates/local_conf/ssl/openssl-altname.cnf
When prompted for the alternative DNS names, enter them separated by commas. The IP address is required if you will use it to access Splunk UBA. For example:
DNS:test.example.com, DNS:test, IP:192.168.0.1
- Verify that the CSR contains the Subject Alternative Names:
openssl req -text -noout -verify -in myCACertificate.csr | grep DNS
- Sign this CSR with the root CA.
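The comma-separated SAN string entered at the prompt follows a simple `type:value` pattern. As an illustration only (this helper is hypothetical, not part of Splunk UBA or OpenSSL), it can be parsed like this:

```python
# Sketch: split a comma-separated SAN string of the form shown above
# (e.g. "DNS:test.example.com, DNS:test, IP:192.168.0.1") into
# (type, value) pairs. Illustrative helper, not a Splunk UBA tool.

def parse_san(san):
    entries = []
    for item in san.split(","):
        # Split on the first colon only: "DNS:test.example.com"
        # becomes ("DNS", "test.example.com").
        kind, _, value = item.strip().partition(":")
        entries.append((kind, value))
    return entries

print(parse_san("DNS:test.example.com, DNS:test, IP:192.168.0.1"))
# → [('DNS', 'test.example.com'), ('DNS', 'test'), ('IP', '192.168.0.1')]
```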
You see the "setup containerization failed" error
The setup-containerization command in /opt/caspida/bin/Caspida runs as part of the Splunk UBA setup or upgrade procedure. If the script fails, you see the error message "setup containerization failed".
If you see this error, check the following causes and solutions:

| Cause | Solution |
| --- | --- |
| The root disk may be more than 90% full. | Free up additional disk space and re-run the script. |
| HTTP or HTTPS proxy servers are configured, causing the script to fail. | To bypass the HTTP/HTTPS proxy configuration, use any text editor to set the appropriate proxy exclusions, then re-run the script. |
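To check the first cause, root disk fullness, you can inspect filesystem usage directly. A minimal sketch using only the Python standard library (the 90% figure comes from the cause listed above):

```python
# Sketch: report how full the root filesystem is, to check the
# "root disk may be more than 90% full" cause described above.
import shutil

def root_disk_pct_used(path="/"):
    """Return the percentage of the filesystem at `path` that is used."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

pct = root_disk_pct_used()
print(f"root disk {pct:.1f}% used")
if pct > 90:
    print("Free up disk space before re-running the setup.")
```

The same information is available from `df -h /` on the command line.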
You see the "Warning: rebuild-uba-images failed" message
When installing content updates, you might see the "Warning: rebuild-uba-images failed" message in Splunk UBA.
The content update installer is unable to rebuild the container images.
Perform the following steps if you see this message:
- Stop the containers.
- Rebuild the Splunk UBA images.
- Start the containers.
Rebalance HDFS storage to manage disks getting full in distributed deployments
Disks in your multi-node Splunk UBA deployment are filling up, for example exceeding 90% of capacity.
Splunk UBA uses HDFS for storing analytics data. Over time, the storage of this data across the data nodes in your Splunk UBA cluster can become imbalanced. Splunk UBA does not automatically rebalance HDFS storage across your cluster, meaning that some nodes may have much higher HDFS storage than others.
Perform the following tasks to examine how HDFS storage is distributed throughout your cluster.
- Log in to the management node as the caspida user.
- Run the /opt/caspida/bin/utils/uba_health_check.sh script. The output is saved to a file.
- Edit the saved output and search for hdfs dfsadmin.
- Examine the DFS Used% value on each node. Consider rebalancing HDFS if there is a large difference among the nodes. For example, one node is using 80% of its HDFS capacity while another is using 55%.
- To manually rebalance HDFS storage, run the balancer with a threshold: the maximum number of percentage points that any node's HDFS utilization can deviate from the cluster average. For example, to keep every node within 5 percentage points, use the following command:
sudo -u hdfs hdfs balancer -threshold 5
HDFS rebalancing occurs on all the data nodes in your cluster. You can view your deployment configuration in the /opt/caspida/conf/deployment/recipes/deployment-<number_of_nodes>.conf file. For example, in a seven-node cluster, look in /opt/caspida/conf/deployment/recipes/deployment-7_node.conf. The data nodes are defined in the hadoop.datanode.host property. See Where services run in Splunk UBA in Administer Splunk User Behavior Analytics.
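The threshold logic can be sketched in Python. The node names and utilization figures below are hypothetical (the 80% and 55% values echo the example above); the real balancer compares each DataNode's utilization to the cluster average:

```python
# Sketch: decide which DataNodes exceed the balancer threshold.
# Utilization figures are hypothetical examples; in practice the
# "DFS Used%" values come from the hdfs dfsadmin report described above.

def nodes_over_threshold(used_pct, threshold):
    """Return nodes whose utilization deviates from the cluster
    average by more than `threshold` percentage points, mirroring
    how `hdfs balancer -threshold` judges imbalance."""
    avg = sum(used_pct.values()) / len(used_pct)
    return sorted(node for node, pct in used_pct.items()
                  if abs(pct - avg) > threshold)

usage = {"node5": 80.0, "node6": 55.0, "node7": 60.0}  # cluster average: 65.0
print(nodes_over_threshold(usage, 5))
# → ['node5', 'node6']  (node7 is within 5 points of the average)
```

Running the balancer moves blocks between DataNodes until every node falls back inside the threshold band.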
See also: Request and add a new certificate to Splunk UBA to access the Splunk UBA web interface.
This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.2.0, 5.2.1, 5.3.0