Splunk® User Behavior Analytics

Release Notes




Known issues in Splunk UBA

This version of Splunk UBA has the following known issues and workarounds.

If no issues are listed, none have been reported.


Date filed Issue number Description
2023-10-25 UBA-18039 Unable to install UBA with two network interfaces

Workaround:
1) If you encounter the following error while running the Caspida setup, follow the steps below:

Error:

	Tue Oct 17 16:37:04 UTC 2023: adding worker node: <uba-hostname>
	Tue Oct 17 16:37:04 UTC 2023: joining worker: <uba-hostname>, cmd=sudo  kubeadm join <uba-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash>  --cri-socket=unix:///var/run/cri-dockerd.sock --ignore-preflight-errors Swap
	Tue Oct 17 16:42:04 UTC 2023: Failed to add worker: uba-hostname2.local, exiting

Steps:

a) Manually generate the join token on the management node:

sudo kubeadm token create --print-join-command

b) Run the resulting command on the second node with sudo and the necessary flags:

sudo <output of the first command> --cri-socket=unix:///var/run/cri-dockerd.sock --ignore-preflight-errors Swap --v=5
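For illustration, the assembled command will look similar to the join command shown in the error output above (the token and hash placeholders stay specific to your cluster):

sudo kubeadm join <uba-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash> --cri-socket=unix:///var/run/cri-dockerd.sock --ignore-preflight-errors Swap --v=5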

c) Check whether the failure is due to the error "dial tcp <uba-ip>:6443: connect: no route to host". If so, continue with the next step.

d) Open the 6443 port on all worker nodes:

sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp
sudo firewall-cmd --reload

e) Re-run the Caspida setup.

2) If you have not attached the second interface and have not run the Caspida setup yet, follow the steps below (a firewall-cmd sketch of steps f through h follows this list):

a) Build a server with a single interface.

b) Ensure that the interface is associated with the public firewall zone.

c) Install Splunk UBA following Splunk documentation using the node names that resolve to an IP address on this attached interface.

d) Perform post-installation sync.

e) Stop all UBA services.

f) Create a new firewall zone named "control-plane".

g) Add a new interface associated with the "control-plane" zone.

h) Add inbound firewall rules permitting SSH and HTTPS.

i) Start all UBA services.
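The following firewall-cmd sketch covers steps f through h. The interface name ens224 is a placeholder; substitute the name of your second interface:

# f) Create the new zone
sudo firewall-cmd --permanent --new-zone=control-plane
# g) Associate the second interface with the zone
sudo firewall-cmd --permanent --zone=control-plane --change-interface=ens224
# h) Permit inbound SSH and HTTPS
sudo firewall-cmd --permanent --zone=control-plane --add-service=ssh
sudo firewall-cmd --permanent --zone=control-plane --add-service=https
sudo firewall-cmd --reload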

2023-06-08 UBA-17446 Applying Ubuntu security patches removed postgresql, leaving UBA unable to start

Workaround:
Stop all UBA services:

/opt/caspida/bin/Caspida stop-all

Re-install the postgresql package, replacing <Extracted uba external package folder> in the following command with your package folder. For example, for 5.0.5 it is uba-ext-pkgs-5.0.5:

sudo dpkg --force-confold --force-all -i /home/caspida/<Extracted uba external package folder>/postgresql*.deb
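For example, with the 5.0.5 package folder named above, the command becomes:

sudo dpkg --force-confold --force-all -i /home/caspida/uba-ext-pkgs-5.0.5/postgresql*.deb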

Start all UBA services:

/opt/caspida/bin/Caspida start-all
2023-06-06 UBA-17437, UBA-17455 Error after enabling Splunk SSL certificate validation - Connection refused, Splunk host validation error: %s

Workaround:
1. Run the following sed command on the UBA system:
sed --in-place=".bak" 's/validateSplunkHost(hostname/validateSplunkHost(hostname + ":" + this._requestOptions.port/' /opt/caspida/web/caspida-ui/server/security/splunkLoginProvider.js 

2. Edit the file "/etc/caspida/local/conf/uba-site.properties" (if not already done) and add or modify the property "validate.splunk.ssl.certificate=true".

3. Run the sync-cluster command:

 /opt/caspida/bin/Caspida sync-cluster /etc/caspida/local/conf/ 

4. Restart the caspida-jobmanager

sudo service caspida-jobmanager stop
sudo service caspida-jobmanager start

5. Restart the caspida-ui

 /opt/caspida/bin/Caspida stop-ui
 /opt/caspida/bin/Caspida start-ui
2023-05-24 UBA-17335 Threat dashboard throws the error "Cannot read properties of undefined (reading 'length')" when the time is changed from UTC to local time

Workaround:
1. Change to the following directory:

cd /opt/caspida/web/zplex/server/databases/postgres

2. Replace the code snippet in the postgres.js file, which resolves the issue, by running the following command:

sed --in-place=".bak" 's/result = result.rows;/if (Array.isArray(result)) {result = [].concat(...result.map(rs => rs.rows));} else {result = result.rows;}/' /opt/caspida/web/zplex/server/databases/postgres/postgres.js
   

3. Stop the Caspida UI:

/opt/caspida/bin/Caspida stop-ui

4. Start the Caspida UI:

/opt/caspida/bin/Caspida start-ui
   
2023-05-03 UBA-17233 Model execution failure caused by ModelRegistry.json not being applied correctly for deployments with 7 nodes or more after upgrading to UBA 5.2.0

Workaround:
Note: Apply the following workaround to deployments with 7 nodes or more.

1) SSH to the UBA management node (node 1) as the caspida user.

2) Back up the original file:

cp /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow/ModelRegistry.json /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow/ModelRegistry.json.ORIG

3) Copy the ModelRegistry.json.large_deployment file to the ModelRegistry.json file:

cp /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow/ModelRegistry.json.large_deployment  /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow/ModelRegistry.json

4) Sync cluster:

/opt/caspida/bin/Caspida sync-cluster /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow/

5) Restart all services:

/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
2023-04-20 UBA-17188 Restricted sudo access for caspida: ubasudoers file missing permissions

Workaround:
Run the following commands:
sed -i '120i\           /usr/sbin/service cri-docker *, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '130i\           /sbin/service cri-docker *, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '135i\           /bin/systemctl start kubelet.service, /usr/bin/systemctl start kubelet.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '135i\           /bin/systemctl restart kubelet.service, /usr/bin/systemctl restart kubelet.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '135i\           /bin/systemctl start docker.service, /usr/bin/systemctl start docker.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '135i\           /bin/systemctl restart docker.service, /usr/bin/systemctl restart docker.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers
/opt/caspida/bin/Caspida sync-cluster /opt/caspida
2023-04-14 UBA-17151 UBA backup script fails if Redis network connection password changed from default

Workaround:
To complete a UBA backup, temporarily disable the redis password and then re-enable it afterwards:

1. Stop Caspida

/opt/caspida/bin/Caspida stop

2. Open the file /etc/caspida/local/conf/custom/splunkuba-redis.conf and comment out the requirepass line.
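As a non-interactive sketch of this step (assumes a single requirepass line; the .bak suffix keeps a backup):

sed --in-place=".bak" 's/^requirepass/#requirepass/' /etc/caspida/local/conf/custom/splunkuba-redis.conf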

3. Sync the file across the cluster

/opt/caspida/bin/Caspida sync-cluster /etc/caspida/local

4. Start Caspida

/opt/caspida/bin/Caspida start

5. Perform the UBA backup.

6. Go through steps 1-4 again, but for step 2, uncomment the requirepass setting.

2023-04-13 UBA-17148 ImagePullBackOff failure: container workers sometimes get removed from the management node iptables after upgrade on AWS

Workaround:
The other nodes that run the containers must be re-added to the iptables rules on the management node. Run the following on node 1:
sudo iptables -I DOCKER-USER -s <nodes> -i <external_interface> -j ACCEPT

Where <nodes> is the comma-separated list of nodes that run Docker images. Get the list of worker nodes by running the following:

a) grep -w container.worker.host /etc/caspida/conf/deployment/caspida-deployment.conf | cut -d"=" -f2

b) Get the impala host by running: grep -w impala.server.host /etc/caspida/conf/deployment/caspida-deployment.conf | cut -d"=" -f2

And <external_interface> is the output of route | grep default | awk '{print $8}'.
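Putting it together, the following sketch assembles the rule from those outputs (the variable names are illustrative only):

NODES=$(grep -w container.worker.host /etc/caspida/conf/deployment/caspida-deployment.conf | cut -d"=" -f2)
IMPALA=$(grep -w impala.server.host /etc/caspida/conf/deployment/caspida-deployment.conf | cut -d"=" -f2)
IFACE=$(route | grep default | awk '{print $8}')
sudo iptables -I DOCKER-USER -s "${NODES},${IMPALA}" -i "${IFACE}" -j ACCEPT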

2023-04-05 UBA-17117 TLS/SSL Weak Message Authentication Code Cipher Suites

Workaround:

For port 443:
1. Stop the caspida-ui

sudo service caspida-ui stop

2. Open uiConfig.js file

vi /opt/caspida/web/caspida-ui/server/config/uiConfig.js

3. Find the cipher string

/ciphers

4. The string will look like this

ciphers: "AES128-GCM-SHA256:!RC4:HIGH:!MD5:!aNULL:!EDH:!DES:!3DES",

5. Append the !CBC and !SHA negate conditions to deny those ciphers:

ciphers: "AES128-GCM-SHA256:!RC4:HIGH:!MD5:!aNULL:!EDH:!DES:!3DES:!CBC:!SHA",

6. Save the file

7. Start the caspida-ui

sudo service caspida-ui start
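As a non-interactive alternative to steps 2 through 6, the following sed sketch applies the same change (the --in-place=".bak" suffix keeps a backup; this assumes the cipher string appears exactly as shown in step 4):

sed --in-place=".bak" 's/!DES:!3DES",/!DES:!3DES:!CBC:!SHA",/' /opt/caspida/web/caspida-ui/server/config/uiConfig.js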


For port 10250:
1. Stop the containers

/opt/caspida/bin/Caspida stop-containers

2. Open kubeadm-conf.yaml.template file

vi /opt/caspida/conf/containerization/kubeadm-conf.yaml.template

3. Find the tlsCipherSuites string

/tlsCipherSuites

4. The string will look like this

tlsCipherSuites: [ TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA ]

5. Remove the weak CBC ciphers:

tlsCipherSuites: [ TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_GCM_SHA384 ]

6. Find the tls-cipher-suites string

/tls-cipher-suites

7. The string will look like this

tls-cipher-suites: "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA"

8. Remove the weak CBC cipher suites:

tls-cipher-suites: "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384"

9. Save the file

10. Remove containerization

/opt/caspida/bin/Caspida remove-containerization

11. Setup containerization

/opt/caspida/bin/Caspida setup-containerization

12. Stop all services

/opt/caspida/bin/Caspida stop-all

13. Start all services

/opt/caspida/bin/Caspida start-all
2023-04-03 UBA-17106 MalwareThreatDetectionModel failure: spark-defaults.conf not updated under the /var/vcap/packages/spark/conf directory after upgrading from UBA 5.1.0.x to 5.2.0

Workaround:
Note: Apply the following workaround if you are upgrading UBA from 5.1.0.x to 5.2.0. It is not needed if you are upgrading from UBA 5.0.5.x to 5.2.0.

Update the Spark configuration by running the following commands on all the nodes:

cp -v /opt/caspida/conf/spark/spark-defaults.conf /var/vcap/packages/spark/conf/spark-defaults.conf
/opt/caspida/bin/Caspida stop-spark
/opt/caspida/bin/Caspida start-spark
2023-02-02 UBA-16909 "UBA_HADOOP_SIZE key is either missing or has no value" error encountered when uba-restore is run from a backup created with the --no-data option
2023-01-31 UBA-16886 Kubelet unable to fetch container log stats for inactive pods
2023-01-25 UBA-16850 Intermittent error in the Health Monitor UI that the zookeeper-server service is not responding (Ubuntu only)
2023-01-18 UBA-16818 UBA UI not accessible after performing RHEL8 post-upgrade clean up tasks

Workaround:
1) On all UBA nodes, re-install the missing redis package:
sudo yum install redis-5.0.3-5*

2) Stop and then start all UBA services:

/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
2023-01-18 UBA-16819 UBA Spark Server failing to store failed model execution record in redis

Workaround:
On the spark master, run this command:
sed -i '7s|$|:/var/vcap/packages/spark/jars/*|' /opt/caspida/bin/SparkServer

Then, restart Spark from the management node:

/opt/caspida/bin/Caspida stop-spark
/opt/caspida/bin/Caspida start-spark
2023-01-05 UBA-16762 Benign JNDI ClassNotFoundException: Impala queries complain since the Log4j2 vulnerability was removed
2022-12-22 UBA-16722 Error in upgrade log, /bin/bash: which: line 1: syntax error: unexpected end of file
2022-12-05 UBA-16617 Repeated Kafka warning message "Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order"

Workaround:
1) On the zookeeper node (typically node 2 in a multi-node deployment), find all leader-epoch-checkpoint files:

locate leader-epoch-checkpoint

(You can also use a find command if locate isn't available.)

a) Copy the results into a script, adding ">" before each path. For example:

#!/bin/bash
> /var/vcap/store/kafka/AnalyticsTopic-0/leader-epoch-checkpoint
> /var/vcap/store/kafka/AnalyticsTopic-1/leader-epoch-checkpoint
> /var/vcap/store/kafka/AnalyticsTopic-10/leader-epoch-checkpoint
> /var/vcap/store/kafka/AnalyticsTopic-11/leader-epoch-checkpoint
...
b) Make the script executable:
chmod +x <script name>.sh
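As a sketch, steps 1a and 1b can be generated in one pass (the script name is an assumption, and this assumes the Kafka data lives under /var/vcap/store/kafka as in the example above):

echo '#!/bin/bash' > truncate_checkpoints.sh
find /var/vcap/store/kafka -name leader-epoch-checkpoint | sed 's|^|> |' >> truncate_checkpoints.sh
chmod +x truncate_checkpoints.sh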
2) On node 1, run:
/opt/caspida/bin/Caspida stop-all
3) On zookeeper node, run:
./<script name>.sh
4) On node 1, run:
/opt/caspida/bin/Caspida start-all
5) Check the logs on the zookeeper node to see if the warning messages still show up:
tail -f /var/vcap/sys/log/kafka/server.log

6) If you see the following warning repeated:

WARN Resetting first dirty offset of __consumer_offsets-17 to log start offset 3346 since the checkpointed offset 3332 is invalid. (kafka.log.LogCleanerManager$)
a) Clear the cleaner-offset-checkpoint file on the zookeeper node by running:
> /var/vcap/store/kafka/cleaner-offset-checkpoint
b) Then on node 1, run:
/opt/caspida/bin/Caspida stop-all && /opt/caspida/bin/Caspida start-all
2022-07-26 UBA-15997 Benign error messages on CaspidaCleanup: Relations do not exist, Kafka topic does not exist on ZK path
2022-07-19 UBA-15963 krb5-libs(x86-64) = 1.18.2-* is needed by krb5-devel-1.18.2-* on Oracle Enterprise Linux and RHEL

Workaround:
[5.1.0/5.1.0.1]

krb5-libs is required by the OS and cannot be removed. It must match the version of krb5-devel. If you have internet access, install the latest krb5-devel.

  1. sudo yum install krb5-devel
  2. Rerun the INSTALL.sh command

If you do not have internet access and are okay with a lower version, you can force a downgrade by running the following:

  1. sudo yum -y localinstall /home/caspida/Splunk-UBA-5.1-Packages-RHEL-8/extra_packages/rpm/hadoop/krb5-libs-1.18.2-14.el8.x86_64.rpm
  2. Rerun the INSTALL.sh command

[5.2.0]

  1. sudo yum -y localinstall /home/caspida/Splunk-UBA-5.2-Packages-RHEL-8/extra_packages/rpm/hadoop/krb5-libs-1.18.2-21.0.1.el8.x86_64.rpm
  2. sudo yum -y localinstall /home/caspida/Splunk-UBA-5.2-Packages-RHEL-8/extra_packages/rpm/hadoop/zlib-1.2.11-20.el8.x86_64.rpm
  3. Rerun the INSTALL.sh command

2022-06-30 UBA-15912 Upgrade from 5.0.5.1 to 5.1.0 or 5.2.0 (RHEL): OutputConnector certificate must be re-imported

Workaround:
Import the CA certificate (cacert) again.
2022-06-23 UBA-15885 Streaming model state will be reset on upgrade from 5.0.x to 5.1.0 or 5.2.0 - will require one week or more of data ingest to see detections from them again (Error message: Failed to deserialize object in InputStream)
2022-06-22 UBA-15882 Benign Spark error message: Could not find CoarseGrainedScheduler in spark-local.log when upgrading UBA
2022-02-14 UBA-15364 Spark HistoryServer running out of memory for large deployments with error: "java.lang.OutOfMemoryError: GC overhead limit exceeded"

Workaround:
Open the following file to edit on the Spark History Server: /var/vcap/packages/spark/conf/spark-env.sh

You can check the spark.history field in deployments.conf to find out which node runs the Spark History Server.

Update the following setting to 3G: SPARK_DAEMON_MEMORY=3G
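As a non-interactive sketch of this edit (assumes SPARK_DAEMON_MEMORY is already set in the file):

sed -i 's/^SPARK_DAEMON_MEMORY=.*/SPARK_DAEMON_MEMORY=3G/' /var/vcap/packages/spark/conf/spark-env.sh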

Afterwards, restart the spark services:

/opt/caspida/bin/Caspida stop-spark && /opt/caspida/bin/Caspida start-spark
2021-08-30 UBA-14755 Replication.err logging multiple errors - Cannot delete snapshot s_new from path /user: the snapshot does not exist.
2020-04-07 UBA-13804 Kubernetes certificates expire after one year

Workaround:
Run the following commands on the Splunk UBA master node:
/opt/caspida/bin/Caspida remove-containerization
/opt/caspida/bin/Caspida setup-containerization
/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
2019-10-07 UBA-13227 Backend anomaly and custom model names are displayed in Splunk UBA

Workaround:
Click the reload button in the web browser to force reload the UI page.
2019-08-29 UBA-13020 Anomalies migrated from test-mode to active-mode won't be pushed to ES
2019-08-06 UBA-12910 Splunk Direct - Cloud Storage does not expose src_ip field

Workaround:
When ingesting Office 365 SharePoint/OneDrive logs through Splunk Direct - Cloud Storage, add an additional field mapping in the final SPL so that src_ip is mapped from ClientIP (| eval src_ip=ClientIP). Make sure to add src_ip to the final list of fields selected with the fields command. For example:
| fields app,change_type,dest_user,file_hash,file_size,object,object_path,object_type,parent_category,parent_hash,sourcetype,src_user,tag,src_ip