Splunk® User Behavior Analytics

Release Notes

This documentation does not apply to the most recent version of Splunk® User Behavior Analytics. For documentation on the most recent version, go to the latest release.

Known issues in Splunk UBA

This version of Splunk UBA has the following known issues and workarounds.

If no issues are listed, none have been reported.


Date filed Issue number Description
2023-06-08 UBA-17446 After applying Ubuntu security patches, postgresql is removed, preventing UBA from starting

Workaround:
Stop all UBA services:
/opt/caspida/bin/Caspida stop-all

Re-install the postgres package, replacing <Extracted uba external package folder> with your extracted package folder in the command below. For example, for 5.0.5 it is uba-ext-pkgs-5.0.5:

sudo dpkg --force-confold --force-all -i /home/caspida/<Extracted uba external package folder>/postgresql*.deb

Start all UBA services:

/opt/caspida/bin/Caspida start-all
2023-04-20 UBA-17188 Restricted sudo access for caspida: ubasudoers file is missing permissions

Workaround:
Run the following commands:
sed -i '120i\           /usr/sbin/service cri-docker *, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '130i\           /sbin/service cri-docker *, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '135i\           /bin/systemctl start kubelet.service, /usr/bin/systemctl start kubelet.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '135i\           /bin/systemctl restart kubelet.service, /usr/bin/systemctl restart kubelet.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '135i\           /bin/systemctl start docker.service, /usr/bin/systemctl start docker.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '135i\           /bin/systemctl restart docker.service, /usr/bin/systemctl restart docker.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers
/opt/caspida/bin/Caspida sync-cluster /opt/caspida
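To confirm the edited file still parses cleanly, visudo can check an arbitrary sudoers file:

sudo visudo -cf /opt/caspida/etc/sudoers.d/ubasudoers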
2023-02-02 UBA-16909 "UBA_HADOOP_SIZE key is either missing or has no value" error encountered when uba-restore is run from a backup created with the --no-data option
2023-02-02 UBA-16910, UBA-16716 UnsupportedOperationException: empty.reduceLeft in BoxPatternModel
2023-02-02 UBA-16911, UBA-16292 SuspiciousEmailDetectionModel NullPointerException
2023-01-25 UBA-16850 Intermittent error in the Health Monitor UI that the zookeeper-server service is not responding (Ubuntu only)
2023-01-20 UBA-16831 Entering a UBA anomaly action rule in the UI results in the error "cannot extract elements from a scalar"

Workaround:
Instead of typing the port value into the search field, select the port directly from the dropdown values provided, and ignore the error. The anomaly action rule is created successfully.
2023-01-18 UBA-16818 UBA UI not accessible after performing RHEL8 post-upgrade clean up tasks

Workaround:
1) On all UBA nodes, re-install the missing redis package:
sudo yum install redis-5.0.3-5*

2) Stop and then start all UBA services:

/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
2023-01-18 UBA-16819 UBA Spark Server failing to store failed model execution record in redis

Workaround:
On the spark master, run this command to append the Spark jars directory to line 7 of the SparkServer script:
sed -i '7s|$|:/var/vcap/packages/spark/jars/*|' /opt/caspida/bin/SparkServer

Then, restart Spark from the management node:

/opt/caspida/bin/Caspida stop-spark
/opt/caspida/bin/Caspida start-spark
2023-01-09 UBA-16774 The Caspida start-all command should check and wait for all job agents to be available before issuing the command to restart live data sources

Workaround:
Manually restart the data sources.
2022-12-05 UBA-16617 Repeated Kafka warning message "Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order"

Workaround:
1) On the zookeeper node (typically node 2 in a multi-node deployment), find all leader-epoch-checkpoint files:
locate leader-epoch-checkpoint
You can also use a find command if locate isn't available, as shown below.
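For example, a find equivalent, assuming the Kafka data directory shown in the sample output below:

sudo find /var/vcap/store/kafka -name leader-epoch-checkpoint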

a) Copy the results into a script, adding ">" before each path. For example:

#!/bin/bash
> /var/vcap/store/kafka/AnalyticsTopic-0/leader-epoch-checkpoint
> /var/vcap/store/kafka/AnalyticsTopic-1/leader-epoch-checkpoint
> /var/vcap/store/kafka/AnalyticsTopic-10/leader-epoch-checkpoint
> /var/vcap/store/kafka/AnalyticsTopic-11/leader-epoch-checkpoint
...
b) Make script executable:
chmod +x <script name>.sh
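Alternatively, steps (a) and (b) can be done in one pass; a minimal sketch, where the script name is your choice:

{ echo '#!/bin/bash'; locate leader-epoch-checkpoint | sed 's|^|> |'; } > truncate-checkpoints.sh
chmod +x truncate-checkpoints.sh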
2) On node 1, run:
/opt/caspida/bin/Caspida stop-all
3) On the zookeeper node, run:
./<script name>.sh
4) On node 1, run:
/opt/caspida/bin/Caspida start-all
5) Check the logs to see whether the warning messages still appear on the zookeeper node:
tail -f /var/vcap/sys/log/kafka/server.log

6) If you see the following warning repeated:

WARN Resetting first dirty offset of __consumer_offsets-17 to log start offset 3346 since the checkpointed offset 3332 is invalid. (kafka.log.LogCleanerManager$)
a) Clear cleaner-offset-checkpoint on the zookeeper node by running:
> /var/vcap/store/kafka/cleaner-offset-checkpoint
b) Then on node 1, run:
/opt/caspida/bin/Caspida stop-all && /opt/caspida/bin/Caspida start-all
2022-12-05 UBA-16620 After upgrading UBA to 5.1.0.1, the Health Monitor shows the error "One or more offline models have not executed for 72h 0m" (Status Code: OML-2)
2022-11-17 UBA-16555 Assets cache update query does not support multi-value data and causes Postgres log size increase

Workaround:
The number of errors logged can be reduced by increasing the cache size, which avoids cache updates.
  1. Delete the contents of /var/vcap/sys/log/postgresql/.
  2. Increase the cache size setting identity.resolution.assetscache.capacity in the file uba-env.properties, as shown in the sketch after this list.
  3. Sync the config file: /opt/caspida/bin/Caspida sync-cluster
  4. Restart UBA with stop-all and start-all.
  5. Check the cache usage using the command irscan -R -m.
  6. If the cache is nearly 95% used, increase the setting from step 2 and check the usage again.
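A minimal sketch of the edit in step 2, assuming the standard local config location; the capacity value is illustrative, not a recommendation:

# in /etc/caspida/local/conf/uba-env.properties (path assumed)
identity.resolution.assetscache.capacity=1000000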

2022-11-16 UBA-16508, UBA-16597 Impala killed because of OOM

Workaround:
On the management node, copy /etc/default/impala to /etc/impala/conf, the working directory where the Impala image is built:
sudo cp /etc/default/impala /etc/impala/conf

Edit the file /opt/caspida/containerization/docker/impala/Dockerfile and add a command to copy the file into the container:

vi /opt/caspida/containerization/docker/impala/Dockerfile

Insert:

COPY impala /etc/default/

Stop UBA:

/opt/caspida/bin/Caspida stop

Rebuild containers:

/opt/caspida/bin/Caspida remove-containerization
/opt/caspida/bin/Caspida setup-containerization 

Stop the impala service:

/opt/caspida/bin/Caspida stop-impala

On the impala node (node1 on 1-10 node deployments, node2 on 20-node deployments), get the docker images:

sudo docker images

Remove the previous impala image:

sudo docker rmi -f <IMPALA IMAGE ID>

Example:

sudo docker rmi -f <Hostname>:5000/impala

On the management node start the impala service:

 /opt/caspida/bin/Caspida start-impala

On the impala node (node1 on 1-10 node deployments, node2 on 20-node deployments), validate the new UBA impala image. Get the container ID (CONTAINER_ID):

sudo docker ps -f name=impala | tail -1 | awk '{ print $1 }'

Enter the running container:

sudo docker exec -it <CONTAINER_ID> bash

Verify that the -mem_limit flag under IMPALA_SERVER_ARGS in /etc/default/impala shows the corresponding value:
For a 7-node UBA deployment: -mem_limit=38%
For a 10-node UBA deployment: -mem_limit=38%
For a 20-node UBA deployment: -mem_limit=60%
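From inside the container, one quick way to check the flag:

grep -o -- '-mem_limit=[0-9]*%' /etc/default/impala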

On the management node restart UBA:

/opt/caspida/bin/Caspida start
2022-11-15 UBA-16492 Hadoop Upgrade creates large backup folders. Need to finalize upgrade.
2022-11-10 UBA-16470 /var/vcap out of space on Hadoop DataNodes. Hadoop Upgrade creates large backup folders.

Workaround:
1) On the UBA management node, stop caspida services:
/opt/caspida/bin/Caspida stop

2) Check the status of hadoop upgrade:

sudo -u hdfs hdfs dfsadmin -upgrade query

Status will show "Upgrade not finalized".

3) Finalize the hadoop upgrade by running the following commands:

sudo -u hdfs hdfs dfsadmin -finalizeUpgrade
sudo service hadoop-hdfs-namenode restart

4) Restart UBA:

/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
2022-10-28 UBA-16424 Custom models don't run automatically in 5.1.0.X

Workaround:
On the management node, add custom models that are not running automatically into cron jobs. For example:

30 3 * * * /opt/caspida/bin/uba-spark/trigger-models.sh CustomModel1, CustomModel2
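One way to install the cron entry non-interactively; a sketch, assuming the job should run as the caspida user and that the model names are placeholders for your own:

(crontab -l 2>/dev/null; echo '30 3 * * * /opt/caspida/bin/uba-spark/trigger-models.sh CustomModel1, CustomModel2') | crontab -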

Restart UBA:
/opt/caspida/bin/Caspida stop
/opt/caspida/bin/Caspida start

2022-09-06 UBA-16289 7-node UBA deployment has an invalid value for system.messaging.rawdatatopic.retention.time in caspidatunables-7_node.conf

Workaround:
SSH into the management node as the caspida user.

1. Edit the following two files:

/etc/caspida/local/conf/deployment/uba-tuning.properties
/opt/caspida/conf/deployment/recipes/caspida/caspidatunables-7_node.conf
In both files, correct the value of the field system.messaging.rawdatatopic.retention.time to be 1d instead of 1dq; see the sed sketch below.
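If you prefer a non-interactive edit, a sed sketch, assuming the property appears as a key=value line in each file:

sed -i 's/^\(system\.messaging\.rawdatatopic\.retention\.time=\)1dq$/\11d/' \
  /etc/caspida/local/conf/deployment/uba-tuning.properties \
  /opt/caspida/conf/deployment/recipes/caspida/caspidatunables-7_node.conf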

2. Sync the cluster:

/opt/caspida/bin/Caspida sync-cluster /etc/caspida/local/conf/deployment/
/opt/caspida/bin/Caspida sync-cluster /opt/caspida/conf/deployment/recipes/caspida/

3. Restart the cluster:

/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
2022-08-30 UBA-16254 Offline Models failed to execute due to SuspiciousEmailDetectionModel

Workaround:
On the management node, open the file /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow/ModelRegistry.json.

Scroll down to the model with the name "SuspiciousEmailDetectionModel" and change the field "enabled" from true to false.
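To locate the entry quickly, a grep sketch (the five lines of trailing context are arbitrary):

grep -n -A 5 'SuspiciousEmailDetectionModel' /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow/ModelRegistry.json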

Run the following command on the management node:
/opt/caspida/bin/Caspida sync-cluster /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow

Restart UBA:
/opt/caspida/bin/Caspida stop
/opt/caspida/bin/Caspida start

2022-08-23 UBA-16206 Problem with SSO settings EntityID and Issuer in UI
2022-08-21 UBA-16182 Troubleshooting UI page for the Hadoop Service App does not load

Workaround:
Run the following commands on the Splunk UBA management node:
sed -i 's/50070/9870/g' /opt/caspida/web/caspida-ui/server/proxies/plugin.js
/opt/caspida/bin/Caspida stop-ui
/opt/caspida/bin/Caspida start-ui
2022-08-09 UBA-16087 dpkg: error processing package grub-pc (--configure): installed grub-pc package post-installation script subprocess returned error exit status 1

Workaround:
1. Complete the configuration of grub-pc manually.
sudo apt install --fix-broken

2. You will be prompted to choose where to install grub-pc. Choose your boot disk. If you're not sure, you can choose them all.

3. Clean up the remaining packages:

sudo apt autoremove

4. Disable postgres auto-start:

sudo systemctl disable postgresql

5. Clean up leftover install files:

sudo rm -fv /etc/apt/apt.conf.d/splunkuba-local

6. Restart your server:

sudo reboot
2022-07-27 UBA-16004 "Class name not accepted" java.io.InvalidClassException when starting a Splunk connector data source and Splunk defined Source Type

Workaround:
Append "com.caspida.clients.jobmanager.JobDataParams$SourceTypes" to the field "caspida.jobexecutor.allowedDeserializationClasses" in the file "/opt/caspida/conf/uba-default.properties", then restart affected data sources
2022-07-26 UBA-15997 Benign error messages on CaspidaCleanup: Relations do not exist, Kafka topic does not exist on ZK path
2022-07-19 UBA-15963 krb5-libs(x86-64) = 1.18.2-* is needed by krb5-devel-1.18.2-* on Oracle Enterprise Linux and RHEL

Workaround:
[5.1.0/5.1.0.1]

krb5-libs is required by the OS and cannot be removed. It must match the version of krb5-devel. If you have internet access, install the latest krb5-devel.

  1. sudo yum install krb5-devel
  2. Rerun the INSTALL.sh command

If you do not have internet access and are okay with a lower version, you can force a downgrade by running the following:

  1. sudo yum -y localinstall /home/caspida/Splunk-UBA-5.1-Packages-RHEL-8/extra_packages/rpm/hadoop/krb5-libs-1.18.2-14.el8.x86_64.rpm
  2. Rerun the INSTALL.sh command

[5.2.0]

  1. sudo yum -y localinstall /home/caspida/Splunk-UBA-5.2-Packages-RHEL-8/extra_packages/rpm/hadoop/krb5-libs-1.18.2-21.0.1.el8.x86_64.rpm
  2. sudo yum -y localinstall /home/caspida/Splunk-UBA-5.2-Packages-RHEL-8/extra_packages/rpm/hadoop/zlib-1.2.11-20.el8.x86_64.rpm
  3. Rerun the INSTALL.sh command

2022-07-19 UBA-15961 keyutils-libs = 1.5.10-6.el8 is needed by (installed) keyutils-1.5.10-6.el8.x86_64

Workaround:
An existing keyutils package interferes with the UBA installation. Keyutils is not needed by UBA and can be removed to resolve the dependencies.
  1. sudo yum remove keyutils
  2. Rerun the INSTALL.sh command

2022-06-30 UBA-15912 Upgrade from 5.0.5.1 to 5.1.0 or 5.2.0 (RHEL): OutputConnector certificate must be re-imported

Workaround:
Import the CA certificate again.
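The exact import steps depend on your output connector configuration; as a rough sketch only, a Java keystore import looks like the following, with every path and alias hypothetical:

keytool -importcert -alias output-connector-ca -file /path/to/cacert.pem -keystore /path/to/uba/keystore.jks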
2022-06-21 UBA-15871 UI will error out when trying to edit the name of an output connector without retyping the password "EVP_DecryptFinal_ex:bad decrypt"

Workaround:
When editing the name of an output connector, also retype the password of the Splunk instance to avoid the error.
2022-06-07 UBA-15811 Large UBA deployments hit java.lang.OutOfMemoryError due to Hypergraph based Malware Threat Detection Model

Workaround:
On the management node, open the file /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow/ModelRegistry.json.

Scroll down to the model with the name "MalwareThreatDetectionModel" and change the field "enabled" from true to false.

Run the following command on the management node:
/opt/caspida/bin/Caspida sync-cluster /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow

Restart UBA:
/opt/caspida/bin/Caspida stop
/opt/caspida/bin/Caspida start

2022-02-14 UBA-15364 Spark HistoryServer running out of memory for large deployments with error: "java.lang.OutOfMemoryError: GC overhead limit exceeded"

Workaround:
Open the following file to edit on the Spark History Server: /var/vcap/packages/spark/conf/spark-env.sh

You can check the spark.history field in deployments.conf to find out which node runs the Spark History Server, as shown below.
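For example, a grep sketch; the deployments.conf path below is an assumption, so adjust it to your install:

grep spark.history /opt/caspida/conf/deployment/deployments.conf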

Update the following setting to 3G: SPARK_DAEMON_MEMORY=3G

Afterwards, restart the spark services:

/opt/caspida/bin/Caspida stop-spark && /opt/caspida/bin/Caspida start-spark
2021-08-30 UBA-14755 Replication.err logging multiple errors - Cannot delete snapshot s_new from path /user: the snapshot does not exist.
2020-10-30 UBA-14287, UBA-17142 Issue when deleting a data source that references another UBA original primary cluster
2020-04-10 UBA-13810 CSV Export of 3000 or More Anomalies Fails
2020-04-07 UBA-13804 Kubernetes certificates expire after one year

Workaround:
Run the following commands on the Splunk UBA master node:
/opt/caspida/bin/Caspida remove-containerization
/opt/caspida/bin/Caspida setup-containerization
/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
2019-10-07 UBA-13227 Backend anomaly and custom model names are displayed in Splunk UBA

Workaround:
Click the reload button in the web browser to force reload the UI page.
2019-08-29 UBA-13020 Anomalies migrated from test-mode to active-mode won't be pushed to ES
2019-08-06 UBA-12910 Splunk Direct - Cloud Storage does not expose src_ip field

Workaround:
When ingesting Office 365 Sharepoint/OneDrive logs through Splunk Direct - Cloud Storage, add an additional field mapping for src_ip in the final SPL to be mapped from ClientIP (| eval src_ip=ClientIP). Make sure to add src_ip in the final list of fields selected using the fields command. For example:
| fields app,change_type,dest_user,file_hash,file_size,object,object_path,object_type,parent_category,parent_hash,sourcetype,src_user,tag,src_ip
Last modified on 11 August, 2023

This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.1.0.1

