Splunk® User Behavior Analytics

Release Notes


Known issues in Splunk UBA

This version of Splunk UBA has the following known issues and workarounds.

If no issues are listed, none have been reported.


Date filed Issue number Description
2023-11-06 UBA-18067 Ubuntu Vulnerability Mitigation for CVE-2022-1292

Workaround:
Run: sudo apt upgrade libssl1.0.2-dbg
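To confirm the upgrade, you can list the installed libssl packages; this is a hedged verification step, not part of the official workaround:
dpkg -l | grep -i libssl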
2023-11-06 UBA-18068 Vulnerability Mitigation for CVE-2023-44487

Workaround:
Note: Run the following workaround as the caspida user.

1. Remove go.mod from the UBA upgrade and/or install packages, as shown in the hedged sketch below.
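A hedged sketch of step 1, assuming the UBA upgrade/install packages were extracted to a directory in the caspida user's home (the path is a placeholder; adjust it to your environment):

# <uba-package-dir> is a placeholder for the extracted UBA package directory.
find /home/caspida/<uba-package-dir> -name go.mod -type f -delete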

2. Upgrade node.js to 20.9.0.

a. Stop UBA on the management node:
/opt/caspida/bin/Caspida stop-all

b. Remove pre-existing node.js files on all UBA nodes:

sudo rm -rvf /usr/local/bin/corepack
sudo rm -rvf /usr/local/bin/node
sudo rm -rvf /usr/local/bin/npm
sudo rm -rvf /usr/local/bin/npx
sudo rm -rvf /usr/local/share/systemtap/tapset/node.stp
sudo rm -rvf /usr/local/share/doc/node
sudo rm -rvf /usr/local/share/man/man1/node.1
sudo rm -rvf /usr/local/lib/node_modules/
sudo rm -rvf /usr/local/include/node/
c. Download the node.js 20.9.0 tarball on all UBA nodes:
wget https://nodejs.org/dist/v20.9.0/node-v20.9.0-linux-x64.tar.gz
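Optionally, verify the download integrity; nodejs.org publishes a SHASUMS256.txt file for each release (this check is a hedged addition, not part of the original workaround):
# Run in the directory where the tarball was downloaded.
wget https://nodejs.org/dist/v20.9.0/SHASUMS256.txt
grep node-v20.9.0-linux-x64.tar.gz SHASUMS256.txt | sha256sum -c -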
d. Extract the tarball to /usr/local on all UBA nodes:
sudo tar xvfz node-v20.9.0-linux-x64.tar.gz -C /usr/local --strip-components 1

e. Verify node and npm versions on all UBA nodes:

node -v
(expected output: v20.9.0)
npm -v
(expected output: 10.1.0)
f. Start UBA on the management node:
/opt/caspida/bin/Caspida start-all
2023-10-25 UBA-18034 ubasudoers file fails to allow caspida user access to postgresql service command

Workaround:
Run the following command to update the ubasudoers file:
sed -i 's/service postgresql \*,/service postgresql\*,/g' /opt/caspida/etc/sudoers.d/ubasudoers
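To confirm the change took effect (a hedged verification step), check that the entry now reads "service postgresql*" with no space before the wildcard:
grep 'service postgresql' /opt/caspida/etc/sudoers.d/ubasudoers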
2023-10-25 UBA-18039 Unable to install UBA with 2 networking interfaces

Workaround:
1) If you encounter the following error while executing the Caspida setup, follow the steps below:

Error:

	Tue Oct 17 16:37:04 UTC 2023: adding worker node: <uba-hostname>
	Tue Oct 17 16:37:04 UTC 2023: joining worker: <uba-hostname>, cmd=sudo  kubeadm join <uba-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash>  --cri-socket=unix:///var/run/cri-dockerd.sock --ignore-preflight-errors Swap
	Tue Oct 17 16:42:04 UTC 2023: Failed to add worker: uba-hostname2.local, exiting

Steps:

a) Manually generate the token from the management node:

sudo kubeadm token create --print-join-command

b) Run the command from the second node with sudo and the necessary flags:

sudo <output of the first command> --cri-socket=unix:///var/run/cri-dockerd.sock --ignore-preflight-errors Swap --v=5

c) If you observe that the failure is due to "dial tcp <uba-ip>:6443: connect: no route to host", continue with the next step.

d) Open the 6443 port on all worker nodes:

sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp
sudo firewall-cmd --reload

e) Re-run the Caspida setup.

2) If you have not attached the second interface and have not run the Caspida setup yet, follow the steps below:

a) Build a server with a single interface.

b) Ensure that the interface is associated with the public firewall zone.

c) Install Splunk UBA following Splunk documentation using the node names that resolve to an IP address on this attached interface.

d) Perform post-installation sync.

e) Stop all UBA services.

f) Create a new firewall zone named "control-plane".

g) Add the new interface and associate it with the "control-plane" zone.

h) Add inbound firewall rules permitting SSH and HTTPS. (A hedged firewall-cmd sketch of steps f through h follows this list.)

i) Start all UBA services.
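
The following firewall-cmd sketch illustrates steps f through h, assuming firewalld manages the firewall; the interface name ens192 is a placeholder for your second interface:

sudo firewall-cmd --permanent --new-zone=control-plane
sudo firewall-cmd --permanent --zone=control-plane --change-interface=ens192
sudo firewall-cmd --permanent --zone=control-plane --add-service=ssh
sudo firewall-cmd --permanent --zone=control-plane --add-service=https
sudo firewall-cmd --reload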

2023-09-08 UBA-17849 Multiple offline rules are failing after 5.3.0 upgrade due to AnalysisException: Could not resolve column/field reference

Workaround:
If multiple offline and real-time rules are failing with "AnalysisException: Could not resolve column/field reference: 'count'" or similar exceptions after upgrading to UBA 5.3.0, run the following commands to refresh and reinstall the rules:

Run the following query from the management node (for 1-, 3-, 5-, 7-, and 10-node deployments) or from the second node (for 20- and 20XL-node deployments):

psql -d caspidadb -c "UPDATE rulepackages SET version = 0 WHERE namespace LIKE 'secteam.%';"

From the management node, run the following command:

curl -w "\n" -Ssk -H "Authorization: Bearer $(grep '^\s*jobmanager.restServer.auth.user.token=' /opt/caspida/conf/uba-default.properties | cut -d'=' -f2)" -m 60 -X "POST" "https://localhost:9002/rules/installSystemPackages"

Note: There is no need to restart any UBA services.
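To confirm the reinstall ran, you can re-check the package versions in the rulepackages table (a hedged verification step; the versions are expected to be nonzero again once the packages are reinstalled):

psql -d caspidadb -c "SELECT namespace, version FROM rulepackages WHERE namespace LIKE 'secteam.%';"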

2023-08-14 UBA-17734 UBA 5.3.0 Ubuntu deployments contain Log4j in a dangling Docker image layer

Workaround:
On UBA 5.3.0 Ubuntu deployments, security scans may detect a log4j jar at /var/vcap/store/docker/aufs/diff/<Image Layer ID>/usr/lib/impala/lib/log4j-1.2.17.jar.

You can safely resolve this by removing the entire /var/vcap/store/docker/aufs/diff/<Image Layer ID> directory, since it does not correspond to any Docker image, container, or volume currently used in UBA 5.3.0.
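A hedged example of locating and removing the dangling layer; the layer ID is environment-specific, so substitute the directory reported by your scanner:

# Locate the layer directory that contains the flagged jar.
sudo find /var/vcap/store/docker/aufs/diff -name 'log4j-1.2.17.jar'
# Remove that layer directory; <Image Layer ID> is the ID from the find output.
sudo rm -rf /var/vcap/store/docker/aufs/diff/<Image Layer ID>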

2023-07-27 UBA-17641, UBA-17602 Upgrade Error: Job for redis-server.service failed because the service did not take the steps required by its unit configuration

Workaround:
On all nodes in the cluster, run the following command to update the redis.conf to allow the configure steps to complete:
sudo sed -i 's/^supervised no$/supervised auto/g' /etc/redis/redis.conf
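You can confirm the change with a quick grep (a hedged verification step); the line should now read "supervised auto":
grep '^supervised' /etc/redis/redis.conf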
2023-01-31 UBA-16886 Kubelet unable to fetch container log stats for inactive pods

Workaround:
This issue is caused by a bug in cri-dockerd for which no fixed release is available yet. A permanent workaround will be released in UBA 5.4.0. In the meantime, use the temporary workaround below.


1. SSH to the UBA management node as the caspida user.

2. Take a backup of the ContainerizationCleanup.sh script:

cp /opt/caspida/containerization/bin/ContainerizationCleanup.sh /opt/caspida/containerization/bin/ContainerizationCleanup.sh.ORIG

3. Insert the following code snippet at the end of the ContainerizationCleanup.sh script, immediately before its last line (echo "$(date): $0: DONE"):

#############
# Check each container node for broken /var/log/pods symlinks and delete any found.
for node in "${!uniqHostsArr[@]}"
do
  echo "$(date): Check broken symlink of pods-log at Container Node : ${node}"
  ssh ${node} "(
    for i in \$(sudo find /var/log/pods -type l)
    do
      LOGFILE=\$(sudo ls -l \${i} | awk -F '->' '{print \$2}');
      if sudo test -f \$LOGFILE; then
         echo \"\$(date): INFO - symlink of pod-log is OK. symlink=\${i}\";
      else
         echo \"\$(date): WARN - symlink of pod-log is BROKEN, so deleting it. symlink=\${i}\";
         sudo rm \${i};
      fi
    done
  )"
done
#############

4. Run the following sync-cluster command:

/opt/caspida/bin/Caspida sync-cluster /opt/caspida/containerization/bin/
2022-12-22 UBA-16722 Error in upgrade log, /bin/bash: which: line 1: syntax error: unexpected end of file
2022-12-05 UBA-16617 Repeated Kafka warning message "Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order"

Workaround:
1) On the ZooKeeper node (typically node 2 in a multi-node deployment), find all leader-epoch-checkpoint files:
locate leader-epoch-checkpoint
(you can also use a find command if locate isn't available)

a) Copy the results into a script, adding ">" before each path (or generate the script with the hedged one-liner shown after step b). For example:

#!/bin/bash
> /var/vcap/store/kafka/AnalyticsTopic-0/leader-epoch-checkpoint
> /var/vcap/store/kafka/AnalyticsTopic-1/leader-epoch-checkpoint
> /var/vcap/store/kafka/AnalyticsTopic-10/leader-epoch-checkpoint
> /var/vcap/store/kafka/AnalyticsTopic-11/leader-epoch-checkpoint
...
b) Make script executable:
chmod +x <script name>.sh
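Alternatively, a hedged one-liner can generate the script, assuming the checkpoint files live under /var/vcap/store/kafka as in the example above (truncate_checkpoints.sh is a hypothetical file name):

{ echo '#!/bin/bash'; sudo find /var/vcap/store/kafka -name leader-epoch-checkpoint | sed 's/^/> /'; } > truncate_checkpoints.sh
chmod +x truncate_checkpoints.sh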
2) On node 1, run:
/opt/caspida/bin/Caspida stop-all
3) On the ZooKeeper node, run:
./<script name>.sh
4) On node 1, run:
/opt/caspida/bin/Caspida start-all
5) Check the logs to see if the warning messages still appear on the ZooKeeper node:
tail -f /var/vcap/sys/log/kafka/server.log

6) If you see the following warning repeated:

WARN Resetting first dirty offset of __consumer_offsets-17 to log start offset 3346 since the checkpointed offset 3332 is invalid. (kafka.log.LogCleanerManager$)
a) Clear the cleaner-offset-checkpoint file on the ZooKeeper node by running:
> /var/vcap/store/kafka/cleaner-offset-checkpoint
b) Then on node 1, run:
/opt/caspida/bin/Caspida stop-all && /opt/caspida/bin/Caspida start-all
2022-07-26 UBA-15997 Benign error messages on CaspidaCleanup: Relations do not exist, Kafka topic does not exist on ZK path
2022-06-22 UBA-15882 Benign Spark error message: Could not find CoarseGrainedScheduler in spark-local.log when upgrading UBA
2022-02-14 UBA-15364 Spark HistoryServer running out of memory for large deployments with error: "java.lang.OutOfMemoryError: GC overhead limit exceeded"

Workaround:
Open the following file to edit on the Spark History Server: /var/vcap/packages/spark/conf/spark-env.sh

You can check the spark.history field in deployments.conf to find out which node runs the Spark History Server.

Update the following setting to 3G: SPARK_DAEMON_MEMORY=3G
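A hedged sed example of that edit, assuming SPARK_DAEMON_MEMORY is already set in the file (if the line is absent, append it instead):

sudo sed -i 's/^SPARK_DAEMON_MEMORY=.*/SPARK_DAEMON_MEMORY=3G/' /var/vcap/packages/spark/conf/spark-env.sh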

Afterwards, restart the Spark services:

/opt/caspida/bin/Caspida stop-spark && /opt/caspida/bin/Caspida start-spark
2021-08-30 UBA-14755 Replication.err logging multiple errors - Cannot delete snapshot s_new from path /user: the snapshot does not exist.
2020-04-07 UBA-13804 Kubernetes certificates expire after one year

Workaround:
Run the following commands on the Splunk UBA master node:
/opt/caspida/bin/Caspida remove-containerization
/opt/caspida/bin/Caspida setup-containerization
/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
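To check how long the renewed certificates are valid (a hedged addition; the exact subcommand depends on your kubeadm version):
sudo kubeadm certs check-expiration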
2019-10-07 UBA-13227 Backend anomaly and custom model names are displayed in Splunk UBA

Workaround:
Click the reload button in the web browser to force reload the UI page.
2019-08-06 UBA-12910 Splunk Direct - Cloud Storage does not expose src_ip field

Workaround:
When ingesting Office 365 SharePoint/OneDrive logs through Splunk Direct - Cloud Storage, add an additional field mapping for src_ip in the final SPL so that it is mapped from ClientIP (| eval src_ip=ClientIP). Make sure to include src_ip in the final list of fields selected with the fields command. For example:
| fields app,change_type,dest_user,file_hash,file_size,object,object_path,object_type,parent_category,parent_hash,sourcetype,src_user,tag,src_ip
Last modified on 18 April, 2024

This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.3.0

