Splunk® User Behavior Analytics

Release Notes

Known issues in Splunk UBA

This version of Splunk UBA has the following known issues and workarounds.

If no issues are listed, none have been reported.


Date filed Issue number Description
2024-09-24 UBA-19440 Upgrading to 5.4.1 breaks data sources

Workaround:
1. Before upgrading, ask the customer to generate a new health check to capture the current datasource information, in case any datasources need to be manually recreated.

2. Have the customer upgrade to UBA 5.4.1 following the public documentation.

3. If the customer encounters the issue and the datasources page is inaccessible, continue with the steps below.

4. Run the following script on the command line of the PostgreSQL node to identify any datasources that have a corrupted connectString:

#!/bin/bash
# List every datasource, then decrypt each connectString so corrupted
# entries stand out in the output. Run on the PostgreSQL node.
result=$(psql -d caspidadb -t -c "select id, name, connectString from datasources")

while read -r result_line; do
  # psql -t output is pipe-delimited: id | name | connectString
  IFS="|" read -r salt name value <<< "$result_line"
  if [ -n "$salt" ] && [ -n "$name" ] && [ -n "$value" ]; then
    saltValue=$(echo "$salt" | xargs)   # xargs trims surrounding whitespace
    value=$(echo "$value" | xargs)
    name=$(echo "$name" | xargs)
    # The datasource id is passed to CryptoUtil as the decryption salt
    decryptedValue=$(java -cp /opt/caspida/lib/CaspidaSecurity.jar com.caspida.common.tools.CryptoUtil -d "$saltValue" "$value")
    echo "$saltValue" "$name" "$decryptedValue"
  fi
done <<< "$result"

5. From the output, find and note the dataSourceId values that have a corrupted connectString, as shown in the following example:

[Screenshot: example script output showing datasource IDs with corrupted connectString values]

6. Delete the datasource(s) using the following API call from the UBA management node, replacing "XXXXXXXXXXX" with the dataSourceId found in step 5:

curl -Ssk -X DELETE "https://localhost:9002/datasources/delete?id=XXXXXXXXXXX" -H "Authorization: Bearer $(grep '^\s*jobmanager.restServer.auth.user.token=' /opt/caspida/conf/uba-default.properties | cut -d'=' -f2)"
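The Authorization header above is built by extracting the REST token from uba-default.properties. The grep/cut pipeline can be tried safely against a sample file first (the /tmp path and token value below are illustrative, not real values):

```shell
# Sample file standing in for /opt/caspida/conf/uba-default.properties
# (illustrative contents and token only).
cat > /tmp/uba-sample.properties <<'EOF'
# other properties omitted
jobmanager.restServer.auth.user.token=abc123TOKEN
EOF

# Same extraction pipeline as in the curl command above
token=$(grep '^\s*jobmanager.restServer.auth.user.token=' /tmp/uba-sample.properties | cut -d'=' -f2)
echo "$token"
```

If the echoed value is empty, check that the property line in the real file is not commented out.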

7. Manually recreate the datasource(s) that were deleted.

2024-09-12 UBA-19395 False Positive Suppression Model tags the same anomalies as "False Positive" multiple times

Workaround:
The False Positive Suppression Model introduced in Splunk UBA version 5.4.0 redundantly tags anomalies as false positives, even when they have already been marked as such by the model the previous day. Users can remove these redundant tags by running the following command:
psql -d caspidadb -c "UPDATE anomalies SET tagids = (SELECT ARRAY(SELECT DISTINCT unnest(tagids))) WHERE tagids IS NOT NULL;" 

This command can also be added to a cron job to run automatically every day at 8 AM.
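For example, a crontab entry along these lines runs the cleanup daily at 8 AM (the schedule and the assumption that the cron user can reach caspidadb through psql are illustrative):

```
0 8 * * * psql -d caspidadb -c "UPDATE anomalies SET tagids = (SELECT ARRAY(SELECT DISTINCT unnest(tagids))) WHERE tagids IS NOT NULL;"
```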

2024-08-22 UBA-19329 PII masking doesn't work with "Export to SplunkES"
2024-08-16 UBA-19309 Custom models created by cloning a cloned custom model sometimes do not work
2024-08-01 UBA-19266 Anomaly Action Rules are triggered when edited, even when disabled
2024-04-30 UBA-18862 Error Encountered When Cloning Splunk Datasource and Selecting Source Types

Workaround:
Re-enter the password on the Connection page for the Splunk endpoint.
2024-04-26 UBA-18851 Benign Error Message on Caspida start - Ncat: Connection Refused
2024-04-03 UBA-18721 UBA identifies end user/service accounts as accessing hard disk volumes instead of the built-in computer account

Workaround:
Disable the augmented_access rule.

Steps to disable rule:

1. Remove the file /etc/caspida/conf/rules/user/ad/augmented_access.rule (or move it to a location outside of UBA to keep a backup).

2. Sync the cluster: /opt/caspida/bin/Caspida sync-cluster /etc/caspida/conf/rules/user/ad/

3. Restart UBA: /opt/caspida/bin/Caspida stop && /opt/caspida/bin/Caspida start

2022-12-22 UBA-16722 Error in upgrade log, /bin/bash: which: line 1: syntax error: unexpected end of file
2022-06-22 UBA-15882 Benign Spark error message: Could not find CoarseGrainedScheduler in spark-local.log when upgrading UBA
2022-02-14 UBA-15364 Spark HistoryServer running out of memory for large deployments with error: "java.lang.OutOfMemoryError: GC overhead limit exceeded"

Workaround:
Open the following file to edit on the Spark History Server: /var/vcap/packages/spark/conf/spark-env.sh

You can check deployments.conf field spark.history to find out which node runs the Spark History Server.

Update the following setting to 3G: SPARK_DAEMON_MEMORY=3G
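This edit can also be scripted. The sketch below exercises the same substitution against a temporary copy of the file (the /tmp path and starting value are illustrative; the real file is /var/vcap/packages/spark/conf/spark-env.sh):

```shell
conf=/tmp/spark-env.sh
# Stand-in for the real spark-env.sh (illustrative contents only)
echo 'SPARK_DAEMON_MEMORY=1G' > "$conf"

# Raise the Spark History Server daemon heap to 3G
sed -i 's/^SPARK_DAEMON_MEMORY=.*/SPARK_DAEMON_MEMORY=3G/' "$conf"
grep '^SPARK_DAEMON_MEMORY=' "$conf"
```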

Afterwards, restart the Spark services:

/opt/caspida/bin/Caspida stop-spark && /opt/caspida/bin/Caspida start-spark
2021-08-30 UBA-14755 Replication.err logging multiple errors - Cannot delete snapshot s_new from path /user: the snapshot does not exist.
2020-04-07 UBA-13804 Kubernetes certificates expire after one year

Workaround:
Run the following commands on the Splunk UBA master node:
/opt/caspida/bin/Caspida remove-containerization
/opt/caspida/bin/Caspida setup-containerization
/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
2017-04-05 UBA-6341 Audit events show up in the UBA UI with 30 minute delay

Last modified on 19 November, 2024

This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.4.1

