Known issues in Splunk UBA
This version of Splunk UBA has the following known issues and workarounds.
If no issues are listed, none have been reported.
Date filed | Issue number | Description |
---|---|---|
2024-09-24 | UBA-19440 | Upgrading to 5.4.1 breaks data sources. Workaround: <br>1. Generate a new health check to capture the existing data source information, in case the data sources need to be manually recreated. <br>2. Upgrade to UBA 5.4.1 following the public documentation. <br>3. If the Data Sources page is inaccessible after the upgrade, continue with the steps below. <br>4. Run the following script on the command line on the PostgreSQL node to identify the data source(s) with corrupted connect strings: <br>`#!/bin/bash` <br>`result=$(psql -d caspidadb -t -c "select id, name, connectString from datasources")` <br>`while read -r result_line; do` <br>`IFS="\|" read -r salt name value <<< "$result_line"` <br>`if [ -n "$salt" ] && [ -n "$name" ] && [ -n "$value" ]; then` <br>`saltValue=$(echo "$salt" \| xargs)` <br>`value=$(echo "$value" \| xargs)` <br>`name=$(echo "$name" \| xargs)` <br>`decryptedValue=$(java -cp /opt/caspida/lib/CaspidaSecurity.jar com.caspida.common.tools.CryptoUtil -d "$saltValue" "$value")` <br>`echo "$saltValue $name $decryptedValue"` <br>`fi` <br>`done <<< "$result"` <br>5. From the output, find and take note of the ID of each corrupted data source. <br>6. Delete the data source(s) using an API call from the UBA management node, replacing "XXXXXXXXXXX" with the ID found in step 5: <br>`curl -Ssk -X DELETE https://localhost:9002/datasources/delete?id=XXXXXXXXXXX -H "Authorization: Bearer $(grep '^\s*jobmanager.restServer.auth.user.token=' /opt/caspida/conf/uba-default.properties \| cut -d'=' -f2)"` <br>7. Manually recreate the data source(s) that were deleted. |
2024-09-12 | UBA-19395 | The False Positive Suppression Model tags the same anomalies as "False Positive" multiple times. Workaround: The False Positive Suppression Model introduced in Splunk UBA version 5.4.0 redundantly tags anomalies as false positives, even when the model already marked them as such the previous day. Remove these redundant tags by running the following command: <br>`psql -d caspidadb -c "UPDATE anomalies SET tagids = (SELECT ARRAY(SELECT DISTINCT unnest(tagids))) WHERE tagids IS NOT NULL;"` <br>This command can also be added to a cron job to run automatically every day at 8 AM. |
2024-09-09 | UBA-19388 | The monthly pg cleanup scripts don't run as expected to perform the periodic cleanup of incremental backups and PostgreSQL files. Workaround: These two scripts may require modifications to match the environment: <br>`/opt/caspida/etc/cron.monthly/remove_pg_logs` <br>`/opt/caspida/etc/cron.monthly/remove_pg_walarchives` <br>Perform periodic cleanup of the backup files per the Splunk documentation. <br>`remove_pg_walarchives`: This script has the WAL_ARCHIVE path hardcoded to "/backup/wal_archive". It may require modification to match the name of the third disk used for backups. Relevant PostgreSQL archiving settings: <br>`archive_mode \| on` <br>`wal_level \| logical` <br>`wal_compression \| off` <br>`max_standby_archive_delay \| 30s` <br>`archive_timeout \| 0` <br>`archive_command \| test ! -f /ubabackup/wal_archive/%f && cp %p /ubabackup/wal_archive/%f` <br>Relevant UBA backup settings: <br>`backup.filesystem.enabled=true` <br>`backup.filesystem.full.interval=7d` <br>`backup.filesystem.directory.restore=/ubabackup` <br>`remove_pg_logs`: This script may require modifications to remove the PostgreSQL logs in `/var/log/postgresql/`. |
2024-08-16 | UBA-19309 | Custom models created by cloning a cloned custom model sometimes do not work |
2024-04-30 | UBA-18862 | Error Encountered When Cloning Splunk Datasource and Selecting Source Types Workaround: Re-enter the password on the Connection page for the Splunk endpoint. |
2024-04-26 | UBA-18851 | Benign Error Message on Caspida start - Ncat: Connection Refused |
2024-04-25 | UBA-18846, UBA-18920 | Inconsistent telemetry status when UBA is in FIPS mode |
2024-04-03 | UBA-18721 | UBA identifies end user or service accounts as accessing hard disk volumes, instead of the built-in computer account. Workaround: Disable the augmented_access rule: <br>1. Remove (or move to a location outside of UBA, as a backup) the file `/etc/caspida/conf/rules/user/ad/augmented_access.rule` <br>2. Sync the cluster: `/opt/caspida/bin/Caspida sync-cluster /etc/caspida/conf/rules/user/ad/` <br>3. Restart UBA: `/opt/caspida/bin/Caspida stop && /opt/caspida/bin/Caspida start` |
2022-12-22 | UBA-16722 | Error in upgrade log, /bin/bash: which: line 1: syntax error: unexpected end of file |
2022-06-22 | UBA-15882 | Benign Spark error message: Could not find CoarseGrainedScheduler in spark-local.log when upgrading UBA |
2022-02-14 | UBA-15364 | Spark HistoryServer runs out of memory on large deployments with the error "java.lang.OutOfMemoryError: GC overhead limit exceeded". Workaround: Open the following file for editing on the Spark History Server node: `/var/vcap/packages/spark/conf/spark-env.sh` <br>You can check the `spark.history` field in deployments.conf to find out which node runs the Spark History Server. Update the following setting to 3G: <br>Afterwards, restart the Spark services: `/opt/caspida/bin/Caspida stop-spark && /opt/caspida/bin/Caspida start-spark` |
2021-08-30 | UBA-14755 | Replication.err logging multiple errors - Cannot delete snapshot s_new from path /user: the snapshot does not exist. |
2020-04-07 | UBA-13804 | Kubernetes certificates expire after one year. Workaround: Run the following commands on the Splunk UBA master node: <br>`/opt/caspida/bin/Caspida remove-containerization` <br>`/opt/caspida/bin/Caspida setup-containerization` <br>`/opt/caspida/bin/Caspida stop-all` <br>`/opt/caspida/bin/Caspida start-all` |
2017-04-05 | UBA-6341 | Audit events show up in the UBA UI with 30 minute delay |
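The UBA-19395 workaround notes that the de-duplication command can run from cron every day at 8 AM. A crontab entry for that could look like the following; the file location and the `caspida` user are assumptions, so adjust them to whatever account can run `psql` against caspidadb on the PostgreSQL node.

```
# Hypothetical /etc/cron.d entry: de-duplicate anomaly tag IDs daily at 08:00.
# The user column ("caspida") is an assumption; adjust to your environment.
0 8 * * * caspida psql -d caspidadb -c "UPDATE anomalies SET tagids = (SELECT ARRAY(SELECT DISTINCT unnest(tagids))) WHERE tagids IS NOT NULL;"
```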
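The UBA-19440 workaround script parses the pipe-delimited output of `psql -t` row by row. The sketch below demonstrates that parsing loop in isolation: the `psql` query against caspidadb is mocked with sample output, and the CryptoUtil decryption call is omitted, so this is an illustration of the parsing logic rather than the full workaround.

```shell
# Sketch of the row-parsing loop from the UBA-19440 workaround script.
# Assumption: in the real script, $result comes from the live query
#   psql -d caspidadb -t -c "select id, name, connectString from datasources"
# Here it is mocked with sample psql-style output, and the CryptoUtil
# decryption step is left out.
result=' 101 | splunk-ds-1 | ENCRYPTEDVALUE1
 102 | splunk-ds-2 | ENCRYPTEDVALUE2'

output=""
while read -r result_line; do
  # psql -t rows are pipe-delimited: id | name | connectString
  IFS="|" read -r id name value <<< "$result_line"
  if [ -n "$id" ] && [ -n "$name" ] && [ -n "$value" ]; then
    id=$(echo "$id" | xargs)        # xargs trims the padding whitespace
    name=$(echo "$name" | xargs)
    value=$(echo "$value" | xargs)
    output="$output$id $name $value"$'\n'
  fi
done <<< "$result"

printf '%s' "$output"
```

The first field printed on each line is the data source ID, which is what step 6 of the workaround passes to the DELETE API call.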
This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.4.0