Splunk® User Behavior Analytics

Release Notes


Known Issues in Splunk UBA

This version of Splunk UBA has the following known issues and workarounds.

Date filed Issue number Description
2023-02-02 UBA-16909 "UBA_HADOOP_SIZE key is either missing or has no value" error encountered when uba-restore is run from a backup created with the --no-data option
2023-01-09 UBA-16774 Caspida start-all command should check/wait for all the job agents to be available before issuing the command to restart live datasources

Manually restart the data sources.
2022-11-17 UBA-16555 Assets Cache Update Query Does Not Support Multi-values Data and Causes Postgres Log Size Increase

The number of errors logged can be reduced by increasing the cache size, which avoids cache updates.
  1. Delete the contents of /var/vcap/sys/log/postgresql/
  2. Increase the cache size setting identity.resolution.assetscache.capacity in the file uba-env.properties
  3. Sync the configuration file: /opt/caspida/bin/Caspida sync-cluster
  4. Restart UBA with /opt/caspida/bin/Caspida stop-all and /opt/caspida/bin/Caspida start-all
  5. Check the cache usage using the command irscan -R -m
  6. If the cache is nearly 95% used, increase the setting from step 2 again and re-check the usage.
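Step 2 can be scripted. A minimal sketch against a sample copy of the properties file; the file name, the starting value 200000, and the new value 400000 are illustrative assumptions, not values from this topic. Size the cache for your own environment, then sync and restart as in steps 3 and 4.

```shell
# Illustrative only: bump the assets cache capacity in a sample copy of
# uba-env.properties. The 200000/400000 values are assumptions.
conf=uba-env.properties.sample
printf 'identity.resolution.assetscache.capacity=200000\n' > "$conf"
sed -i 's/^\(identity\.resolution\.assetscache\.capacity=\).*/\1400000/' "$conf"
grep capacity "$conf"
```

After editing the real file, sync the cluster and restart UBA as described in steps 3 and 4.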

2022-09-06 UBA-16289 7-node UBA deployment has an invalid value for system.messaging.rawdatatopic.retention.time in caspidatunables-7_node.conf

SSH into the management node as the caspida user.

1. Edit caspidatunables-7_node.conf in both of the following locations and correct the system.messaging.rawdatatopic.retention.time value:

/etc/caspida/local/conf/deployment/
/opt/caspida/conf/deployment/recipes/caspida/

2. Sync the cluster:

/opt/caspida/bin/Caspida sync-cluster /etc/caspida/local/conf/deployment/
/opt/caspida/bin/Caspida sync-cluster /opt/caspida/conf/deployment/recipes/caspida/

3. Restart the cluster:

/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
2022-08-23 UBA-16206 Problem with SSO settings EntityID and Issuer in UI
2022-07-27 UBA-16003 Redis error "Waiting for the cluster to join..." during 20-Node HA/DR sync between Primary and Standby
2022-06-21 UBA-15871 UI will error out when trying to edit the name of an output connector without retyping the password "EVP_DecryptFinal_ex:bad decrypt"

When editing the name of an output connector, also retype the password of the Splunk instance to avoid the error.
2022-04-14 UBA-15607, UBA-14237 Unable to create Anomaly Table filter or AAR specifying filter for Specific Devices when specifying over 20 CIDRs (Permanent fix for UBA-14237)
2022-04-14 UBA-15608, UBA-14502 Exporting >4.3K Anomalies table results - crashes UBA UI (Permanent fix for UBA-14502)
2022-02-14 UBA-15364 Spark HistoryServer running out of memory for large deployments with error: "java.lang.OutOfMemoryError: GC overhead limit exceeded"

Open the following file to edit on the Spark History Server: /var/vcap/packages/spark/conf/spark-env.sh

You can check deployments.conf field spark.history to find out which node runs the Spark History Server.

Update the following setting to 3G: SPARK_DAEMON_MEMORY=3G

Afterwards, restart the Spark services:

/opt/caspida/bin/Caspida stop-spark && /opt/caspida/bin/Caspida start-spark
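The memory edit can be applied with sed. A sketch against a sample copy of the file (back up the real /var/vcap/packages/spark/conf/spark-env.sh first; the 1G starting value is an illustrative assumption):

```shell
# Illustrative: raise SPARK_DAEMON_MEMORY to 3G in a sample spark-env.sh copy.
# If the line is absent from your file, append it instead of substituting.
f=spark-env.sh.sample
printf 'SPARK_DAEMON_MEMORY=1G\n' > "$f"   # assumed starting value
sed -i 's/^SPARK_DAEMON_MEMORY=.*/SPARK_DAEMON_MEMORY=3G/' "$f"
grep SPARK_DAEMON_MEMORY "$f"
```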
2022-01-31 UBA-15328 Running replication setup on 20 node clusters fails with "psql: could not connect to server: Connection refused"

Contact Support for the revised replication setup scripts.
2022-01-25 UBA-15321 Upgrade script for ubuntu systems need revised commands to install external packages correctly

If the upgrade to UBA 5.0.5 failed in a lockdown environment with no internet connection, perform the following steps on the failed UBA node:
  1. Edit the file: /home/caspida/patch_uba_505/bin/utils/patch_uba.sh
  2. Replace the line ssh ${host} "${SUDOCMD} apt-get -y -f install ${ExtPkgsDir}/openjdk*.deb && exit" with ssh ${host} "${SUDOCMD} dpkg --force-confold --force-all --refuse-downgrade -i ${ExtPkgsDir}/openjdk*.deb && exit"
  3. Replace the line ${SUDOCMD} apt-get -y -f install ${ExtPkgsDir}/ca-certificates*.deb with ${SUDOCMD} dpkg --force-confnew --refuse-downgrade -i ${ExtPkgsDir}/ca-certificates*.deb
  4. Save the file.
  5. Run the upgrade script again.
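Steps 2 and 3 can also be performed with sed rather than hand-editing. A sketch against a sample file standing in for patch_uba.sh; verify the result before re-running the upgrade:

```shell
# Illustrative: apply the two line replacements with sed on a sample file
# standing in for /home/caspida/patch_uba_505/bin/utils/patch_uba.sh.
f=patch_uba.sh.sample
printf '%s\n%s\n' \
  'ssh ${host} "${SUDOCMD} apt-get -y -f install ${ExtPkgsDir}/openjdk*.deb && exit"' \
  '${SUDOCMD} apt-get -y -f install ${ExtPkgsDir}/ca-certificates*.deb' > "$f"
sed -i 's|apt-get -y -f install ${ExtPkgsDir}/openjdk\*.deb|dpkg --force-confold --force-all --refuse-downgrade -i ${ExtPkgsDir}/openjdk*.deb|' "$f"
sed -i 's|apt-get -y -f install ${ExtPkgsDir}/ca-certificates\*.deb|dpkg --force-confnew --refuse-downgrade -i ${ExtPkgsDir}/ca-certificates*.deb|' "$f"
grep dpkg "$f"
```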

2022-01-21 UBA-15311 Upgrade Standby cluster fails "ERROR: cannot execute UPDATE in a read-only transaction."

If the upgrade fails with the following error:
"ERROR:  cannot execute UPDATE in a read-only transaction."

Perform the following steps on the standby system:

  1. Run the following command to move database to read-write:
    psql -d caspidadb -c 'BEGIN; SET transaction read write; ALTER DATABASE caspidadb SET default_transaction_read_only = off; COMMIT'
    psql -d caspidadb -c 'DROP PUBLICATION IF EXISTS publication_caspida'
    psql -d caspidadb -c 'DROP SUBSCRIPTION IF EXISTS subscription_caspida'
  2. Run the /opt/caspida/bin/CaspidaCleanup command again.
  3. Run the following command to check the database version:
    psql -d caspidadb -c 'select * from dbinfo'
    Set the version if it is not set correctly.
  4. Run the upgrade script again.

2022-01-17 UBA-15302 "Error from /uba/watchlistChanged" even when an entity is added successfully to the WatchList
2021-12-16 UBA-15241 Threat number mismatch between UBA and ES
2021-12-06 UBA-15164 Download Diagnostics "Parsers" for multi-node misses /var/log/caspida/jobexecutor*
2021-11-18 UBA-15139 Postgresql-client-10 missing libpq5 (>=10.17)

libpq5 is not required for the operation of UBA, and no later version is available for Ubuntu 16. The warning can be suppressed by performing the following steps:
  1. Open and edit the file /var/lib/dpkg/status
  2. Scroll to the entry for postgresql-client-10 and look for the dependency list entry:
    Depends: libpq5 (>= 10.17), postgresql-client-common (>= 182~), sensible-utils, libc6 (>= 2.15), libedit2 (>= 2.11-20080614), zlib1g (>= 1:1.1.4)
  3. Remove the entry libpq5 (>= 10.17)
  4. Save and close the file
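The dependency entry can also be removed with sed. A sketch against a sample line (back up /var/lib/dpkg/status before editing the real file):

```shell
# Illustrative: strip "libpq5 (>= 10.17), " from the Depends line of a
# sample file standing in for /var/lib/dpkg/status.
f=status.sample
printf 'Depends: libpq5 (>= 10.17), postgresql-client-common (>= 182~), sensible-utils\n' > "$f"
sed -i 's/libpq5 (>= 10\.17), //' "$f"
cat "$f"
```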

2021-11-15 UBA-15130 Custom model not triggering

After editing and saving the custom models from the UBA UI, run the following command to sync the changes to other nodes in the cluster.

  1. SSH to Node1 with the "caspida" userid.
  2. Execute the command: /opt/caspida/bin/Caspida sync-cluster /etc/caspida/local/conf

2021-11-09 UBA-15120, UBA-15192 Customizations to Splunk_TA_nix/local/inputs.conf breaks patch_uba.sh

Any customizations to the Splunk_TA_nix/local/inputs.conf will need to be backed up temporarily prior to the upgrade, and then re-added after upgrade completes.
2021-10-14 UBA-14954, UBA-15198 Postgresql 10.17 missing libjson-perl

Prior to running the patch_uba.sh script, libjson-perl must be installed on all nodes in the customer environment. Perform the following steps:
  1. Log in to the management node as the caspida user in your Splunk UBA deployment.
  2. Run the following commands:
    sudo dpkg --force-confold --force-all -i /home/caspida/uba-ext-pkgs-5.0.5/postgresql*.deb
    sudo service postgresql stop
    sudo service postgresql start
    /opt/caspida/bin/Caspida stop-all
    /opt/caspida/bin/Caspida start-all

2021-10-11 UBA-14927, UBA-15186 UBA 5.0.5 upgrade script fails to upgrade forwarder to 8.2.1

Run the following command if you have Splunk forwarding disabled on a single node:
/opt/splunk/bin/splunk version --accept-license --answer-yes --no-prompt --seed-passwd caspida123

You must use the caspida123 password if you want to set up Splunk forwarding at a later time.

If you have Splunk forwarding enabled in a multi-node environment, perform the following tasks:

  1. Make sure that the ext-uba-pkg package from the management node is copied to /home/caspida on each node in your deployment and is untarred.
  2. Log in to the management node as the caspida user.
  3. Run the following command once for each node in your deployment. Replace $node with the actual name of each node.
    ssh $node "tar -C /opt -xzf /home/caspida/uba-ext-pkgs-5.0.5/splunk-8.2.1-x86_64.tgz && /opt/splunk/bin/splunk version --accept-license --answer-yes --no-prompt --seed-passwd caspida123"
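Step 3 can be wrapped in a loop. A sketch that writes the per-node commands to a file for review before running them; the node names node2 through node4 are assumptions, so substitute the actual names from your deployment:

```shell
# Illustrative: generate one forwarder-upgrade command per node into a review
# file. The node names are placeholders; edit the list for your cluster.
for node in node2 node3 node4; do
  echo "ssh $node 'tar -C /opt -xzf /home/caspida/uba-ext-pkgs-5.0.5/splunk-8.2.1-x86_64.tgz && /opt/splunk/bin/splunk version --accept-license --answer-yes --no-prompt --seed-passwd caspida123'"
done > upgrade_forwarders.sh
```

Review upgrade_forwarders.sh, then execute it with sh upgrade_forwarders.sh.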

2021-09-29 UBA-14894 UBA EPS drops after Splunk 8.2.1/8.2.2 upgrade on search heads used by data sources
2021-09-28 UBA-14890 ClassCastException errors in the LateralMovementDetection Model
2021-08-30 UBA-14755 Replication.err logging multiple errors - Cannot delete snapshot s_new from path /user: the snapshot does not exist.
2020-10-30 UBA-14287 Issue while deleting datasource referencing other UBA original primary cluster
2020-06-29 UBA-14199, UBA-12111 Impala jdbc connections leak

  1. Create a file containing the following script on node 1 in your Splunk UBA deployment (node 2 on a 20-node Splunk UBA deployment). For example, copy and paste the script to a new file in /etc/caspida/local/conf/impala_status_check.sh:
    #!/bin/bash
    # Trim this script's own log, then restart impala-server if more than 500
    # JDBC connections (port 21050) are established.
    log_file=/var/log/impala/impala_status.log
    now=$(date)
    if test -f "$log_file"; then
       tail -n 100 "$log_file" > /tmp/tmp_log_file.log
       mv /tmp/tmp_log_file.log "$log_file"
    fi
    connection_count=$(netstat -an | grep :21050 | grep ESTABLISHED | wc -l)
    if [ "$connection_count" -gt 500 ]; then
       echo "[$now] $connection_count impala connection(s), restarting impala"
       sudo service impala-server restart
       rc=$?
       if [ "$rc" -eq 0 ]; then
          echo "restart succeeded"
       else
          echo "restart failed. return code: $rc"
       fi
    else
       echo "[$now] $connection_count impala connection(s), status is good"
    fi
  2. Make the script executable:
    chmod +x /etc/caspida/local/conf/impala_status_check.sh
  3. Add the following line to cron using crontab -e:
    0 8 * * * /etc/caspida/local/conf/impala_status_check.sh >> /var/log/impala/impala_status.log 2>&1

2020-04-10 UBA-13810 CSV Export of 3000 or More Anomalies Fails
2020-04-07 UBA-13804 Kubernetes certificates expire after one year

Run the following commands on the Splunk UBA master node:
/opt/caspida/bin/Caspida remove-containerization
/opt/caspida/bin/Caspida setup-containerization
/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
2019-10-07 UBA-13227 Backend anomaly and custom model names are displayed in Splunk UBA

Click the reload button in the web browser to force reload the UI page.
2019-08-29 UBA-13020 Anomalies migrated from test-mode to active-mode won't be pushed to ES
2019-08-06 UBA-12910 Splunk Direct - Cloud Storage does not expose src_ip field

When ingesting Office 365 Sharepoint/OneDrive logs through Splunk Direct - Cloud Storage, add an additional field mapping for src_ip in the final SPL to be mapped from ClientIP (| eval src_ip=ClientIP). Make sure to add src_ip in the final list of fields selected using the fields command. For example:
| fields app,change_type,dest_user,file_hash,file_size,object,object_path,object_type,parent_category,parent_hash,sourcetype,src_user,tag,src_ip
Last modified on 10 March, 2023

This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.0.5
