Splunk® User Behavior Analytics

Release Notes




Known Issues in Splunk UBA

This version of Splunk UBA has the following known issues and workarounds.


Date filed Issue number Description
2023-01-09 UBA-16774 The Caspida start-all command should check and wait for all job agents to be available before issuing the command to restart live data sources

Workaround:
Manually restart the data sources.
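A minimal pre-check sketch, assuming the /opt/caspida/bin/Caspida status subcommand reports job agent state (verify both the subcommand output and the 'jobagent' pattern in your environment before relying on this):

    # Hypothetical: poll until the status output mentions the job agents, then start.
    until /opt/caspida/bin/Caspida status | grep -qi 'jobagent'; do
       echo "waiting for job agents..."; sleep 10
    done
    /opt/caspida/bin/Caspida start-all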
2022-09-06 UBA-16289 7-node UBA deployment has an invalid value for system.messaging.rawdatatopic.retention.time in caspidatunables-7_node.conf

Workaround:
SSH into the management node as the caspida user.

1. Edit the following two files:

/etc/caspida/local/conf/deployment/uba-tuning.properties
/opt/caspida/conf/deployment/recipes/caspida/caspidatunables-7_node.conf

In each file, correct the field system.messaging.rawdatatopic.retention.time to be 1d instead of 1dq (or use the sed sketch after step 3).

2. Sync the cluster:

/opt/caspida/bin/Caspida sync-cluster /etc/caspida/local/conf/deployment/
/opt/caspida/bin/Caspida sync-cluster /opt/caspida/conf/deployment/recipes/caspida/

3. Restart the cluster:

/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
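If you prefer not to edit the files by hand, a minimal sed sketch for step 1 (same two paths as above; the -i.bak flag keeps backup copies):

    sudo sed -i.bak '/system\.messaging\.rawdatatopic\.retention\.time/ s/1dq/1d/' \
      /etc/caspida/local/conf/deployment/uba-tuning.properties \
      /opt/caspida/conf/deployment/recipes/caspida/caspidatunables-7_node.conf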
2022-04-14 UBA-15607, UBA-14237 Unable to create an Anomaly Table filter or AAR filter for Specific Devices when specifying more than 20 CIDRs
2022-04-14 UBA-15608, UBA-14502 Exporting more than 4,300 Anomalies table results crashes the UBA UI (permanent fix for UBA-14502)
2022-02-14 UBA-15364 Spark HistoryServer running out of memory for large deployments with error: "java.lang.OutOfMemoryError: GC overhead limit exceeded"

Workaround:
Open the following file to edit on the Spark History Server: /var/vcap/packages/spark/conf/spark-env.sh

Check the spark.history field in deployments.conf to find out which node runs the Spark History Server.

Update the following setting to 3G: SPARK_DAEMON_MEMORY=3G

Afterwards, restart the Spark services:

/opt/caspida/bin/Caspida stop-spark && /opt/caspida/bin/Caspida start-spark
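To locate the Spark History Server node and confirm the change took effect, a small sketch (the deployments.conf location is an assumption; adjust the path for your install):

    # Find which node runs the Spark History Server (spark.history field):
    grep -r 'spark.history' /opt/caspida/conf/deployment/ 2>/dev/null
    # On that node, confirm the daemon memory setting:
    grep 'SPARK_DAEMON_MEMORY' /var/vcap/packages/spark/conf/spark-env.sh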
2022-01-25 UBA-15321 Upgrade script for Ubuntu systems needs revised commands to install external packages correctly

Workaround:
If the upgrade to UBA 5.0.5 failed in a locked-down environment with no internet connection, perform the following steps on the failed UBA node:
  1. Edit the file: /home/caspida/patch_uba_505/bin/utils/patch_uba.sh
  2. Replace the line ssh ${host} "${SUDOCMD} apt-get -y -f install ${ExtPkgsDir}/openjdk*.deb && exit" with ssh ${host} "${SUDOCMD} dpkg --force-confold --force-all --refuse-downgrade -i ${ExtPkgsDir}/openjdk*.deb && exit"
  3. Replace the line ${SUDOCMD} apt-get -y -f install ${ExtPkgsDir}/ca-certificates*.deb with ${SUDOCMD} dpkg --force-confnew --refuse-downgrade -i ${ExtPkgsDir}/ca-certificates*.deb
  4. Save the file.
  5. Run the upgrade script again.
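To confirm the replacements from steps 2 and 3 before re-running the upgrade, a quick check (a sketch; adjust if your patch directory differs):

    grep -n 'apt-get -y -f install' /home/caspida/patch_uba_505/bin/utils/patch_uba.sh \
      && echo 'apt-get install lines still present' \
      || echo 'OK: replacements applied'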

2021-10-27 UBA-15074 Warm/Standby environment not syncing

Workaround:
If the following error occurs while performing a manual sync between standby systems, perform the steps below to fix it:
Traceback (most recent call last):
  File "/opt/caspida/bin/replication/rpc.py", line 105, in <module>
    f(**kwargs)
  File "/opt/caspida/bin/replication/rpc.py", line 38, in clean
    stb = _createStandby(**kwargs)
  File "/opt/caspida/bin/replication/rpc.py", line 20, in _createStandby
    co = ucoordinator.Coordinator(mode="standby", **kwargs)
  File "/opt/caspida/bin/replication/coordinator.py", line 37, in __init__
    self.properties = self.loadProperty()
  File "/opt/caspida/bin/replication/coordinator.py", line 96, in loadProperty
    for line in fd:
  File "/usr/lib64/python3.6/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 58: ordinal not in range(128)

Steps to fix:

1. Check that the following environment properties in /etc/locale.conf are set to en_US.UTF-8:
LANG
LC_CTYPE
LC_ALL

2. If any of these properties is not set, set it to en_US.UTF-8 and then run source /etc/locale.conf.

3. If the issue persists after setting these properties, perform the following step:

Edit /opt/caspida/conf/uba-default.properties and replace the en dash in the copyright section with a hyphen (line 3, after the 59th character).
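A minimal sketch covering the checks in steps 1 and 3 (grep -P requires GNU grep with PCRE support):

    # Step 1: confirm the locale properties are set to en_US.UTF-8.
    grep -E '^(LANG|LC_CTYPE|LC_ALL)=' /etc/locale.conf
    # Step 3: locate any non-ASCII characters (such as the en dash) in the file.
    grep -nP '[^\x00-\x7F]' /opt/caspida/conf/uba-default.properties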

2021-09-29 UBA-14894 UBA EPS drops after Splunk 8.2.1/8.2.2 upgrade on search heads used by data sources
2021-09-28 UBA-14890 ClassCastException errors in the LateralMovementDetection Model
2021-08-30 UBA-14755 Replication.err logging multiple errors - Cannot delete snapshot s_new from path /user: the snapshot does not exist.
2021-08-12 UBA-14702 Custom Models: TimeSeries models have not executed since the upgrade to 5.0.4.1
2021-08-03 UBA-14675 Error when upgrading the standby system

Workaround:
If the upgrade fails with the following error:
"ERROR:  cannot execute UPDATE in a read-only transaction."

Perform the following steps on the standby system:

  1. Run the following commands to move the database to read-write (see the check after these steps):
    psql -d caspidadb -c 'BEGIN; SET transaction read write; ALTER DATABASE caspidadb SET default_transaction_read_only = off; COMMIT'
    psql -d caspidadb -c 'DROP PUBLICATION IF EXISTS publication_caspida'
    psql -d caspidadb -c 'DROP SUBSCRIPTION IF EXISTS subscription_caspida'
  2. Run the /opt/caspida/bin/CaspidaCleanup command again.
  3. Run the following command to check the database version:
    psql -d caspidadb -c 'select * from dbinfo'
    Set the version if it is not set correctly.
  4. Run the upgrade to 5.0.4 script again.
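Before re-running the upgrade, you can confirm the database is no longer read-only with a standard PostgreSQL check; it should return off:

    psql -d caspidadb -c 'SHOW default_transaction_read_only'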

If the replication setup fails with the following error:

subprocess.CalledProcessError: Command 'psql -t -c "DROP SUBSCRIPTION IF EXISTS subscription_caspida" -d caspidadb -h gsoc-rdc-p-ue-srv01' returned non-zero exit status 1.

Perform the following steps:

  1. Run the following command on both the primary and standby systems:
    /opt/caspida/bin/Caspida stop
  2. On the primary system, run the following commands:
    psql -d caspidadb -c "ALTER SUBSCRIPTION subscription_caspida disable"
    psql -d caspidadb -c "ALTER SUBSCRIPTION subscription_caspida SET (slot_name = NONE)"
    psql -d caspidadb -c "DROP SUBSCRIPTION subscription_caspida"
  3. Run the following command on both the primary and standby systems to make sure "subscription_caspida" does not exist on either system:
    psql -d caspidadb -c "select * from pg_subscription where subname='subscription_caspida'"
  4. On the primary system, run the following command:
    /opt/caspida/bin/replication/setup -d standby -m primary -r
  5. On the standby system, run the following command:
    /opt/caspida/bin/replication/setup -d standby -m standby -r
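As a sanity check after the setup commands complete, the standard PostgreSQL replication view on the primary system shows whether the standby has reconnected:

    psql -d caspidadb -c 'select * from pg_stat_replication'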

2021-07-30 UBA-14669 UBA - Multiple Errors Status code DS-1, Status code DS-5, Status code DS_LAGGING_WARN
2021-05-04 UBA-14516 Health Monitor - An error occurred while retrieving data - Error from /uba/monitor Invalid Json response: Error in getting the response Parameters: {"queryStatus":true,"queryDataQualityStatus":true}

Workaround:
  1. Stop all Splunk UBA services on node 1:
    /opt/caspida/bin/Caspida stop-all
  2. On each Splunk UBA node, edit the java.security file at
    /usr/lib/jvm/java-*/jre/lib/security/java.security
    and remove TLSv1 and TLSv1.1 from the jdk.tls.disabledAlgorithms property (see the grep sketch after these steps).
  3. The folder under /usr/lib/jvm varies by environment. For example, on Ubuntu the absolute path to the java.security file is
    /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security/java.security
  4. Start all Splunk UBA services on node 1:
    /opt/caspida/bin/Caspida start-all
  5. Verify there are no more errors in the UI.
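To see the current value on each node before editing, a quick check (the java-* glob covers the per-environment folder noted in step 3):

    grep -n 'jdk.tls.disabledAlgorithms' /usr/lib/jvm/java-*/jre/lib/security/java.security

Note that the property usually spans several backslash-continued lines; remove only the TLSv1 and TLSv1.1 entries and keep the rest intact.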

2021-04-27 UBA-14508 Unable to add datasource (Connection time out) - UBA to Cloud connectivity via proxy

Workaround:
Contact Splunk support.
2021-01-11 UBA-14379 Discrepancy between Threats and notable events in ES

Workaround:
As a temporary workaround, edit the "UEBA Threat Detected" correlation search in ES and remove '| search uba_threat_status != closed'.
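For illustration only (the full correlation search varies by ES version), the change removes the trailing status filter:

    ... | search uba_threat_status != closed

Omitting that clause means threats are no longer filtered out by status, so the notable event counts line up with the threats shown in UBA.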



2020-10-30 UBA-14287, UBA-17142 Issue when deleting a data source that references the original primary UBA cluster
2020-06-29 UBA-14199, UBA-12111 Impala JDBC connections leak

Workaround:
  1. Create a file containing the following script on node 1 in your Splunk UBA deployment (node 2 on a 20-node Splunk UBA deployment). For example, copy and paste the script to a new file in /etc/caspida/local/conf/impala_status_check.sh:
    #!/bin/bash
    # Trim the status log (passed as $1) to its last 100 lines so it does not
    # grow without bound, then restart Impala if too many connections are open.
    log_file=$1
    if test -f "$log_file"; then
       tail -n 100 $log_file > /tmp/tmp_log_file.log
       mv /tmp/tmp_log_file.log $log_file
    fi
    # Count established client connections on the Impala JDBC port (21050).
    connection_count=$(netstat -an | grep :21050 | grep ESTABLISHED | wc -l)
    now=$(date)
    if [ "$connection_count" -gt 500 ]; then
       echo "[$now] $connection_count impala connection(s), restarting impala"
       sudo service impala-server restart
       rc=$?   # capture the exit code now; $? is overwritten by the next command
       if [ $rc -eq 0 ]; then
          echo "restart succeeded"
       else
          echo "restart failed. return code: $rc"
       fi
    else
       echo "[$now] $connection_count impala connection(s), status is good"
    fi
    
  2. Make the script executable:
    chmod +x /etc/caspida/local/conf/impala_status_check.sh
    
  3. Add the following line to cron using crontab -e:
    0 8 * * * /etc/caspida/local/conf/impala_status_check.sh >> /var/log/impala/impala_status.log 2>&1
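Before relying on cron, you can run the script once by hand with the same log path the cron entry uses, then inspect the output:

    /etc/caspida/local/conf/impala_status_check.sh /var/log/impala/impala_status.log >> /var/log/impala/impala_status.log 2>&1
    tail /var/log/impala/impala_status.log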
    

2020-04-10 UBA-13810 CSV Export of 3000 or More Anomalies Fails
2020-04-07 UBA-13804 Kubernetes certificates expire after one year

Workaround:
Run the following commands on the Splunk UBA master node:
/opt/caspida/bin/Caspida remove-containerization
/opt/caspida/bin/Caspida setup-containerization
/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
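To see when the current certificates expire before and after the reset, a sketch assuming the Kubernetes API server listens on the standard port 6443 (adjust if your deployment differs):

    echo | openssl s_client -connect localhost:6443 2>/dev/null | openssl x509 -noout -enddate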
2019-10-07 UBA-13227 Backend anomaly and custom model names are displayed in Splunk UBA

Workaround:
Click the reload button in the web browser to force reload the UI page.
2019-08-29 UBA-13020 Anomalies migrated from test mode to active mode are not pushed to ES
2019-08-06 UBA-12910 Splunk Direct - Cloud Storage does not expose src_ip field

Workaround:
When ingesting Office 365 SharePoint/OneDrive logs through Splunk Direct - Cloud Storage, add an additional field mapping for src_ip in the final SPL, mapped from ClientIP (| eval src_ip=ClientIP). Make sure to add src_ip to the final list of fields selected using the fields command. For example:
| fields app,change_type,dest_user,file_hash,file_size,object,object_path,object_type,parent_category,parent_hash,sourcetype,src_user,tag,src_ip
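Putting both pieces together, the tail of the final SPL would look like the following sketch (the search preceding the eval is whatever you already use for this data source):

    ... | eval src_ip=ClientIP
    | fields app,change_type,dest_user,file_hash,file_size,object,object_path,object_type,parent_category,parent_hash,sourcetype,src_user,tag,src_ip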
Last modified on 11 July, 2023
This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.0.4.1

