Splunk® User Behavior Analytics

Release Notes

This documentation does not apply to the most recent version of Splunk® User Behavior Analytics. For documentation on the most recent version, go to the latest release.

Known issues in Splunk UBA

This version of Splunk UBA has the following known issues and workarounds.


Date filed Issue number Description
2023-06-08 UBA-17446 After applying Ubuntu security patches, PostgreSQL is removed, leaving UBA unable to start

Workaround:
Stop all UBA services:
/opt/caspida/bin/Caspida stop-all

Reinstall the PostgreSQL packages, replacing <Extracted uba external package folder> in the command below with your extracted package folder. For example, for 5.0.5 it is uba-ext-pkgs-5.0.5:

sudo dpkg --force-confold --force-all -i /home/caspida/<Extracted uba external package folder>/postgresql*.deb

Start all UBA services:

/opt/caspida/bin/Caspida start-all
2023-04-20 UBA-17188 Restricted sudo access for caspida: ubasudoers file is missing permissions

Workaround:
Run the following commands to add the missing entries to the ubasudoers file and sync it across the cluster:
sed -i '120i\           /usr/sbin/service cri-docker *, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '130i\           /sbin/service cri-docker *, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '135i\           /bin/systemctl start kubelet.service, /usr/bin/systemctl start kubelet.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '135i\           /bin/systemctl restart kubelet.service, /usr/bin/systemctl restart kubelet.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '135i\           /bin/systemctl start docker.service, /usr/bin/systemctl start docker.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers
sed -i '135i\           /bin/systemctl restart docker.service, /usr/bin/systemctl restart docker.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers
/opt/caspida/bin/Caspida sync-cluster /opt/caspida
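Optionally, verify that the edited file still parses as valid sudoers syntax (a suggested check, not part of the original workaround; visudo -c -f validates a single sudoers file):
sudo visudo -c -f /opt/caspida/etc/sudoers.d/ubasudoers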
2023-02-02 UBA-16909 "UBA_HADOOP_SIZE key is either missing or has no value" error encountered when uba-restore is run from a backup created with the --no-data option
2023-01-19 UBA-16827 Exporting more than 3,000 anomalies through the UI crashes the UI.
2023-01-09 UBA-16774 Caspida start-all command should check and wait for all job agents to be available before issuing the command to restart live data sources

Workaround:
Manually restart the data sources.
2022-12-05 UBA-16617 Repeated Kafka warning message "Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order"

Workaround:
1) On the ZooKeeper node (typically node 2 in a multi-node deployment), find all leader-epoch-checkpoint files:
locate leader-epoch-checkpoint
(you can also use a find command if locate is not available; see the sketch below)
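A find-based equivalent (a sketch; the path matches the checkpoint locations used in the script below):
find /var/vcap/store/kafka -name leader-epoch-checkpoint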

a) Copy the results into a script, adding ">" before each path. For example:

#!/bin/bash
> /var/vcap/store/kafka/AnalyticsTopic-0/leader-epoch-checkpoint
> /var/vcap/store/kafka/AnalyticsTopic-1/leader-epoch-checkpoint
> /var/vcap/store/kafka/AnalyticsTopic-10/leader-epoch-checkpoint
> /var/vcap/store/kafka/AnalyticsTopic-11/leader-epoch-checkpoint
...
b) Make the script executable:
chmod +x <script name>.sh
2) On node 1, run:
/opt/caspida/bin/Caspida stop-all
3) On the ZooKeeper node, run:
./<script name>.sh
4) On node 1, run:
/opt/caspida/bin/Caspida start-all
5) Check the logs to see whether the warning messages still appear on the ZooKeeper node:
tail -f /var/vcap/sys/log/kafka/server.log

6) If you see the following warning repeated:

WARN Resetting first dirty offset of __consumer_offsets-17 to log start offset 3346 since the checkpointed offset 3332 is invalid. (kafka.log.LogCleanerManager$)
a) Clear cleaner-offset-checkpoint on the ZooKeeper node by running:
> /var/vcap/store/kafka/cleaner-offset-checkpoint
b) Then on node 1, run:
/opt/caspida/bin/Caspida stop-all && /opt/caspida/bin/Caspida start-all
2022-11-17 UBA-16555 Assets Cache Update Query Does Not Support Multi-value Data and Causes Postgres Log Size Increase

Workaround:
The number of errors logged can be reduced by increasing the cache size, which avoids cache updates.
  1. Delete the contents of /var/vcap/sys/log/postgresql/.
  2. Increase the cache size setting identity.resolution.assetscache.capacity in the file uba-env.properties (see the example after this list).
  3. Sync the configuration file: /opt/caspida/bin/Caspida sync-cluster
  4. Restart UBA with stop-all and start-all.
  5. Check the cache usage using the command: irscan -R -m
  6. If the cache is nearly 95% used, increase the setting from step 2 and check the usage again.
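For example, a hypothetical edit for step 2 (the capacity value shown is illustrative only; choose a value appropriate for your deployment):
identity.resolution.assetscache.capacity=2000000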

2022-09-06 UBA-16289 7-node UBA deployment has an invalid value for system.messaging.rawdatatopic.retention.time in caspidatunables-7_node.conf

Workaround:
SSH into the management node as the caspida user.

1. Edit the following two files:

/etc/caspida/local/conf/deployment/uba-tuning.properties
/opt/caspida/conf/deployment/recipes/caspida/caspidatunables-7_node.conf

In both files, correct the field system.messaging.rawdatatopic.retention.time to be 1d instead of 1dq (see the sed sketch after these steps).

2. Sync the cluster

/opt/caspida/bin/Caspida sync-cluster /etc/caspida/local/conf/deployment/
/opt/caspida/bin/Caspida sync-cluster /opt/caspida/conf/deployment/recipes/caspida/

3. Restart the cluster

/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
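A minimal sed sketch for step 1, assuming the property is stored in key=value form in both files:

sed -i 's/system\.messaging\.rawdatatopic\.retention\.time=1dq/system.messaging.rawdatatopic.retention.time=1d/' \
  /etc/caspida/local/conf/deployment/uba-tuning.properties \
  /opt/caspida/conf/deployment/recipes/caspida/caspidatunables-7_node.conf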
2022-08-23 UBA-16206 Problem with SSO settings EntityID and Issuer in UI
2022-07-27 UBA-16003 Redis error "Waiting for the cluster to join..." during 20-Node HA/DR sync between Primary and Standby
2022-06-21 UBA-15871 UI errors out when trying to edit the name of an output connector without retyping the password: "EVP_DecryptFinal_ex:bad decrypt"

Workaround:
When editing the name of an output connector, also retype the password of the Splunk instance to avoid the error.
2022-04-14 UBA-15608, UBA-14502 Exporting more than 4,300 Anomalies table results crashes the UBA UI (permanent fix for UBA-14502)
2022-02-14 UBA-15364 Spark HistoryServer running out of memory for large deployments with error: "java.lang.OutOfMemoryError: GC overhead limit exceeded"

Workaround:
Open the following file for editing on the Spark History Server node: /var/vcap/packages/spark/conf/spark-env.sh

You can check the spark.history field in deployments.conf to find out which node runs the Spark History Server.

Update the following setting to 3G: SPARK_DAEMON_MEMORY=3G

Afterwards, restart the Spark services:

/opt/caspida/bin/Caspida stop-spark && /opt/caspida/bin/Caspida start-spark
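A minimal sketch of the spark-env.sh edit (assuming SPARK_DAEMON_MEMORY is already defined in the file; if the line is absent, append it instead):

sed -i 's/^SPARK_DAEMON_MEMORY=.*/SPARK_DAEMON_MEMORY=3G/' /var/vcap/packages/spark/conf/spark-env.sh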
2022-01-31 UBA-15328 Running replication setup on 20 node clusters fails with "psql: could not connect to server: Connection refused"

Workaround:
Contact Support for the revised replication setup scripts.
2022-01-25 UBA-15321 Upgrade script for Ubuntu systems needs revised commands to install external packages correctly

Workaround:
If the upgrade to UBA 5.0.5 failed in a locked-down environment with no internet connection, perform the following steps on the failed UBA node:
  1. Edit the file: /home/caspida/patch_uba_505/bin/utils/patch_uba.sh
  2. Replace the line ssh ${host} "${SUDOCMD} apt-get -y -f install ${ExtPkgsDir}/openjdk*.deb && exit" with ssh ${host} "${SUDOCMD} dpkg --force-confold --force-all --refuse-downgrade -i ${ExtPkgsDir}/openjdk*.deb && exit"
  3. Replace the line ${SUDOCMD} apt-get -y -f install ${ExtPkgsDir}/ca-certificates*.deb with ${SUDOCMD} dpkg --force-confnew --refuse-downgrade -i ${ExtPkgsDir}/ca-certificates*.deb
  4. Save the file.
  5. Run the upgrade script again.

2022-01-21 UBA-15311 Upgrade Standby cluster fails "ERROR: cannot execute UPDATE in a read-only transaction."

Workaround:
If the upgrade fails with the following error:
"ERROR:  cannot execute UPDATE in a read-only transaction."

Perform the following steps on the standby system:

  1. Run the following commands to move the database to read-write and drop the replication publication and subscription:
    psql -d caspidadb -c 'BEGIN; SET transaction read write; ALTER DATABASE caspidadb SET default_transaction_read_only = off; COMMIT'
    psql -d caspidadb -c 'DROP PUBLICATION IF EXISTS publication_caspida'
    psql -d caspidadb -c 'DROP SUBSCRIPTION IF EXISTS subscription_caspida'
  2. Run the /opt/caspida/bin/CaspidaCleanup command again.
  3. Run the following command to check the database version:
    psql -d caspidadb -c 'select * from dbinfo'
    Set the version if it is not set correctly.
  4. Run the upgrade script again.

2022-01-17 UBA-15302 "Error from /uba/watchlistChanged" even when an entity is added successfully to the WatchList
2021-12-16 UBA-15241 Threat number mismatch between UBA and ES
2021-11-18 UBA-15139 Postgresql-client-10 missing libpq5 (>=10.17)

Workaround:
libpq5 is not required for the operation of UBA, and no later version is available for Ubuntu 16. The dependency error can be suppressed by performing the following steps:
  1. Open the file /var/lib/dpkg/status for editing.
  2. Scroll to the entry for postgresql-client-10 and find the dependency list entry:
    Depends: libpq5 (>= 10.17), postgresql-client-common (>= 182~), sensible-utils, libc6 (>= 2.15), libedit2 (>= 2.11-20080614), zlib1g (>= 1:1.1.4)
  3. Remove the entry libpq5 (>= 10.17).
  4. Save and close the file.
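A minimal sed sketch of steps 1 through 4 (this assumes the dependency string appears exactly as shown above; back up the file first, because /var/lib/dpkg/status is critical to dpkg):

sudo cp /var/lib/dpkg/status /var/lib/dpkg/status.bak
sudo sed -i 's/libpq5 (>= 10\.17), //' /var/lib/dpkg/status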

2021-11-15 UBA-15130 Custom model not triggering

Workaround:
After editing and saving custom models in the UBA UI, run the following command to sync the changes to the other nodes in the cluster:

  1. SSH to node 1 with the "caspida" user ID.
  2. Run the command: /opt/caspida/bin/Caspida sync-cluster /etc/caspida/local/conf

2021-11-09 UBA-15120, UBA-15192 Customizations to Splunk_TA_nix/local/inputs.conf breaks patch_uba.sh

Workaround:
Any customizations to Splunk_TA_nix/local/inputs.conf must be backed up temporarily before the upgrade, and then re-added after the upgrade completes. See the sketch below.
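For example, a sketch of the backup and restore (assuming the add-on lives in the default location under /opt/splunk/etc/apps):

cp /opt/splunk/etc/apps/Splunk_TA_nix/local/inputs.conf /tmp/inputs.conf.bak
# ... run the upgrade, then restore:
cp /tmp/inputs.conf.bak /opt/splunk/etc/apps/Splunk_TA_nix/local/inputs.conf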
2021-10-14 UBA-14954, UBA-15198 Postgresql 10.17 missing libjson-perl

Workaround:
Before running the patch_uba.sh script, libjson-perl must be installed on all nodes in your environment. Perform the following steps:
  1. Log in to the management node as the caspida user in your Splunk UBA deployment.
  2. Run the following commands:
    sudo dpkg --force-confold --force-all -i /home/caspida/uba-ext-pkgs-5.0.5/postgresql*.deb
    sudo service postgresql stop
    sudo service postgresql start
    /opt/caspida/bin/Caspida stop-all
    /opt/caspida/bin/Caspida start-all
    

2021-10-11 UBA-14927, UBA-15186 UBA 5.0.5 upgrade script fails to upgrade forwarder to 8.2.1

Workaround:
Run the following command if you have Splunk forwarding disabled on a single node:
/opt/splunk/bin/splunk version --accept-license --answer-yes --no-prompt --seed-passwd caspida123

You must use the caspida123 password if you want to set up Splunk forwarding at a later time.

If you have Splunk forwarding enabled in a multi-node environment, perform the following tasks:

  1. Make sure that the uba-ext-pkgs package from the management node is copied to /home/caspida on each node in your deployment and is untarred.
  2. Log in to the management node as the caspida user.
  3. Run the following command once for each node in your deployment. Replace $node with the actual name of each node.
    ssh $node "tar -C /opt -xzf /home/caspida/uba-ext-pkgs-5.0.5/splunk-8.2.1-x86_64.tgz && /opt/splunk/bin/splunk version --accept-license --answer-yes --no-prompt --seed-passwd caspida123"
    

2021-09-29 UBA-14894 UBA EPS drops after Splunk 8.2.1/8.2.2 upgrade on search heads used by data sources
2021-09-28 UBA-14890 ClassCastException errors in the LateralMovementDetection Model
2021-08-30 UBA-14755 Replication.err logging multiple errors - Cannot delete snapshot s_new from path /user: the snapshot does not exist.
2020-10-30 UBA-14287, UBA-17142 Issue while deleting datasource referencing other UBA original primary cluster
2020-06-29 UBA-14199, UBA-12111 Impala JDBC connection leak

Workaround:
  1. Create a file containing the following script on node 1 in your Splunk UBA deployment (node 2 in a 20-node Splunk UBA deployment). For example, copy and paste the script into a new file at /etc/caspida/local/conf/impala_status_check.sh:
    #!/bin/bash
    # Trim the supplied log file to its most recent 100 lines so it does not grow unbounded.
    log_file=$1
    if test -f "$log_file"; then
       tail -n 100 "$log_file" > /tmp/tmp_log_file.log
       mv /tmp/tmp_log_file.log "$log_file"
    fi
    # Count established client connections to the Impala daemon port (21050).
    connection_count=$(netstat -an | grep :21050 | grep ESTABLISHED | wc -l)
    now=$(date)
    if [ "$connection_count" -gt 500 ]; then
       echo "[$now] $connection_count impala connection(s), restarting impala"
       sudo service impala-server restart
       # Capture the restart exit code before it is overwritten by the next test.
       rc=$?
       if [ "$rc" -eq 0 ]; then
          echo "restart succeeded"
       else
          echo "restart failed. return code: $rc"
       fi
    else
       echo "[$now] $connection_count impala connection(s), status is good"
    fi
    
  2. Make the script executable:
    chmod +x /etc/caspida/local/conf/impala_status_check.sh
    
  3. Add the following line to cron using crontab -e:
    0 8 * * * /etc/caspida/local/conf/impala_status_check.sh >> /var/log/impala/impala_status.log 2>&1
    

2020-04-10 UBA-13810 CSV Export of 3000 or More Anomalies Fails
2020-04-07 UBA-13804 Kubernetes certificates expire after one year

Workaround:
Run the following commands on the Splunk UBA master node:
/opt/caspida/bin/Caspida remove-containerization
/opt/caspida/bin/Caspida setup-containerization
/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
2019-10-07 UBA-13227 Backend anomaly and custom model names are displayed in Splunk UBA

Workaround:
Click the reload button in the web browser to force a reload of the UI page.
2019-08-29 UBA-13020 Anomalies migrated from test-mode to active-mode won't be pushed to ES
2019-08-16 UBA-12964 User and device attributions time out and do not load

Workaround:
In some cases, the User Attribution section on the User Details page and the Device Attribution section on the Device Details page do not load because the Advanced Identity Lookup queries are taking a long time to complete.

Perform the following tasks on the management node to work around this issue:

  1. Edit or add the identity.resolution.attribution.query.timerange property in /etc/caspida/local/conf/uba-site.properties and set the time range of the query to a smaller number of days. The default is seven days. This example sets the time range to three days:
    identity.resolution.attribution.query.timerange=3d
  2. In distributed deployments, synchronize the cluster. Run the following command:
    /opt/caspida/bin/Caspida sync-cluster /etc/caspida/local/conf
  3. Run the following commands to restart the Splunk UBA containers:
    /opt/caspida/bin/Caspida stop-containers
    /opt/caspida/bin/Caspida start-containers
    

2019-08-06 UBA-12910 Splunk Direct - Cloud Storage does not expose src_ip field

Workaround:
When ingesting Office 365 SharePoint/OneDrive logs through Splunk Direct - Cloud Storage, add an additional field mapping for src_ip in the final SPL so that it is mapped from ClientIP (| eval src_ip=ClientIP). Make sure to add src_ip to the final list of fields selected using the fields command. For example:
| fields app,change_type,dest_user,file_hash,file_size,object,object_path,object_type,parent_category,parent_hash,sourcetype,src_user,tag,src_ip
Last modified on 11 July, 2023

This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.0.5.1

