Known issues in Splunk UBA
This version of Splunk UBA has the following known issues and workarounds.
Date filed | Issue number | Description |
---|---|---|
2023-06-08 | UBA-17446 | After applying Ubuntu security patches, postgresql is removed, which prevents UBA from starting. Workaround: 1. Stop all UBA services: /opt/caspida/bin/Caspida stop-all 2. Reinstall the postgres package, replacing <Extracted uba external package folder> with your package folder in the command below (for example, for 5.0.5 it is uba-ext-pkgs-5.0.5): sudo dpkg --force-confold --force-all -i /home/caspida/<Extracted uba external package folder>/postgresql*.deb 3. Start all UBA services: /opt/caspida/bin/Caspida start-all |
2023-04-20 | UBA-17188 | The ubasudoers file for restricted sudo access for the caspida user is missing required permission entries. Workaround: Run the following commands in order: 1. sed -i '120i\ /usr/sbin/service cri-docker *, \\' /opt/caspida/etc/sudoers.d/ubasudoers 2. sed -i '130i\ /sbin/service cri-docker *, \\' /opt/caspida/etc/sudoers.d/ubasudoers 3. sed -i '135i\ /bin/systemctl start kubelet.service, /usr/bin/systemctl start kubelet.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers 4. sed -i '135i\ /bin/systemctl restart kubelet.service, /usr/bin/systemctl restart kubelet.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers 5. sed -i '135i\ /bin/systemctl start docker.service, /usr/bin/systemctl start docker.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers 6. sed -i '135i\ /bin/systemctl restart docker.service, /usr/bin/systemctl restart docker.service, \\' /opt/caspida/etc/sudoers.d/ubasudoers 7. /opt/caspida/bin/Caspida sync-cluster /opt/caspida |
2023-02-02 | UBA-16909 | "UBA_HADOOP_SIZE key is either missing or has no value" error encountered when uba-restore is run from a backup created with the --no-data option |
2023-01-19 | UBA-16827 | Exporting more than 3,000 anomalies through the UI crashes the UI. |
2023-01-09 | UBA-16774 | The Caspida start-all command does not check or wait for all job agents to be available before issuing the command to restart live data sources. Workaround: Manually restart the data sources. |
2022-12-05 | UBA-16617 | Repeated Kafka warning message: "Received a PartitionLeaderEpoch assignment for an epoch < latestEpoch. This implies messages have arrived out of order" Workaround: 1. On the Zookeeper node (typically node 2 in a multi-node deployment), find all leader-epoch-checkpoint files: locate leader-epoch-checkpoint (you can also use a find command if locate is not available). a. Copy the results into a script, adding ">" before each result, for example: #!/bin/bash > /var/vcap/store/kafka/AnalyticsTopic-0/leader-epoch-checkpoint > /var/vcap/store/kafka/AnalyticsTopic-1/leader-epoch-checkpoint > /var/vcap/store/kafka/AnalyticsTopic-10/leader-epoch-checkpoint > /var/vcap/store/kafka/AnalyticsTopic-11/leader-epoch-checkpoint ... (see the sketch after this table for one way to generate this script). b. Make the script executable: chmod +x <script name>.sh 2. On node 1, run: /opt/caspida/bin/Caspida stop-all 3. On the Zookeeper node, run: ./<script name>.sh 4. On node 1, run: /opt/caspida/bin/Caspida start-all 5. Check the logs on the Zookeeper node to see whether the warning messages still appear: tail -f /var/vcap/sys/log/kafka/server.log 6. If you see the following warning repeated: WARN Resetting first dirty offset of __consumer_offsets-17 to log start offset 3346 since the checkpointed offset 3332 is invalid. (kafka.log.LogCleanerManager$) a. Clear cleaner-offset-checkpoint on the Zookeeper node by running: > /var/vcap/store/kafka/cleaner-offset-checkpoint b. Then on node 1, run: /opt/caspida/bin/Caspida stop-all && /opt/caspida/bin/Caspida start-all |
2022-11-17 | UBA-16555 | The assets cache update query does not support multi-value data, which causes the Postgres log size to increase. Workaround: Reduce the number of errors that get logged by increasing the cache size, which avoids cache updates. |
2022-09-06 | UBA-16289 | A 7-node UBA deployment has an invalid value for system.messaging.rawdatatopic.retention.time in caspidatunables-7_node.conf. Workaround: SSH into the management node as the caspida user, then: 1. Edit the following two files and correct the field system.messaging.rawdatatopic.retention.time to be 1d instead of 1dq: /etc/caspida/local/conf/deployment/uba-tuning.properties and /opt/caspida/conf/deployment/recipes/caspida/caspidatunables-7_node.conf 2. Sync the cluster: /opt/caspida/bin/Caspida sync-cluster /etc/caspida/local/conf/deployment/ and /opt/caspida/bin/Caspida sync-cluster /opt/caspida/conf/deployment/recipes/caspida/ 3. Restart the cluster: /opt/caspida/bin/Caspida stop-all followed by /opt/caspida/bin/Caspida start-all |
2022-08-23 | UBA-16206 | Problem with SSO settings EntityID and Issuer in UI |
2022-07-27 | UBA-16003 | Redis error "Waiting for the cluster to join..." during 20-Node HA/DR sync between Primary and Standby |
2022-06-21 | UBA-15871 | The UI returns the error "EVP_DecryptFinal_ex:bad decrypt" when you edit the name of an output connector without retyping the password. Workaround: When editing the name of an output connector, also retype the password of the Splunk instance to avoid the error. |
2022-04-14 | UBA-15608, UBA-14502 | Exporting more than 4,300 results from the Anomalies table crashes the UBA UI (permanent fix for UBA-14502). |
2022-02-14 | UBA-15364 | The Spark HistoryServer runs out of memory on large deployments with the error "java.lang.OutOfMemoryError: GC overhead limit exceeded". Workaround: Open the following file for editing on the Spark History Server node (check the spark.history field in deployments.conf to find out which node runs the Spark History Server): /var/vcap/packages/spark/conf/spark-env.sh Update the memory setting to 3G (see the sketch after this table), then restart the Spark services: /opt/caspida/bin/Caspida stop-spark && /opt/caspida/bin/Caspida start-spark |
2022-01-31 | UBA-15328 | Running replication setup on 20-node clusters fails with "psql: could not connect to server: Connection refused" Workaround: Contact Support for the revised replication setup scripts. |
2022-01-25 | UBA-15321 | The upgrade script for Ubuntu systems needs revised commands to install external packages correctly. Workaround: If the upgrade to UBA 5.0.5 failed in a lockdown environment with no internet connection, perform the following steps on the failed UBA node: |
2022-01-21 | UBA-15311 | Upgrading the standby cluster fails with "ERROR: cannot execute UPDATE in a read-only transaction." Workaround: If the upgrade fails with this error, perform the following steps on the standby system: |
2022-01-17 | UBA-15302 | "Error from /uba/watchlistChanged" even when an entity is added successfully to the WatchList |
2021-12-16 | UBA-15241 | Threat number mismatch between UBA and ES |
2021-11-18 | UBA-15139 | postgresql-client-10 is missing libpq5 (>=10.17). Workaround: libpq5 is not required for the operation of UBA, and no later version is available for Ubuntu 16. It can be suppressed by performing the following: |
2021-11-15 | UBA-15130 | Custom models do not trigger. Workaround: After editing and saving the custom models in the UBA UI, run the following command to sync the changes to the other nodes in the cluster: 1. SSH to node 1 as the caspida user. 2. Run: /opt/caspida/bin/Caspida sync-cluster /etc/caspida/local/conf |
2021-11-09 | UBA-15120, UBA-15192 | Customizations to Splunk_TA_nix/local/inputs.conf break patch_uba.sh. Workaround: Back up any customizations to Splunk_TA_nix/local/inputs.conf temporarily before the upgrade, then re-add them after the upgrade completes (see the sketch after this table). |
2021-10-14 | UBA-14954, UBA-15198 | PostgreSQL 10.17 is missing libjson-perl. Workaround: Prior to running the patch_uba.sh script, libjson-perl must be installed on all nodes. Perform the following steps (see the sketch after this table for an example installation command): |
2021-10-11 | UBA-14927, UBA-15186 | The UBA 5.0.5 upgrade script fails to upgrade the forwarder to 8.2.1. Workaround: If you have Splunk forwarding disabled on a single node, run the following command: /opt/splunk/bin/splunk version --accept-license --answer-yes --no-prompt --seed-passwd caspida123 You must use the caspida123 password if you want to set up Splunk forwarding at a later time. If you have Splunk forwarding enabled in a multi-node environment, perform the following tasks: |
2021-09-29 | UBA-14894 | UBA EPS drops after Splunk 8.2.1/8.2.2 upgrade on search heads used by data sources |
2021-09-28 | UBA-14890 | ClassCastException errors in the LateralMovementDetection Model |
2021-08-30 | UBA-14755 | Replication.err logs multiple errors: "Cannot delete snapshot s_new from path /user: the snapshot does not exist." |
2020-10-30 | UBA-14287, UBA-17142 | Issue while deleting a data source referencing another UBA original primary cluster |
2020-06-29 | UBA-14199, UBA-12111 | Impala JDBC connections leak. Workaround: |
2020-04-10 | UBA-13810 | CSV Export of 3000 or More Anomalies Fails |
2020-04-07 | UBA-13804 | Kubernetes certificates expire after one year. Workaround: Run the following commands on the Splunk UBA master node: 1. /opt/caspida/bin/Caspida remove-containerization 2. /opt/caspida/bin/Caspida setup-containerization 3. /opt/caspida/bin/Caspida stop-all 4. /opt/caspida/bin/Caspida start-all |
2019-10-07 | UBA-13227 | Backend anomaly and custom model names are displayed in Splunk UBA. Workaround: Click the reload button in the web browser to force a reload of the UI page. |
2019-08-29 | UBA-13020 | Anomalies migrated from test-mode to active-mode won't be pushed to ES |
2019-08-16 | UBA-12964 | User and device attributions time out and do not load. Workaround: In some cases, the User Attribution section on the User Details page and the Device Attribution section on the Device Details page do not load because the Advanced Identity Lookup queries take a long time to complete. Perform the following tasks on the management node to work around this issue: |
2019-08-06 | UBA-12910 | Splunk Direct - Cloud Storage does not expose the src_ip field. Workaround: When ingesting Office 365 SharePoint/OneDrive logs through Splunk Direct - Cloud Storage, add an additional field mapping for src_ip in the final SPL, mapped from ClientIP (| eval src_ip=ClientIP). Make sure to add src_ip to the final list of fields selected with the fields command. For example:
| fields app,change_type,dest_user,file_hash,file_size,object,object_path,object_type,parent_category,parent_hash,sourcetype,src_user,tag,src_ip |
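For UBA-16617 above, instead of copying the locate output into a script by hand, you can generate the truncation script directly. This is a minimal sketch, not part of the documented workaround: it assumes the Kafka data directory /var/vcap/store/kafka shown in the workaround, and the script name clear_epoch_checkpoints.sh is illustrative.

```bash
#!/bin/bash
# Build the truncation script described in the UBA-16617 workaround:
# one "> <file>" line per leader-epoch-checkpoint file found under the
# Kafka data directory. The generated script name is illustrative.
{
  echo '#!/bin/bash'
  find /var/vcap/store/kafka -name leader-epoch-checkpoint | sed 's|^|> |'
} > clear_epoch_checkpoints.sh
chmod +x clear_epoch_checkpoints.sh

# Review the generated script before running it on the Zookeeper node
# while UBA is stopped (step 3 of the workaround).
cat clear_epoch_checkpoints.sh
```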
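For UBA-15364 above, the issue text does not name the spark-env.sh setting to raise to 3G. The following is a minimal sketch assuming the standard Spark daemon heap variable SPARK_DAEMON_MEMORY, which sizes Spark daemons such as the History Server; confirm the variable your deployment actually uses before applying it.

```bash
# Run on the node hosting the Spark History Server (see the spark.history
# field in deployments.conf). SPARK_DAEMON_MEMORY is an assumption, not
# confirmed by the issue text.
echo 'export SPARK_DAEMON_MEMORY=3G' >> /var/vcap/packages/spark/conf/spark-env.sh

# Restart the Spark services as described in the workaround.
/opt/caspida/bin/Caspida stop-spark && /opt/caspida/bin/Caspida start-spark
```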
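For UBA-15120, UBA-15192 above, this is a minimal sketch of temporarily backing up and restoring the customized inputs.conf around the upgrade. The TA path under /opt/splunk/etc/apps and the backup location are assumptions; adjust them to match your environment.

```bash
# Paths are assumptions: point TA_DIR at the Splunk_TA_nix app on your UBA
# node and BACKUP at any writable location.
TA_DIR=/opt/splunk/etc/apps/Splunk_TA_nix
BACKUP=/home/caspida/inputs.conf.pre-upgrade

# Before running patch_uba.sh: save the customized inputs.conf.
cp "$TA_DIR/local/inputs.conf" "$BACKUP"

# After the upgrade completes: restore the customizations.
cp "$BACKUP" "$TA_DIR/local/inputs.conf"
```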
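For UBA-14954, UBA-15198 above, this is a minimal sketch of installing libjson-perl on an Ubuntu node before running patch_uba.sh, assuming the node can reach the Ubuntu package repositories. Repeat on every node in the cluster.

```bash
# Run on every UBA node before patch_uba.sh.
sudo apt-get update
sudo apt-get install -y libjson-perl

# Verify that the package is installed.
dpkg -s libjson-perl | grep Status
```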