Known Issues in Splunk UBA
This version of Splunk UBA has the following known issues and workarounds.
Date filed | Issue number | Description |
---|---|---|
2023-01-09 | UBA-16774 | The Caspida start-all command should check and wait for all job agents to be available before issuing the command to restart live data sources. Workaround: Manually restart the data sources. |
2022-09-06 | UBA-16289 | A 7-node UBA deployment has an invalid value for system.messaging.rawdatatopic.retention.time in caspidatunables-7_node.conf. Workaround: SSH into the management node as the caspida user, then: 1. Edit the following two files: /etc/caspida/local/conf/deployment/uba-tuning.properties and /opt/caspida/conf/deployment/recipes/caspida/caspidatunables-7_node.conf, and correct the field system.messaging.rawdatatopic.retention.time to be 1d instead of 1dq. 2. Sync the cluster: /opt/caspida/bin/Caspida sync-cluster /etc/caspida/local/conf/deployment/ and /opt/caspida/bin/Caspida sync-cluster /opt/caspida/conf/deployment/recipes/caspida/ 3. Restart the cluster: /opt/caspida/bin/Caspida stop-all followed by /opt/caspida/bin/Caspida start-all |
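For reference, a scripted version of this workaround might look like the following sketch. It assumes the property is written as a key=value pair in both files; verify each file manually before syncing the cluster.

```
# Run as the caspida user on the management node.
# Replace the invalid value 1dq with 1d in both configuration files
# (assumes the property is written as key=value).
for f in /etc/caspida/local/conf/deployment/uba-tuning.properties \
         /opt/caspida/conf/deployment/recipes/caspida/caspidatunables-7_node.conf; do
  sed -i 's/^\(system\.messaging\.rawdatatopic\.retention\.time[[:space:]]*=[[:space:]]*\)1dq[[:space:]]*$/\11d/' "$f"
done

# Push the corrected files to the rest of the cluster.
/opt/caspida/bin/Caspida sync-cluster /etc/caspida/local/conf/deployment/
/opt/caspida/bin/Caspida sync-cluster /opt/caspida/conf/deployment/recipes/caspida/

# Restart Splunk UBA.
/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
```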
2022-04-14 | UBA-15607, UBA-14237 | Unable to create an Anomaly Table filter or an AAR filter for Specific Devices when specifying more than 20 CIDRs |
2022-04-14 | UBA-15608, UBA-14502 | Exporting more than 4,300 Anomalies table results crashes the UBA UI (permanent fix for UBA-14502) |
2022-02-14 | UBA-15364 | The Spark History Server runs out of memory on large deployments with the error "java.lang.OutOfMemoryError: GC overhead limit exceeded". Workaround: Open the following file for editing on the node that runs the Spark History Server (check the spark.history field in deployments.conf to identify the node): /var/vcap/packages/spark/conf/spark-env.sh. Update the memory setting to 3G (see the sketch after this entry), then restart the Spark services: /opt/caspida/bin/Caspida stop-spark && /opt/caspida/bin/Caspida start-spark |
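The specific variable to change is not named above. In a stock spark-env.sh the History Server heap is usually controlled by SPARK_DAEMON_MEMORY, so a sketch of the change might look like the following; treat the variable name as an assumption and confirm it for your deployment before editing.

```
# /var/vcap/packages/spark/conf/spark-env.sh on the node listed under
# spark.history in deployments.conf.
# Assumption: the History Server heap is set through SPARK_DAEMON_MEMORY.
export SPARK_DAEMON_MEMORY=3g

# Restart the Spark services afterwards:
# /opt/caspida/bin/Caspida stop-spark && /opt/caspida/bin/Caspida start-spark
```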
2022-01-25 | UBA-15321 | The upgrade script for Ubuntu systems needs revised commands to install external packages correctly. Workaround: If the upgrade to UBA 5.0.5 failed in a lockdown environment with no internet connection, perform the following steps on the failed UBA node:
|
2021-10-27 | UBA-15074 | The warm standby environment is not syncing. Workaround: If the following error occurs while performing a manual sync between the standby systems, use the steps below to fix it: Traceback (most recent call last): File "/opt/caspida/bin/replication/rpc.py", line 105, in <module> f(**kwargs) File "/opt/caspida/bin/replication/rpc.py", line 38, in clean stb = _createStandby(**kwargs) File "/opt/caspida/bin/replication/rpc.py", line 20, in _createStandby co = ucoordinator.Coordinator(mode="standby", **kwargs) File "/opt/caspida/bin/replication/coordinator.py", line 37, in __init__ self.properties = self.loadProperty() File "/opt/caspida/bin/replication/coordinator.py", line 96, in loadProperty for line in fd: File "/usr/lib64/python3.6/encodings/ascii.py", line 26, in decode return codecs.ascii_decode(input, self.errors)[0] UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 58: ordinal not in range(128) Steps to fix: 1. Check the following environment properties in /etc/locale.conf; each should be set to en_US.UTF-8: LANG, LC_CTYPE, LC_ALL. 2. If any of these properties is not set, set it to en_US.UTF-8 and then run source /etc/locale.conf. 3. If the issue is still not resolved after setting these properties, edit /opt/caspida/conf/uba-default.properties and replace the en dash in the copyright section with a hyphen (line 3, after the 59th character). See the sketch after this entry. |
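A minimal shell sketch of steps 1 through 3 follows, assuming a Linux node with GNU grep; the grep pattern for locating non-ASCII bytes is an illustration and not part of the documented workaround.

```
# Step 1: confirm LANG, LC_CTYPE, and LC_ALL are set to en_US.UTF-8.
grep -E 'LANG|LC_CTYPE|LC_ALL' /etc/locale.conf

# Step 2: if a value is missing, append it and re-read the file, for example:
# echo 'LC_ALL=en_US.UTF-8' | sudo tee -a /etc/locale.conf && source /etc/locale.conf

# Step 3: locate non-ASCII bytes (such as the en dash) in uba-default.properties
# so the character can be replaced with a hyphen.
grep -nP '[^\x00-\x7F]' /opt/caspida/conf/uba-default.properties
```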
2021-09-29 | UBA-14894 | UBA EPS drops after Splunk 8.2.1/8.2.2 upgrade on search heads used by data sources |
2021-09-28 | UBA-14890 | ClassCastException errors in the LateralMovementDetection Model |
2021-08-30 | UBA-14755 | Replication.err logging multiple errors - Cannot delete snapshot s_new from path /user: the snapshot does not exist. |
2021-08-12 | UBA-14702 | Custom TimeSeries models have not executed since the upgrade to 5.0.4.1 |
2021-08-03 | UBA-14675 | Error when upgrading the standby system. Workaround: If the upgrade fails with the following error: "ERROR: cannot execute UPDATE in a read-only transaction." perform the following steps on the standby system:
If the replication setup fails with the following error: subprocess.CalledProcessError: Command 'psql -t -c "DROP SUBSCRIPTION IF EXISTS subscription_caspida" -d caspidadb -h gsoc-rdc-p-ue-srv01' returned non-zero exit status 1. Perform the following steps:
|
2021-07-30 | UBA-14669 | UBA reports multiple errors: Status code DS-1, Status code DS-5, Status code DS_LAGGING_WARN |
2021-05-04 | UBA-14516 | Health Monitor - An error occurred while retrieving data - Error from /uba/monitor Invalid Json response: Error in getting the response Parameters: {"queryStatus":true,"queryDataQualityStatus":true} Workaround:
|
2021-04-27 | UBA-14508 | Unable to add datasource (Connection time out) - UBA to Cloud connectivity via proxy Workaround: Contact Splunk support. |
2021-01-11 | UBA-14379 | Discrepancy between Threats and notable events in ES. Workaround: As a temporary workaround, edit the search in ES: look for the "UEBA Threat Detected" correlation search and remove '| search uba_threat_status != closed'
|
2020-10-30 | UBA-14287, UBA-17142 | Issue while deleting datasource referencing other UBA original primary cluster |
2020-06-29 | UBA-14199, UBA-12111 | Impala JDBC connections leak. Workaround:
|
2020-04-10 | UBA-13810 | CSV Export of 3000 or More Anomalies Fails |
2020-04-07 | UBA-13804 | Kubernetes certificates expire after one year. Workaround: Run the following commands on the Splunk UBA master node: /opt/caspida/bin/Caspida remove-containerization /opt/caspida/bin/Caspida setup-containerization /opt/caspida/bin/Caspida stop-all /opt/caspida/bin/Caspida start-all |
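If you want to confirm that expired certificates are the cause before re-running containerization, you can inspect a certificate's validity dates with openssl; the certificate path below is an assumption and may differ in your deployment.

```
# Print the expiry date of the Kubernetes API server certificate.
# Path is an assumption; adjust it for your deployment.
openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt
```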
2019-10-07 | UBA-13227 | Backend anomaly and custom model names are displayed in Splunk UBA. Workaround: Click the reload button in the web browser to force reload the UI page. |
2019-08-29 | UBA-13020 | Anomalies migrated from test-mode to active-mode won't be pushed to ES |
2019-08-06 | UBA-12910 | Splunk Direct - Cloud Storage does not expose the src_ip field. Workaround: When ingesting Office 365 SharePoint/OneDrive logs through Splunk Direct - Cloud Storage, add an additional field mapping for src_ip in the final SPL so that it is mapped from ClientIP (| eval src_ip=ClientIP). Make sure to add src_ip to the final list of fields selected using the fields command. For example:
| fields app,change_type,dest_user,file_hash,file_size,object,object_path,object_type,parent_category,parent_hash,sourcetype,src_user,tag,src_ip |
This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.0.4.1