System requirements for Splunk UBA
Install Splunk UBA with assistance from Splunk Professional Services.
Hardware requirements
You can install Splunk UBA on a physical server, a virtual machine, or in the cloud. Hardware requirements for UBA are the same no matter where you install UBA. UBA installations are supported in any virtual machine or cloud environment, so long as the underlying hardware and operating system requirements are met.
Install Splunk UBA on its own hardware stack. Do not install Splunk UBA on the same machines as Splunk Enterprise.
You can use Splunk Professional Services resources to assist with your UBA installation.
Verify the following hardware requirements before installing Splunk UBA:
- Disk space and memory requirements for installing Splunk UBA
- (Optional) Plan for configuring Splunk UBA warm standby
- (Optional) Add additional disks for offline Splunk UBA backups
- Supported AWS server instance types
- Disk subsystem IOPS requirements
- Network interface requirements
- Install third-party agents after you install Splunk UBA
- Directories created or modified on the disk
Disk space and memory requirements for installing Splunk UBA
Every machine in your Splunk UBA deployment must meet the following requirements. Not all machines in your deployment need to have matching specifications, but they must all meet the minimum requirements.
- 16 CPU cores. If a machine has more than 16 cores, the additional cores are not used by Splunk UBA.
- 64GB RAM. If a machine has more than 64GB RAM, the additional RAM is used by Splunk UBA as needed.
- Disk 1 - 100GB dedicated disk space for Splunk UBA installation on bare metal systems running RHEL or OEL. The AMI images are pre-configured with 100GB root disks for Splunk UBA installation.
- Disk 2 - 1TB (1024GB) additional disk space for metadata storage.
- Disk 3 - 1TB (1024GB) additional disk space for each node running Spark services.
See Where services run in Splunk UBA in the Administer Splunk User Behavior Analytics manual for more information about how to determine where Splunk UBA services are running in your deployment. See Directories created or modified on the disk for more information about Splunk UBA directories and space requirements.
Do not manually mount the disks before installing Splunk UBA. During the installation procedure, the mount -a command properly mounts the disks for you.
The following table summarizes the disk requirements per deployment.
Splunk UBA Deployment | Nodes Requiring 100GB Disk Space for Disk 1 | Nodes Requiring a 1TB Disk for Disk 2 | Nodes Requiring a 1TB Disk for Disk 3 |
---|---|---|---|
1 Node | Node 1 | Node 1 | Node 1 |
3 Nodes | All Nodes | All Nodes | Nodes 1, 3 |
5 Nodes | All Nodes | All Nodes | Nodes 1, 4, 5 |
7 Nodes | All Nodes | All Nodes | Node 7 |
10 Nodes | All Nodes | All Nodes | Nodes 9, 10 |
20 Nodes | All Nodes | All Nodes | Nodes 17, 18, 19, 20 |
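As a quick sanity check before you run the installer, you can confirm that the additional, still-unmounted data disks are present on each node. This is an informal check rather than a documented step; device names vary by environment:
# List block devices and their sizes; the 1TB data disks should appear with no mount point yet
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
The 100GB root disk should already be mounted, while the disks intended for Disk 2 and Disk 3 should show no mount point until the installer mounts them.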
(Optional) Plan for configuring Splunk UBA warm standby
Configure warm standby in your deployment for high availability and disaster recovery. Allocate additional servers for a warm standby solution, where you can manually fail over Splunk UBA to a full backup system. The backup system must have the same number of nodes as the active system. See Configure warm standby in Splunk UBA in Administer Splunk User Behavior Analytics.
(Optional) Add additional disks for offline Splunk UBA backups
Use the backup and restore scripts located in /opt/caspida/bin/utils to migrate your Splunk UBA deployment to the next larger size on the same operating system. For example, you can migrate from 5 nodes to 7 nodes, or from 10 nodes to 20 nodes. If you want to migrate from 7 nodes to 20 nodes, migrate from 7 nodes to 10 nodes first, then from 10 nodes to 20 nodes.
Add an additional disk to the Splunk UBA management node, mounted as /backup, for the Splunk UBA backups used to restore Splunk UBA during the migration process.
The size of the additional disk must follow these guidelines:
- The disk size must be at least half the size of your deployment in terabytes. For example, a 10-node system requires a 5TB disk.
- If you are creating archives, allow for an additional 50 percent of the backup disk size. For example, a 10-node system requires a 5TB disk for backups and an additional 2.5TB for archives, so you would need a 7.5TB disk for archived backups.
The table summarizes the minimum disk size requirements for Splunk UBA backups per deployment:
Number of Splunk UBA Nodes | Minimum Disk Size for Backup (without archives) | Minimum Disk Size for Backup (with archives) |
---|---|---|
1 Node | 1TB | 1.5TB |
3 Nodes | 1TB | 1.5TB |
5 Nodes | 2TB | 3TB |
7 Nodes | 4TB | 6TB |
10 Nodes | 5TB | 7.5TB |
20 Nodes | 10TB | 15TB |
If you have previous backups on the same disk, be sure to also take this into account when determining available disk space. See Prepare to backup Splunk UBA in Administer Splunk User Behavior Analytics.
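The documentation does not prescribe a specific way to prepare the /backup disk, but a minimal sketch on a typical Linux host might look like the following. The device name /dev/sdd and the xfs filesystem are assumptions; substitute the values for your environment:
# Format the additional backup disk and mount it at /backup (device name is an example)
sudo mkfs.xfs /dev/sdd
sudo mkdir -p /backup
sudo mount /dev/sdd /backup
# Persist the mount across reboots
echo '/dev/sdd /backup xfs defaults 0 0' | sudo tee -a /etc/fstab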
Supported AWS server instance types
If you run Splunk UBA on an AWS instance:
- AWS measures CPU power on Elastic Compute Cloud (EC2) instances in virtual CPUs (vCPUs), not real CPUs.
- Each vCPU is a hyper thread of an Intel Xeon core on most AWS instance types. See Amazon EC2 Instance Types on the AWS website.
- As a hyper thread of a core, a vCPU acts as a core, but the physical core must schedule its workload among other workloads of other vCPUs that the physical core handles.
Installation of Splunk UBA on Amazon Web Services (AWS) servers is supported on the following instance types:
- m4.4xlarge
- m5.4xlarge
- m5a.4xlarge
- m5.8xlarge
- m6a.4xlarge
- m6i.4xlarge
The m6a.4xlarge and m6i.4xlarge instance types have less memory than the other supported instance types. This memory issue is surfaced as a warning when running UBA pre-check scripts. You can ignore this warning.
All Splunk UBA nodes in your AWS environment must use gp3 (3000 IOPS) or higher performant volumes for storage.
Disk subsystem IOPS requirements
For all new Splunk UBA deployments, the disk subsystem for each Splunk UBA server must support an average of 1200 input/output operations per second (IOPS). Existing deployments on 800 IOPS servers can be upgraded without having to upgrade the disks.
IOPS are a measurement of how much data throughput a hard drive can sustain. Because a hard drive reads and writes at different speeds, there are IOPS numbers for disk reads and writes. The average IOPS is the average between those two figures. See Disk subsystem in the Capacity Planning Manual for Splunk Enterprise for more about IOPS.
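If you want to estimate whether a disk subsystem can sustain the required throughput before installation, one approach (not a Splunk-provided procedure) is a short benchmark with the fio tool, assuming the fio package is available on the host:
# 60-second mixed random read/write test; the directory and parameters are illustrative,
# so point --directory at the filesystem you want to test
sudo fio --name=uba-iops-check --directory=/var/vcap --size=1G \
  --rw=randrw --bs=4k --ioengine=libaio --direct=1 --iodepth=16 \
  --runtime=60 --time_based --group_reporting
# The combined read and write IOPS reported in the output should average at least 1200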
Network interface requirements
Splunk UBA requires at least one 1Gb ethernet interface on each node.
Configure each Splunk UBA node with at least one control plane interface and one data plane interface. Configure the control plane interfaces on one subnet, and the data plane interfaces on a separate subnet.
Connect all interfaces on the data plane network with at least one 10GbE or better ethernet interface. For larger clusters, use 25GbE, 40GbE or 50GbE network interfaces.
When installing Splunk UBA, ensure that the interface on which you are setting up Splunk UBA is associated with the "public" zone of the firewall and has the default route set for the interface.
Use the following command to check the firewall zone of the specific interface:
sudo firewall-cmd --get-zone-of-interface=<interface name>
Ensure that the default route is present for the specific interface:
sudo ip route | grep "default"
Example:
[caspida@uba-host-01 ~]$ sudo ip route | grep "default"
default via 10.159.248.1 dev ens34 proto dhcp src 10.159.249.161 metric 101
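If the interface is not yet in the public zone, one way to assign it on firewalld-based systems is shown below. The interface name ens34 is taken from the example output above; replace it with your own interface name:
# Move the interface into the public zone and make the change permanent
sudo firewall-cmd --zone=public --change-interface=ens34 --permanent
sudo firewall-cmd --reload
# Confirm the assignment
sudo firewall-cmd --get-zone-of-interface=ens34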
Install third-party agents after you install Splunk UBA
Third-party agents, such as anti-virus software, can block a successful installation of Splunk UBA.
Installing security agents similar to b9daemon (Carbon Black) can cause high CPU utilization and detrimentally impact UBA functionality.
Install any desired third-party agents only after Splunk UBA is installed, and monitor Splunk UBA for any possible interference after the agents are installed and running.
If CPU utilization spikes when third-party agents are running on your UBA nodes, exclude the /var/vcap/store/docker/overlay2/ directory in the third-party agent's settings.
Directories created or modified on the disk
Splunk UBA creates or modifies the following directories on the disk during installation.
Directory | Disk | Description of Contents | Updated During Upgrade? | Recommended Space |
---|---|---|---|---|
/home/caspida | Disk 1 | Contains the Splunk UBA installation and upgrade .tgz files. | Yes | 20 GB |
/opt/caspida | Disk 1 | Contains the Splunk UBA software. | Yes | 10 GB |
/opt/splunk | Disk 1 | Contains the Splunk forwarder to send data to the Splunk platform. | Yes | 10 GB |
/etc/caspida/local/conf | Disk 1 | Contains custom configuration files affecting your local environment. | No | 1 GB |
/var/vcap | Disk 2 | Contains the runtime data for Splunk UBA services. | Yes | 1 TB |
/var/vcap2 | Disk 3 | Contains the runtime data for Spark services. | Yes | 1 TB |
/var | Disk 1 | Contains various support files required by Splunk UBA. The /var/lib directory must have a minimum of 20 GB. | Yes | 50 GB |
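After installation completes, a quick way to confirm that these directories landed on filesystems of the expected size is to check the mounts. This is an informal check, not part of the documented procedure:
# Show the size and free space of the filesystems backing the Splunk UBA directories
df -h / /var/vcap /var/vcap2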
Operating system requirements
You must install Splunk UBA on a server that uses one of the following operating systems:
Installing Splunk UBA on hardened operating systems is not supported.
Operating System | Can I upgrade Splunk UBA on this OS version? | Can I install Splunk UBA on this OS version? | Kernel Version Tested | Installation Type and Description |
---|---|---|---|---|
Red Hat Enterprise Linux (RHEL) 8.6 | Yes | Yes | kernel-4.18.0-372.32.1.el8_6.x86_64 | Install on supported hardware. Obtain the software from Splunk UBA Software Installation Package on Splunkbase. |
Oracle Enterprise Linux (OEL) 8.7 | Yes | Yes | 5.4.17-2136.315.5.el8uek.x86_64, 4.18.0-425.3.1.el8 if security patch applied | |
Ubuntu 20.04 LTS | Yes | No | 5.4.0-137-generic | This operating system is bundled with the AMI installation package. |
Splunk UBA requires that the operating system and underlying component versions match exactly on all nodes in your deployment. Updating the operating system or any components in your deployment can break dependencies that will cause Splunk UBA to stop working and is not recommended. If you must update the operating system before the next release of Splunk UBA, do so in a test environment and verify that everything is working properly before applying the upgrade to your production environment.
Additional RHEL requirements
Make sure your RHEL server has access to the RHEL repositories, and the license includes the following subscription names:
- Red Hat Enterprise Linux Server
- Red Hat Enterprise Linux Server - Extended Update Support (EUS)
The RHEL EUS subscription enables you to remain on previous stable releases of RHEL for up to approximately 24 months.
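One way to confirm that the required subscriptions and repositories are attached, assuming the host is registered with subscription-manager, is:
# List the subscriptions attached to this system
sudo subscription-manager list --consumed
# List the repositories that are currently enabled
sudo subscription-manager repos --list-enabled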
On December 31, 2021, Red Hat's CentOS Linux reached end of life (EOL). Per Red Hat, CentOS Linux users must migrate to a new operating system to continue receiving updates, patches, and new features, and Red Hat encourages customers to migrate to RHEL. Additionally, Red Hat made the new "CentOS Stream" operating system a non-production, pre-release version of RHEL with no long-term support model. Splunk UBA does not include CentOS Stream as a supported operating system. Customers must migrate to and adopt a supported production Linux distribution of RHEL, Ubuntu, or OEL as a base OS for UBA version 5.2.0.
User access requirements
If you are installing Splunk UBA using an AMI image, perform all tasks as the caspida user and use sudo for tasks requiring root-level privileges.
If you are installing Splunk UBA on a supported Linux platform, you must be able to do the following:
- Be able to log in as root, or log in as a different user and use su or sudo to gain root privileges. This is required for preparing the servers prior to installing the Splunk UBA software.
- Create the caspida user with the appropriate privileges. The caspida user is required to install the Splunk UBA software. (See the sketch after this list for one way to create the user.)
- All user and group authentication must be performed locally on each Splunk UBA host. Authenticating users and groups using a centralized controller or user and group management system is not supported.
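The exact privileges that the caspida user needs are covered in the installation instructions. As a minimal sketch only, assuming a local account with passwordless sudo is acceptable in your environment, the user could be created as follows:
# Create a local caspida user with a home directory and bash shell
sudo useradd -m -s /bin/bash caspida
sudo passwd caspida
# Grant sudo access (passwordless here for illustration only; tighten to your policy)
echo 'caspida ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/caspida
sudo chmod 440 /etc/sudoers.d/caspida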
Verify availability of a specific system ID for Impala
Perform the following steps to check whether a user or group other than impala occupies UID 1010 or GID 1010. If one does, migrate that user or group to another ID.
- SSH to the host as the caspida user.
- Verify whether any user or group other than "impala" occupies ID 1010 by running the following command:
id 1010
If the ID is already assigned to a user or group, the command outputs information about that user or group, including its name, group memberships, and more. If the ID is not assigned to any user or group, the command returns an error message indicating that the ID is not found.
- If the ID is assigned to a user, follow these steps to migrate that user to an ID other than 1010.
- Create a backup of the user's home directory and any other files or directories that are owned by the user. Find the files and directories with the following command:
sudo find / -uid 1010 -print
- Run the following command, replacing <new_uid> with a value other than 1010 and replacing the <username> placeholder with the user name retrieved in step 2. Changing the UID of a user can have unintended consequences if the user has ownership of system files or directories. Make a backup of the system before performing this operation.
sudo usermod -u <new_uid> <username>
- Verify that the user's files and directories have been updated. Run the following command to find any files or directories associated with the old UID:
sudo find / -uid 1010 -print
- If any files or directories are found with the old UID, update their ownership to the new UID using the following command:
sudo chown -R <username>:<username> /path/to/directory
- If the ID is assigned to a group, follow these steps to migrate that group to an ID other than 1010.
- Create a backup of any files or directories that are owned by the group. Find the files and directories with the following command:
sudo find / -gid 1010 -print
- Run the following command, replacing <new_gid> with a value other than 1010 and replacing the <group_name> placeholder with the group name retrieved in step 2. Changing the GID of a group can have unintended consequences if the group has ownership of system files or directories. Make a backup of the system before performing this operation.
sudo groupmod -g <new_gid> <group_name>
- Verify that any files and directories owned by the group have been updated. Run the following command to find any files or directories associated with the old GID:
sudo find / -gid 1010 -print
- If any files or directories are found with the GID 1010, update their group ownership to the new GID using the following command:
sudo chgrp -R <group_name> /path/to/directory
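After migrating the user or group, you can confirm that ID 1010 is free for impala with getent; both commands should return no output:
# Verify that no local user or group still holds ID 1010
getent passwd 1010
getent group 1010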
Validate the UMASK value
Ensure the UMASK value of the root user is set to 0002 or 0022, or grant read permissions for newly created files and directories to the caspida user.
Complete the following steps to validate the UMASK value:
- Check the UMASK value of the root user by running the following command. The value must be 0002 or 0022:
umask
- Verify the UMASK value in the /etc/login.defs file:
grep -i "^UMASK" /etc/login.defs
The umask value specified in /etc/login.defs applies as the default for all users.
- Validate the permissions for new files and directories:
- As the caspida user, create a new file and directory using sudo to observe the permissions:
sudo touch testfile.txt
sudo mkdir testdirectory
- Next, check the permissions of the created file and directory. Read permission for the caspida user (as "other") is required:
ls -l testfile.txt
ls -ld testdirectory
To set the required umask value, edit the /etc/login.defs file and set the UMASK value to 022.
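As one way to make that change non-interactively, assuming GNU sed and a single UMASK line in the file, you could run:
# Back up the file, then set the default UMASK to 022
sudo cp /etc/login.defs /etc/login.defs.bak
sudo sed -i 's/^UMASK.*/UMASK           022/' /etc/login.defs
# Confirm the new value
grep -i '^UMASK' /etc/login.defs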
If the caspida user does not have read permissions, update the UMASK value accordingly. Failure to provide the required permission to the caspida user will result in a UBA installation or upgrade failure.
Networking requirements
Perform the following tasks or verify specific information to meet the networking requirements for installing Splunk UBA:
- Assign static IP addresses to Splunk UBA servers
- Inbound networking port requirements
- Splunk platform port requirements
- Modify firewalls and proxies
Assign static IP addresses to Splunk UBA servers
Assign a static IP address to each Splunk UBA server.
Inbound networking port requirements
Splunk UBA requires the following ports to be open to the outside world for other services to interact with Splunk UBA.
Service | Port |
---|---|
SSH | 22 |
HTTPS | 443 |
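On firewalld-based systems (RHEL or OEL), one way to open these externally facing ports is shown below; the public zone is an assumption and should match your environment:
# Allow SSH and HTTPS through the public zone
sudo firewall-cmd --zone=public --add-service=ssh --permanent
sudo firewall-cmd --zone=public --add-port=443/tcp --permanent
sudo firewall-cmd --reload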
Splunk UBA requires the following ports to be open internally among the nodes in a distributed Splunk UBA cluster to allow specific services to interact with each other.
Service | Port |
---|---|
SSH | 22 |
Redis | 6379, 16379 |
PostgreSQL | 5432 |
Zookeeper | 2181, 2888, 3888 |
Apache Kafka | 9092, 9901, 9093 (for Kafka ingestion), 32768 - 65535 (for JMX) |
Job Manager | 9002 |
Time Series Database | 8086 |
Apache Impala | 21000, 21050, 25000, 25010, 25020 |
Apache Spark | 7077, 8080, 8081 |
Hadoop Namenode | 8020 |
Hadoop Namenode WebUI | 9870 |
Hadoop Yarn ResourceManager | 8090 |
Hadoop Data Transfer Port | 9866 |
Hadoop Datanodes | 9867, 9864 |
Hadoop Secondary namenode | 9868 |
Hive Metastore | 9090, 9095 |
Kubernetes/etcd | 2379, 2380, 5000, 6443, 10250, 10251, 10252, 10255, 30000 - 32767 |
For more details on services in Splunk UBA, see Monitoring the health of your Splunk UBA deployment in Administer Splunk User Behavior Analytics.
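After the services are running, you can spot-check that a node is listening on an expected port with ss; PostgreSQL on port 5432 is used here as an example:
# List listening TCP sockets and filter for the PostgreSQL port
sudo ss -tlnp | grep ':5432'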
Splunk platform port requirements
The following ports must be open on the Splunk platform to interact with Splunk UBA:
Service | Port |
---|---|
HTTPS authentication | 443 |
HTTP authentication | 80 |
REST services that enable Splunk UBA to communicate with Splunk search heads | 8089 |
Port used to send alerts to Splunk Enterprise Security (ES) | User-defined (for example, 10008) |
Port used by the Splunk universal forwarder to send UBA host logs to Splunk indexers | 9997 (or other custom ingest port) |
Modify firewalls and proxies
Modify firewalls and proxies to support the inbound and outbound port requirements defined in this document so that requests to internal services do not attempt to travel externally.
- Set the no_proxy environment variable for general HTTP communication between nodes.
- Set the NO_PROXY environment variable for Splunk UBA's time series database (influxdb). Set NO_PROXY to the same values as no_proxy.
Perform the following tasks to configure your firewall and proxy settings:
- If you use an HTTP or HTTPS proxy, exclude localhost and the IP addresses and names of the Splunk UBA servers from the proxy. For example, in a 3-node cluster, add the following configuration to the /etc/environment file (a quick check to confirm the variables are applied appears after this list):
# Proxy host/port for reference. These variables are not used below.
PROXY_IP="1.2.3.4"
PROXY_PORT="3128"
# Set the proxy variables based on the values above. Use both upper and lower case because different services look for different casing.
HTTP_PROXY="http://1.2.3.4:3128"
http_proxy="http://1.2.3.4:3128"
HTTPS_PROXY="https://1.2.3.4:3128"
https_proxy="https://1.2.3.4:3128"
# Exclude loopback addresses from the proxy.
# Note: CIDR ranges aren't supported by older tools, so specify both IP and CIDR.
#
# Proxy values to be set:
# localhost: localhost,127.0.0.1,127.0.1.1,127.0.0.0/8
# UBA containers: 10.96.0.0/12,10.244.0.0/16,172.17.0.1,172.17.0.2,172.17.0.0/16
# Site-specific hosts by short name, FQDN, IP: ubanode01,ubanode01.mydomain.local,10.10.10.1,ubanode02,ubanode02.mydomain.local,10.10.10.2
# Set NO_PROXY and no_proxy
NO_PROXY="localhost,127.0.0.1,127.0.1.1,127.0.0.0/8,10.96.0.0/12,10.244.0.0/16,172.17.0.1,172.17.0.2,172.17.0.0/16,ubanode01,ubanode01.mydomain.local,10.10.10.1,ubanode02,ubanode02.mydomain.local,10.10.10.2,ubanode03,ubanode03.mydomain.local,10.10.10.3"
no_proxy="localhost,127.0.0.1,127.0.1.1,127.0.0.0/8,10.96.0.0/12,10.244.0.0/16,172.17.0.1,172.17.0.2,172.17.0.0/16,ubanode01,ubanode01.mydomain.local,10.10.10.1,ubanode02,ubanode02.mydomain.local,10.10.10.2,ubanode03,ubanode03.mydomain.local,10.10.10.3"
- Use the following commands to stop and restart all Splunk UBA services for the changes in /etc/environment to take effect:
/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
- Verify that the nslookup localhost command returns a 127.x.x.x IP address. For example:
$ nslookup localhost
Server: 10.160.20.4
Address: 10.160.20.4#53
Name: localhost.sv.splunk.com
Address: 127.0.0.1
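To confirm that the proxy variables, including NO_PROXY and no_proxy, are actually picked up by the caspida user's environment, a simple check is:
# Show every proxy-related variable currently set in the environment
env | grep -i proxy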
Configure host name lookups and DNS
Configure your environment so that Splunk UBA can resolve host names properly.
- Configure the name switching service.
- Configure the DNS resolver.
- Verify the network interface configuration.
- Configure local DNS using the /etc/hosts file.
- Verify your name lookup and DNS settings.
Configure the name switching service
The name switching service in Linux environments determines the order in which services are queried for host name lookups. Use cat /etc/nsswitch.conf to verify that your name switching service is using files before DNS. Check the hosts line in the output:
- If you see files dns, it means that /etc/hosts will be queried before checking DNS.
- If you see dns files, it means that DNS will be queried before the /etc/hosts file.
Also make sure myhostname is the last item on the hosts line so that the system can determine its own host name from the local config files.
$ cat /etc/nsswitch.conf
# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.
passwd: compat
group: compat
shadow: compat
gshadow: files
hosts: files dns myhostname
...
Configure the DNS resolver
Some Splunk UBA services use DNS during installation and while the product is running. All nodes in your Splunk UBA deployment must point to the same DNS server. Verify this is the case in the /etc/resolv.conf file on each node. Use the following command to check if /etc/resolv.conf exists on your system:
ls -lH /etc/resolv.conf
If the file does not exist, create the file by performing the following tasks:
- Run the following command:
sudo systemctl enable resolvconf
- Restart the server.
- Run the ls -lH /etc/resolv.conf command again to verify that the /etc/resolv.conf file exists.
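For reference, a minimal /etc/resolv.conf might look like the following; the DNS server address and search domain are examples drawn from elsewhere in this topic and must be replaced with your own values:
# Example /etc/resolv.conf
search mgmt.corp.local
nameserver 10.160.20.4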
Verify the network interface configuration
Verify that the network interface configuration has a dns-search value configured to match your domain, such as mgmt.corp.local. Check the /etc/resolv.conf file to see if search mgmt.corp.local is present so that any shortname lookups for other local nodes are resolved correctly.
- On Ubuntu systems, the configuration is located in /etc/network/interfaces as:
dns-search mgmt.corp.local
- On RHEL and OEL systems, the configuration may be located in /etc/sysconfig/network-scripts/ifcfg-eth0 as:
DOMAIN=mgmt.corp.local
More recent RHEL and OEL systems may use a different, slot-based naming scheme. The exact name may vary depending on your specific environment.
Be consistent with your naming conventions and use either all fully qualified domain names (FQDN) such as host.example.com, or all short names such as host. Do not use FQDNs in some places and short names in others.
Configure local DNS using the /etc/hosts file
Verify that the /etc/hosts file identifies each node in your Splunk UBA cluster using the following format:
<IP address> <FQDN> <short name> <alias>
For example:
192.168.10.1 spluba01.mgmt.corp.local spluba01 ubanode01
192.168.10.2 spluba02.mgmt.corp.local spluba02 ubanode02
192.168.10.3 spluba03.mgmt.corp.local spluba03 ubanode03
192.168.10.4 spluba04.mgmt.corp.local spluba04 ubanode04
192.168.10.5 spluba05.mgmt.corp.local spluba05 ubanode05
In this example, host spluba01 has an IP address of 192.168.10.1 and its FQDN is spluba01.mgmt.corp.local. Anything after the first three fields is considered an alias and is optional. In this example, ubanode01 is used to identify node number 1, ubanode02 is used to identify node number 2, and so on.
If your environment contains both internal and external IP addresses, be sure to use the internal IP address when configuring Splunk UBA. You can use the ip route command to help you determine this.
Formatting your /etc/hosts file this way, in conjunction with using files before DNS in /etc/nsswitch.conf, means that both short names and FQDNs can be obtained without any DNS lookups.
If you choose not to include the FQDN in the /etc/hosts file, you must add the domain name to the /etc/resolv.conf file in order for DNS to work properly in your environment.
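One quick, optional way to confirm that lookups are being answered from /etc/hosts is getent, which honors the nsswitch.conf ordering; the host names below come from the example above:
# Both short name and FQDN should resolve to the address from /etc/hosts
getent hosts spluba01
getent hosts spluba01.mgmt.corp.local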
Verify your name lookup and DNS settings
Test your name lookup and DNS settings to make sure you get the expected output.
- Use various hostname commands and verify the expected output. For example, from the spluba01.mgmt.corp.local node:
$ hostname
spluba01
$ hostname -s
spluba01
$ hostname --fqdn
spluba01.mgmt.corp.local
- Use the ping <short name> command from each Splunk UBA node to all other Splunk UBA nodes and verify that all nodes can be reached.
- Use the ping <FQDN> command from each Splunk UBA node to all other Splunk UBA nodes and verify that all nodes can be reached.
Supported web browsers
Open Splunk UBA in the latest versions of any of the following web browsers. Splunk UBA does not support other web browsers, such as Internet Explorer.
- Mozilla Firefox
- Google Chrome
- Apple Safari
Supported single sign-on identity providers
Splunk UBA supports single sign-on integration with the following identity providers:
- Ping Identity
- Okta
- Microsoft ADFS
- OneLogin
See Configure authentication using single sign-on in Administer Splunk User Behavior Analytics.
Requirements for connecting to and getting data from the Splunk platform
To send data from Splunk platform to Splunk UBA, you must have specific Splunk platform versions and a properly configured user account. See Splunk UBA product compatibility matrix in the Plan and scale your Splunk UBA Deployment manual.
Red Hat Enterprise Linux 8.x cryptographic policies
When installed on RHEL 8.x operating systems, Splunk UBA uses a 2048 bit RSA encryption key. The Splunk platform that communicates with Splunk UBA must also use a 2048 bit encryption key. If the Splunk platform uses a 1024 bit encryption key, you will see the following error in the job executor logs:
Caused by: java.security.cert.CertPathValidatorException: Algorithm constraints check failed on keysize limits. RSA 1024bit key used with certificate
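One way to check the key size of the certificate that a Splunk instance presents, assuming openssl is available, is shown below; the host name is a placeholder and 8089 is the Splunk management port:
# Print the public key size of the certificate served on the Splunk management port
openssl s_client -connect splunk-sh.example.com:8089 </dev/null 2>/dev/null | \
  openssl x509 -noout -text | grep 'Public-Key'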
To resolve this issue, do any one of the following:
- Use a stronger certificate of at least 2048 bits in length on the Splunk platform.
- Set the java.security.disableSystemPropertiesFile property to true in the $JAVA_HOME/jre/lib/security/java.security file in Splunk UBA.
java.security.disableSystemPropertiesFile=true
- Perform the following steps to reduce the strength of the RHEL 8 cryptographic policies in Splunk UBA.
- Run the following command on the Splunk UBA management node:
update-crypto-policies --set LEGACY
- Run the following commands on the Splunk UBA management node to stop and restart Splunk UBA:
/opt/caspida/bin/Caspida stop-all
/opt/caspida/bin/Caspida start-all
- Run the following command on the Splunk UBA management node:
Requirements for the Splunk Enterprise user account
If you create a custom role, and the user in that role handles information related to data sources, make sure that the custom role has the edit_roles or admin_all_objects permission. These permissions allow the custom role user to drill down into event queries.
Verify that you have a Splunk Enterprise user account with:
- Capabilities to perform real-time search, perform REST API calls, and access the data. The admin role in Splunk Enterprise has the required capabilities by default. If you use a different role, you need the rt_search, edit_forwarders, list_forwarders, and edit_uba_settings capabilities. Add these capabilities to a role in Splunk Web. See Add and edit roles with Splunk Web in Securing Splunk Enterprise.
- Configure the search job limits for the Splunk Enterprise user account and role so that they are twice the maximum number of allowed data sources for your deployment.
Size of cluster | Max number of data sources | User-level concurrent search job limit | User-level concurrent real-time search job limit | Role-level concurrent search job limit | Role-level concurrent real-time search job limit |
---|---|---|---|---|---|
1 node | 6 | 12 | 12 | 12 | 12 |
3 nodes | 10 | 20 | 20 | 20 | 20 |
5 nodes | 12 | 24 | 24 | 24 | 24 |
7 nodes | 24 | 48 | 48 | 48 | 48 |
10 nodes | 32 | 64 | 64 | 64 | 64 |
20 nodes | 64 | 128 | 128 | 128 | 128 |
- Configure the Splunk Enterprise user account to have sufficient disk usage quota (for example, 40GB). (See the sketch after this list.)
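As a hypothetical illustration of where these limits live on the Splunk platform side, the role-level limits map to settings in authorize.conf on the search head; the role name below is made up, and the values correspond to a 5-node deployment from the table above:
# authorize.conf on the search head (hypothetical role name)
[role_uba_service]
srchJobsQuota = 24
rtSrchJobsQuota = 24
# Disk usage quota in MB (roughly 40GB)
srchDiskQuota = 40000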
Send data to and receive data from Splunk Enterprise Security
To send and receive data from Splunk Enterprise Security, you must have the Splunk add-on for Splunk UBA installed and enabled on your search head with the ueba index deployed to your indexers. See Splunk UBA product compatibility matrix in the Plan and Scale your Splunk UBA Deployment manual for information about version compatibility among products.
Splunk Cloud customers must contact Splunk Support to fully integrate with Splunk UBA. The Splunk Cloud admin role cannot perform Splunk UBA setup.
Send data from Splunk Enterprise directly to Kafka in Splunk UBA
Use the Splunk UBA Kafka Ingestion App to send data from large data sets in Splunk Enterprise directly to Kafka in Splunk UBA. Sending data directly to Kafka offloads the processing task from the search heads to the indexers. See Requirements for Kafka data ingestion in the Splunk UBA Kafka Ingestion App manual.
Monitor Splunk UBA directly from Splunk Enterprise
Use the Splunk UBA Monitoring App to monitor the health of Splunk UBA and investigate Splunk UBA issues directly from Splunk Enterprise. See Splunk UBA Monitoring app requirements in the Splunk UBA Monitoring App manual.
Installing Splunk UBA in environments with no Internet access
Some environments require Splunk UBA to be installed without access to the Internet. In such cases, the functionality of Splunk UBA will be limited in the following areas:
- Splunk UBA pages that normally show visual geographical location information about a device will show warnings that the Google Maps API cannot be reached. Perform the following tasks to disable Splunk UBA from using geographical location and displaying the warning:
- In Splunk UBA, select Manage > Settings.
- Select Geo Location.
- Deselect the checkbox in the Show Geo Maps field.
- Click OK.
- Clicking the Learn more link on any Splunk UBA page will open a new tab with a link to quickdraw.splunk.com. This is the URL used to generate the correct help link to the Splunk UBA documentation.