Splunk® User Behavior Analytics

Install and Upgrade Splunk User Behavior Analytics


Install Splunk UBA on several Linux servers

Install Splunk UBA on several servers with Oracle Enterprise Linux (OEL), Red Hat Enterprise Linux (RHEL), or CentOS installed.

Follow these instructions to perform a bare metal installation of Splunk UBA 5.0.0 for the first time. If you already have Splunk UBA, do not follow the instructions on this page. Instead, follow the appropriate upgrade instructions to obtain your desired release. See How to install or upgrade to this release of Splunk UBA.

Prerequisites for installing Splunk UBA on several Linux servers

  • You must install Splunk UBA on a server that is running a supported operating system. See Operating system requirements.
  • Make sure your Red Hat Enterprise Linux license includes the proper subscription names. See Additional RHEL requirements.
  • Determine the network interface your system uses, for example eth0 or en0. You will need this information later in the installation process. See the example after this list.
  • The yum-config-manager command must be available on your system. If it is not, install the yum-utils package by running the following command:
    yum install yum-utils
  • The firewalld package must be installed on your system. Use firewall-cmd --state or systemctl status firewalld to check if firewalld is installed. Use the following command to install firewalld if you don't have it:
    yum install firewalld
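
To identify the network interface, commands like the following can help. This is a convenience sketch, not part of the official procedure, and output formats vary by distribution:

    # List the interfaces that are up, with their assigned addresses.
    ip -o addr show up

    # Show which interface carries the default route; that interface is
    # usually the one to note for the installation.
    ip route show default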

Configure permissions for and prepare the caspida user

Enable sudo permissions for the caspida user. A quick verification sketch follows these steps.

  1. Edit the /etc/sudoers file.
  2. If the line Defaults requiretty exists, comment it out.
  3. Add the following lines at the end of the /etc/sudoers file.
    caspida ALL=(ALL) NOPASSWD:ALL
    Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin
    
    The /etc/sudoers file is read sequentially, so placing these lines at the end ensures that there is no impact to the caspida user from any existing accounts or group permissions.
  4. Add the caspida user to each system. The caspida user must have the same UID and GID on all of the systems, so pick a UID and GID that are available everywhere. For example, assuming UID and GID 2018 are available on all nodes:
    groupadd --gid 2018 caspida
    useradd --uid 2018 --gid 2018 -m -d /home/caspida -c "Caspida User" -s /bin/bash caspida
    
  5. Set the password for the caspida user:
    passwd caspida
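
As a quick sanity check (a sketch, not part of the official procedure), you can confirm as root that the sudoers entry took effect:

    # List the sudo privileges granted to the caspida user.
    sudo -l -U caspida

    # Confirm passwordless sudo; the -n flag fails instead of prompting.
    su - caspida -c 'sudo -n whoami'    # expected output: root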
    

Obtain the installation package

Download the following Splunk UBA software and RHEL packages.

Splunk UBA 5.0.0 requires files from the Splunk UBA 5.0.5 installation package in order to complete the installation on RHEL, OEL, or CentOS 7.8 or later. Follow the installation instructions carefully and make sure you do not skip the steps to obtain and extract files from the Splunk UBA 5.0.5 installation package. At the end of the installation, you will be running Splunk UBA 5.0.0. You can then upgrade to the appropriate Splunk UBA version.

  1. Obtain the Splunk UBA 5.0.0 software:
    1. Go to the Splunk UBA RHEL 7.x Software for Bare Metal Installation page on Splunkbase.
    2. Download the file to the /home/caspida directory. The name of the package is splunk-uba-rhel-7x-software-for-bare-metal-installation_50.tgz.
  2. Obtain the Splunk UBA 5.0.5 software:
    1. Go to the Splunk UBA Software Update page on Splunkbase.
    2. Select 5.0.5 from the Version drop-down list.
    3. Download the file to the /home/caspida directory. The name of the archive file is splunk-uba-software-update_505.tgz.

Use these packages for all supported Linux environments. Although the package names contain RHEL, the packages also work in OEL and CentOS environments.
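
Before continuing, you can optionally confirm that both archives are present and intact. The tar -t option lists an archive's contents without extracting it and fails if the file is truncated or corrupt:

    ls -lh /home/caspida/splunk-uba-rhel-7x-software-for-bare-metal-installation_50.tgz \
           /home/caspida/splunk-uba-software-update_505.tgz
    tar -tzf /home/caspida/splunk-uba-rhel-7x-software-for-bare-metal-installation_50.tgz > /dev/null && echo OK
    tar -tzf /home/caspida/splunk-uba-software-update_505.tgz > /dev/null && echo OK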

Prepare all servers in your distributed Linux environment

Perform these steps on every server node in the distributed deployment. A spot-check sketch follows these steps.

  1. From the command line, log in to the server as the root user, or log in as a different user then use su or sudo to gain root user privileges.
  2. Find the additional 1TB disk or disks using the fdisk -l command. The nodes that are running Spark services should have two 1TB disks. See Disk space and memory requirements for a summary of where Spark is running per deployment.
    For example, you may see two disks named /dev/sdb and /dev/sdc.
  3. Partition and format each disk found in step 2.
    1. Partition the /dev/sdb disk using the following series of commands. Verify that the align-check opt 1 command returns 1 aligned.
      parted -a optimal /dev/sdb
        mklabel gpt
        mkpart primary ext4 2048s 100%
        align-check opt 1
        quit
      
    2. Format the partition using the mkfs command.
      mkfs -t ext4 /dev/sdb1
    3. On nodes that have a second disk, repeat the commands to partition /dev/sdc:
      parted -a optimal /dev/sdc
        mklabel gpt
        mkpart primary ext4 2048s 100%
        align-check opt 1
        quit
      
    4. Format the partition using the mkfs command. When prompted, confirm that you want to continue.
      mkfs -t ext4 /dev/sdc1
  4. Get the block ID for each disk using the blkid command. For example, to get the block IDs for /dev/sdb1 and /dev/sdc1 in our example:
    blkid -o value -s UUID /dev/sdb1
    blkid -o value -s UUID /dev/sdc1
    
    An example block ID might be: 5c00b211-e751-4661-91c4-60d9f9315857.
  5. Create the new /var/vcap directory, and on nodes with two disks, /var/vcap2 as well. For example, on a node with a single 1TB disk:
    mkdir -p /var/vcap

    Or on a node with two 1TB disks:

    mkdir -p /var/vcap /var/vcap2
  6. Add the block ID for the /var/vcap partition to the /etc/fstab file. For example, on a node with a single 1TB disk:
    UUID=e1af8814-9b12-4c69-a947-18af370c7dd1 /var/vcap  ext4  defaults  0 0
    

    On a node with two 1TB disks:

    UUID=e1af8814-9b12-4c69-a947-18af370c7dd1 /var/vcap  ext4  defaults  0 0
    UUID=f142f182-27c6-4002-b0bb-941fbedce17d /var/vcap2  ext4  defaults  0 0
    
  7. Mount the file systems using the mount -a command.
  8. Verify that the 1TB disks are mounted correctly using the df -h command. For example:
    root# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    ...
    /dev/sdc1       493G   77M  467G   1% /var/vcap2
    /dev/sdb1       985G   43G  892G   5% /var/vcap
    ...
    
  9. Set ownership and permissions on the mount points so they belong to the root user. On a node with a single 1TB disk:
    chmod 755 /var/vcap
    chown root:root /var/vcap
    

    Or on a node with two 1TB disks:

    chmod 755 /var/vcap /var/vcap2
    chown root:root /var/vcap /var/vcap2
    
  10. Make a directory for the Caspida software packages.

    This directory must be different from the caspida home directory (/home/caspida).

    mkdir /opt/caspida
    chown caspida:caspida /opt/caspida
    chmod 755 /opt/caspida
  11. Set the following variables in the /etc/locale.conf file. The locale.conf file takes plain KEY=value assignments rather than shell export statements:
    LC_ALL="en_US.UTF-8"
    LC_CTYPE="en_US.UTF-8"
    
  12. If your environment contains both internal and external IP addresses, be sure to use the internal IP address when configuring Splunk UBA. You can use the ip route command to help you determine this.
  13. Verify that the host name resolves using the nslookup <host name> command. If it does not, verify your host name lookup and DNS settings. See Configure host name lookups and DNS. If the nslookup command is not available, install bind-utils:
    yum install bind-utils
  14. Verify that the system date, time and time zone are correct using the timedatectl command, as shown below. The time zone in Splunk UBA should match the time zone configured in Splunk Enterprise.
    root# timedatectl status
          Local time: Mon 2019-04-08 14:30:02 UTC
      Universal time: Mon 2019-04-08 14:30:02 UTC
            RTC time: Mon 2019-04-08 14:30:01
           Time zone: UTC (UTC, +0000)
         NTP enabled: yes
    NTP synchronized: yes
     RTC in local TZ: no
          DST active: n/a
    

    Use the timedatectl command to change the time zone. For example, to change the time zone to UTC:

    timedatectl set-timezone UTC
    Refer to the documentation for your specific operating system to configure NTP synchronization. Use the ntpq -p command to verify that NTP is pointing to the desired server.
  15. Modify /etc/sysconfig/selinux and set SELINUX=permissive.
    With SELINUX set to enforcing, certain actions during installation and upgrade (for example, access to particular files) can be blocked. Setting SELINUX to permissive gives Splunk UBA the access it needs; violations are logged in the audit logs instead of being blocked.
  16. Verify that /proc/sys/net/bridge/bridge-nf-call-iptables exists on your system and the content of bridge-nf-call-iptables is 1. Run the following command to verify:
    cat /proc/sys/net/bridge/bridge-nf-call-iptables
    Your situation: /proc/sys/net/bridge/bridge-nf-call-iptables exists on your system and the content is 1.
    Take this action:
    1. Run the following command to make sure this setting is preserved through any reboot operations:
      echo net.bridge.bridge-nf-call-iptables=1 > /etc/sysctl.d/splunkuba-bridge.conf
    2. Go to Step 17.

    Your situation: /proc/sys/net/bridge/bridge-nf-call-iptables exists on your system but the content is not 1.
    Take this action:
    1. Run the following command to set the content of bridge-nf-call-iptables:
      sysctl -w net.bridge.bridge-nf-call-iptables=1
    2. Run the following command to ensure that the setting persists through any reboot operations:
      echo net.bridge.bridge-nf-call-iptables=1 > /etc/sysctl.d/splunkuba-bridge.conf
    3. Go to Step 17.

    Your situation: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist on your system.
    Take this action:
    1. Run the following commands to load the br_netfilter module and ensure that it is loaded on reboot:
      modprobe br_netfilter
      echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
    2. Run the following command to create and set the content of bridge-nf-call-iptables:
      sysctl -w net.bridge.bridge-nf-call-iptables=1
    3. Run the following command to ensure that the setting persists through any reboot operations:
      echo net.bridge.bridge-nf-call-iptables=1 > /etc/sysctl.d/splunkuba-bridge.conf
    4. Go to Step 17.
  17. Check the permissions of /etc/sysctl.d/splunkuba-bridge.conf and verify that it is readable by the caspida user. For example:
    [caspida@ubanode1 ~]$ ls -l /etc/sysctl.d/splunkuba-bridge.conf
    -rw-r--r--. 1 root root 37 Aug 18 2020 /etc/sysctl.d/splunkuba-bridge.conf
    
  18. Verify that IPv6 drivers are available. To do this, check that /proc/sys/net/ipv6/ exists. For example:
    root# ls -l /proc/sys/net/ipv6/
    total 0
    -rw-r--r-- 1 root root 0 Mar 12 16:52 anycast_src_echo_reply
    -rw-r--r-- 1 root root 0 Mar 12 16:52 auto_flowlabels
    -rw-r--r-- 1 root root 0 Mar 12 16:52 bindv6only
    dr-xr-xr-x 1 root root 0 Mar 12 16:52 conf
    -rw-r--r-- 1 root root 0 Mar 12 16:52 flowlabel_consistency
    -rw-r--r-- 1 root root 0 Mar 12 16:52 flowlabel_state_ranges
    -rw-r--r-- 1 root root 0 Mar 12 16:52 fwmark_reflect
    dr-xr-xr-x 1 root root 0 Mar 12 16:52 icmp
    -rw-r--r-- 1 root root 0 Mar 12 16:52 idgen_delay
    -rw-r--r-- 1 root root 0 Mar 12 16:52 idgen_retries
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip6frag_high_thresh
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip6frag_low_thresh
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip6frag_secret_interval
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip6frag_time
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip_nonlocal_bind
    -rw-r--r-- 1 root root 0 Mar 12 16:52 mld_max_msf
    -rw-r--r-- 1 root root 0 Mar 12 16:52 mld_qrv
    dr-xr-xr-x 1 root root 0 Mar 12 16:52 neigh
    dr-xr-xr-x 1 root root 0 Mar 12 16:52 route
    -rw-r--r-- 1 root root 0 Mar 12 16:52 xfrm6_gc_thresh
    

    If the IPv6 drivers exist, skip to the next step.


    If IPv6 drivers do not exist on your system, check whether /etc/default/grub contains ipv6.disable=1; the IPv6 drivers are not available on a system where that setting is present. If ipv6.disable=1 is not present in /etc/default/grub and the IPv6 drivers still do not exist, consult your system or network administrators. You cannot continue with the installation until the drivers are available.


    If /etc/default/grub contains ipv6.disable=1, perform the following tasks as root:

    1. Remove ipv6.disable=1 from /etc/default/grub.
    2. Recreate the grub config:
      find /boot -name grub.cfg -exec grub2-mkconfig -o '{}' \;
    3. Reboot the machines. After the system comes up, make sure /proc/sys/net/ipv6 exists.

    To disable IPv6 functionality for security, networking or performance reasons, create the /etc/sysctl.d/splunkuba-ipv6.conf file as root. This file should contain the following content:

    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    net.ipv6.conf.lo.disable_ipv6 = 1
    
    This procedure keeps the IPv6 drivers but disables the IPv6 addressing.
  19. Create the /etc/security/limits.d/caspida.conf file and add the following security limits for the caspida user to this file:
    caspida soft nproc unlimited
    caspida soft nofile 32768
    caspida hard nofile 32768
    caspida soft core unlimited
    caspida soft stack unlimited
    caspida soft memlock unlimited
    caspida hard memlock unlimited
    

    Make sure the root account does not have any security limits.

  20. If you are not using IPv6 on your network, edit the /etc/yum.conf file and add the following entry so that only IPv4 addresses are used by yum/rpm:
    ip_resolve=4
  21. Update to the latest kernel. See Operating system requirements for the specific kernel versions.
    Operating system: OEL 7.7, 7.8, or 7.9. Perform all of the following tasks on each Splunk UBA node:
    1. Run the following command to install yum-utils:
      sudo yum install yum-utils -y
    2. Run the following commands to skip certain packages if they are not available:
      sudo yum-config-manager --disable pgdg94
      sudo yum-config-manager --disable nodesource
      sudo yum-config-manager --disable rhel-7-server-rt-beta-rpms
      
    3. Download http://yum.oracle.com/public-yum-ol7.repo to /home/caspida:
      sudo wget -P /home/caspida http://yum.oracle.com/public-yum-ol7.repo
    4. Copy public-yum-ol7.repo to /etc/yum.repos.d:
      sudo cp /home/caspida/public-yum-ol7.repo /etc/yum.repos.d/public-yum-ol7.repo
    5. Enable the repositories:
      sudo yum-config-manager --enable ol7_UEKR5
      sudo yum-config-manager --enable ol7_addons
      
    6. Update the kernel to the latest versions.
      • Run the following command if you are using OEL 7.7:
        sudo yum update --releasever=7.7 --exclude="zookeeper redis-server redis-tools influxdb nodejs nodejs-docs postgres*" -y
      • Run the following command if you are using OEL 7.8:
        sudo yum update --releasever=7.8 --exclude="zookeeper redis-server redis-tools influxdb nodejs nodejs-docs postgres*" -y
      • Run the following command if you are using OEL 7.9:
        sudo yum update --releasever=7.9 --exclude="zookeeper redis-server redis-tools influxdb nodejs nodejs-docs postgres*" -y
    Operating system: RHEL 7.7, 7.8, or 7.9. Perform all of the following tasks on each Splunk UBA node:
    1. Enable the repos for required packages:
      • Run the following commands if you are using RHEL 7.7:
        sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
        sudo subscription-manager repos --enable=rhel-7-server-eus-rpms
        sudo subscription-manager repos --enable=rhel-7-server-rpms
        sudo subscription-manager repos --enable=rhel-7-server-optional-rpms
        
      • Run the following commands if you are using RHEL 7.8 or 7.9:
        sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
        sudo subscription-manager repos --enable=rhel-7-server-rpms
        sudo subscription-manager repos --enable=rhel-7-server-optional-rpms
        
    2. Run the following command to install yum-utils:
      sudo yum install yum-utils -y
    3. Run the following commands to skip certain packages if they are not available:
      sudo yum-config-manager --disable pgdg94
      sudo yum-config-manager --disable nodesource
      sudo yum-config-manager --disable rhel-7-server-rt-beta-rpms
      
    4. Update the kernel to the latest versions.
      • Run the following command if you are using RHEL 7.7:
        sudo yum update --releasever=7.7 --exclude="zookeeper redis-server redis-tools influxdb nodejs nodejs-docs postgres*" -y
      • Run the following command if you are using RHEL 7.8:
        sudo yum update --releasever=7.8 --exclude="zookeeper redis-server redis-tools influxdb nodejs nodejs-docs postgres*" -y
      • Run the following command if you are using RHEL 7.9:
        sudo yum update --releasever=7.9 --exclude="zookeeper redis-server redis-tools influxdb nodejs nodejs-docs postgres*" -y
    Operating system: CentOS (latest kernel). Run the following command to update your CentOS kernel to the latest available version. You cannot select a specific version, so run this command only if you are sure you want the latest available version:

    sudo yum update -y
  22. If you have any firewall configuration enabled, disable the configuration and verify that port 9002 is open. Run the following command:
    systemctl disable firewalld
    You can re-enable your firewall settings after the setup is complete.
  23. Restart the system.
    init 6
  24. After the system restarts, use the following command to verify that the host name matches your host name lookup and DNS settings. See Configure host name lookups and DNS.
    hostname --fqdn
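
The preparation steps above touch many independent settings. The following spot check, run as root on each node, can catch the most common misconfigurations before you continue. This is a convenience sketch based on the steps above, not part of the official procedure; adjust the paths if your layout differs:

    df -h /var/vcap                                    # data disk is mounted
    grep vcap /etc/fstab                               # mounts persist across reboots
    getenforce                                         # expect "Permissive"
    cat /proc/sys/net/bridge/bridge-nf-call-iptables   # expect "1"
    ls -d /proc/sys/net/ipv6                           # IPv6 drivers are present
    su - caspida -c 'ulimit -n'                        # expect "32768"
    systemctl is-enabled firewalld                     # expect "disabled"
    hostname --fqdn                                    # must match your DNS settings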

Install Splunk UBA on each Linux server

Perform these steps on every server node in the distributed deployment to install Splunk UBA. If you are running the commands from the /home/caspida directory, you can omit the /home/caspida portion of the commands.

  1. Log in to the command line as the caspida user using SSH.
  2. Verify that the caspida user has a umask of 0022 or 0002.
    umask
    If the returned value is not 0022 or 0002, edit the ~/.bash_profile and ~/.bashrc files and append umask 0022 to each.
  3. Verify that the splunk-uba-rhel-7x-software-for-bare-metal-installation_50.tgz and splunk-uba-software-update_505.tgz files that you downloaded in Obtain the installation package are in the /home/caspida directory. If not, copy the files to the /home/caspida directory.
  4. Untar the file for Splunk UBA RHEL Software for Bare Metal Installation in the /home/caspida directory.
    tar xvzf /home/caspida/splunk-uba-rhel-7x-software-for-bare-metal-installation_50.tgz
  5. Run the following command to untar the Splunk UBA platform software to the /opt/caspida directory:
    tar xvzf /home/caspida/splunk-uba-rhel-7x-software-for-bare-metal-installation_50/Splunk-UBA-Platform-5.0.0-20191015-000100.tgz -C /opt/caspida/
  6. Follow the instructions for your operating system to untar the Splunk UBA packages to the /home/caspida directory.
    Operating system: CentOS

    Run the following command to check the operating system version:

    cat /etc/centos-release
    • If your CentOS version is earlier than 7.8, run the following command to untar the Splunk UBA packages.
      tar xvzf /home/caspida/splunk-uba-rhel-7x-software-for-bare-metal-installation_50/Splunk-UBA-5.0-Packages-RHEL-7.7.tgz -C /home/caspida/
    • If your CentOS version is 7.8 or later, run the following commands to untar the Splunk UBA packages.
      tar xvzf /home/caspida/splunk-uba-rhel-7x-software-for-bare-metal-installation_50/Splunk-UBA-5.0-Packages-RHEL-7.7.tgz -C /home/caspida/
      tar xzvf /home/caspida/splunk-uba-software-update_505.tgz
      tar xvzf /home/caspida/Splunk-UBA-5.0-Overlay-Packages-RHEL-7.9.tgz -C /home/caspida/Splunk-UBA-5.0-Packages-RHEL-7.7
      
    Operating system: OEL

    Run the following command to check the operating system version:

    cat /etc/oracle-release
    • If your OEL version is earlier than 7.8, run the following command to untar the Splunk UBA packages.
      tar xvzf /home/caspida/splunk-uba-rhel-7x-software-for-bare-metal-installation_50/Splunk-UBA-5.0-Packages-RHEL-7.7.tgz -C /home/caspida/
    • If your OEL version is 7.8 or later, run the following commands to untar the Splunk UBA packages.
      tar xvzf /home/caspida/splunk-uba-rhel-7x-software-for-bare-metal-installation_50/Splunk-UBA-5.0-Packages-RHEL-7.7.tgz -C /home/caspida/
      tar xzvf /home/caspida/splunk-uba-software-update_505.tgz
      tar xvzf /home/caspida/Splunk-UBA-5.0-Overlay-Packages-RHEL-7.9.tgz -C /home/caspida/Splunk-UBA-5.0-Packages-RHEL-7.7
      
    Operating system: RHEL

    Run the following command to check the operating system version:

    cat /etc/redhat-release
    • If your RHEL version is earlier than 7.8, run the following command to untar the Splunk UBA packages.
      tar xvzf /home/caspida/splunk-uba-rhel-7x-software-for-bare-metal-installation_50/Splunk-UBA-5.0-Packages-RHEL-7.7.tgz -C /home/caspida/
    • If your RHEL version is 7.8 or later, run the following commands to untar the Splunk UBA packages.
      tar xvzf /home/caspida/splunk-uba-rhel-7x-software-for-bare-metal-installation_50/Splunk-UBA-5.0-Packages-RHEL-7.7.tgz -C /home/caspida/
      tar xzvf /home/caspida/splunk-uba-software-update_505.tgz
      tar xvzf /home/caspida/Splunk-UBA-5.0-Overlay-Packages-RHEL-7.9.tgz -C /home/caspida/Splunk-UBA-5.0-Packages-RHEL-7.7
      

    The overlay package contains missing dependencies required to complete the installation for Splunk UBA 5.0.0. When you complete the installation process, you will be running Splunk UBA 5.0.0. You can then upgrade to the desired 5.0.x version.

  7. Run the installation script.
    /opt/caspida/bin/installer/redhat/INSTALL.sh /home/caspida/Splunk-UBA-5.0-Packages-RHEL-7.7
    
    The log file is /var/log/caspida/install.log.
  8. Generate SSH keys using the ssh-keygen -t rsa command. Press enter for all the prompts and accept all default values. For example:
    [caspida@ubahost-001]$ ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/caspida/.ssh/id_rsa):
    Created directory '/home/caspida/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/caspida/.ssh/id_rsa.
    Your public key has been saved in /home/caspida/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:Ohe1oSpUtNT8siJzvn2lFLrHmVH7JGKke+c/5NRFb/g caspida@ubahost-001
    
  9. Add the public key to the server's authorized_keys file and restrict the file's permissions:
    cat /home/caspida/.ssh/id_rsa.pub >> /home/caspida/.ssh/authorized_keys
    chmod 600 /home/caspida/.ssh/authorized_keys
  10. Copy the SSH public key from /home/caspida/.ssh/id_rsa.pub on every server into the /home/caspida/.ssh/authorized_keys file on every server in the distributed deployment. After this is complete, the authorized_keys file on each server contains the SSH keys of every server in the deployment. See the sketch after these steps for one way to automate this.
  11. Verify that the SSH connections do not require a password. Connect to each server over SSH using the host name or internal IP to create trusted connections between the servers. After you confirm that the connection does not require a password, use exit to terminate the SSH connection. You must complete this step before continuing with setup.
    ssh <node1>; exit
    ssh <node2>; exit
    ssh <node3>; exit
  12. When prompted, confirm that you want to continue. The output will look similar to the following:
    [caspida@ubahost-001]$ ssh uba1
    The authenticity of host 'uba1 (172.31.35.204)' can't be established.
    ECDSA key fingerprint is af:12:54:60:f5:36:c2:36:9d:56:b2:52:9f:cb:73:bc.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'uba1,172.31.35.204' (ECDSA) to the list of known hosts.
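
In larger deployments, copying the keys in step 10 by hand can be tedious. If the ssh-copy-id utility is available, a loop like the following appends this node's public key to the authorized_keys file on every node. This is a sketch; the host names are placeholders for your own node names. Run it as the caspida user on each node, entering the caspida password when prompted, so that every key ends up on every server:

    for node in uba-node01 uba-node02 uba-node03; do
        ssh-copy-id -i /home/caspida/.ssh/id_rsa.pub caspida@"$node"
    done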
    

Complete the distributed Linux Splunk UBA installation on the management server

After completing the previous steps on all server nodes in the distributed deployment, perform the following steps on the management server. For example, uba-node01.

  1. Check the system status with the uba_pre_check.sh shell script. You must specify the host name of each Splunk UBA node in the command, separated by spaces. For example, run the following command in a 3-node deployment and be sure to replace <node1> <node2> <node3> with the actual host names of your Splunk UBA nodes.
    /opt/caspida/bin/utils/uba_pre_check.sh <node1> <node2> <node3>
    See Check system status before and after installation for more information about the script.
  2. Run the setup script.
    /opt/caspida/bin/Caspida setup
    1. When prompted, accept the license agreement and confirm removal of existing metadata.
    2. When prompted, type a comma-separated list of host names for your distributed installation. For example, specify the following in a 3-node deployment and be sure to replace <node1> <node2> <node3> with the actual host names of your Splunk UBA nodes:
      <node1>,<node2>,<node3>
    3. When prompted, confirm that you want to continue setting up Splunk UBA.
    4. The log file is /var/log/caspida/caspida.out.
  3. After setup completes:
    1. Open a web browser and log in to the public IP address with the default admin credentials to confirm a successful installation. The default username is admin and password is changeme. See Secure the default account after installing Splunk UBA for information about the default accounts provided with Splunk UBA and how to secure them.
    2. See Verify successful installation for more information about verifying a successful installation. A command-line status sketch follows these steps.
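
If you want a command-line confirmation in addition to the UI login, the Splunk UBA control script can report the status of the platform services. The exact output varies by version:

    /opt/caspida/bin/Caspida status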