Splunk® User Behavior Analytics

Install and Upgrade Splunk User Behavior Analytics



This documentation does not apply to the most recent version of Splunk® User Behavior Analytics. For documentation on the most recent version, go to the latest release.

Install Splunk UBA on several VMware virtual machines


Follow these instructions to install Splunk UBA 5.0.0 or 5.0.3 for the first time using the OVA image. If you already have Splunk UBA, do not follow the instructions on this page. Instead, follow the appropriate upgrade instructions to obtain your desired release. See How to install or upgrade to this release of Splunk UBA.

Prerequisites for installing Splunk UBA on several VMware virtual machines

  • Each server in the deployment must have a second 1TB disk.
  • All Spark nodes must have an additional 1TB disk.
  • All servers must be synced to the same Network Time Protocol (NTP) server.
  • The caspida user must be able to perform passwordless SSH to each UBA server in the deployment.
  • All ports on each UBA node must be open for inter-cluster communication between the nodes.
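Once passwordless SSH is configured (see the setup section later on this page), the NTP prerequisite can be spot-checked from the management server. A minimal dry-run sketch, assuming hypothetical host names uba1–uba3.example.com; it prints one check command per node rather than executing anything:

```shell
#!/bin/sh
# Pre-flight sketch for the prerequisites above. The host names below are
# hypothetical -- replace them with the actual servers in your deployment.
NODES="uba1.example.com uba2.example.com uba3.example.com"

# Build one check per node: "-o BatchMode=yes" makes ssh fail instead of
# prompting, so the command succeeds only if passwordless SSH works, and
# "timedatectl status" shows whether the node is NTP-synchronized.
# This is a dry run: the commands are printed, not executed.
CHECKS=""
for node in $NODES; do
  CHECKS="${CHECKS}ssh -o BatchMode=yes caspida@${node} timedatectl status
"
done
printf '%s' "$CHECKS"
```

Running each printed line by hand verifies both passwordless SSH (BatchMode makes ssh fail rather than prompt for a password) and NTP synchronization on that node.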

Prepare all servers in your distributed VMware environment

Perform these steps for each server in a distributed deployment.

  1. Download the Splunk UBA open virtual appliance (OVA) from Splunkbase. See Splunk UBA OVA Software.
  2. Deploy the Splunk UBA OVA on your virtual machine.
  3. Provision the virtual machine with two disks, one with 50GB of disk space and the other with 1TB of disk space. All Spark nodes must have a third disk with 1TB of disk space.
  4. Log in to the virtual machine as the caspida user using SSH, with the default password caspida123. You are prompted to enter the default password a second time, and then to change it. For example:
    ssh caspida@ubahost-001.example.com
    caspida@ubahost-001.example.com's password: 
    You are required to change your password immediately (root enforced)
    Changing password for caspida.
    (current) UNIX password:
    Enter new UNIX password:
    Retype new UNIX password:
    caspida$
    
    After changing the password you may be logged out. Log in to the virtual machine again using your new credentials.
  5. Verify that the system date, time and time zone are correct using the timedatectl command, as shown below. The time zone in Splunk UBA must match the time zone configured in Splunk Enterprise.
    caspida@ubahost-001$ timedatectl status
          Local time: Mon 2019-04-08 14:30:02 UTC
      Universal time: Mon 2019-04-08 14:30:02 UTC
            RTC time: Mon 2019-04-08 14:30:01
           Time zone: UTC (UTC, +0000)
         NTP enabled: yes
    NTP synchronized: yes
     RTC in local TZ: no
          DST active: n/a
    

    Use the timedatectl command to change the time zone. For example, to change the time zone to UTC:

    timedatectl set-timezone UTC
    Refer to the documentation for your specific operating system to configure NTP synchronization. Use the ntpq -p command to verify that NTP is pointing to the desired server.
  6. The Splunk UBA OVA sets the default host name to caspida. Change this to reflect the actual host name of the server.
    1. Use sudo to edit the /etc/hostname file and change the host name caspida to the short host name value of the server. For example, if your server is server1.company.com, replace caspida with server1.
    2. Run the following command to have changes take effect without a restart:
      sudo hostname -F /etc/hostname

      If you get an error, run the command again to allow the changes to take effect.

    Test your changes by running the hostname command and verifying that it returns the new short host name.

  7. Find the additional 1TB disks using the sudo fdisk -l command. An example disk is /dev/sdb. On the Spark nodes, there are two additional disks. See Disk space and memory requirements for a summary of where Spark is running per deployment.
  8. Format and mount the additional 1TB disks.
    1. Add the additional 1TB disk for Splunk UBA metadata storage, using /dev/sdb as an example:
      /opt/caspida/bin/Caspida add-disk /dev/sdb 
      Verify that the disk is mounted at /var/vcap. Refer to your Linux documentation if you prefer to add the disk manually without using the add-disk command.
    2. Where applicable, add the additional 1TB disk on all Spark nodes. Use the /opt/caspida/bin/Caspida add-disk <device> <mount> command. For example:
      /opt/caspida/bin/Caspida add-disk /dev/sdc /var/vcap2
  9. Verify that IPv6 drivers are available. To do this, check that /proc/sys/net/ipv6/ exists. For example:
    caspida@ubahost-001$ ls -l /proc/sys/net/ipv6/
    total 0
    -rw-r--r-- 1 root root 0 Mar 12 16:52 anycast_src_echo_reply
    -rw-r--r-- 1 root root 0 Mar 12 16:52 auto_flowlabels
    -rw-r--r-- 1 root root 0 Mar 12 16:52 bindv6only
    dr-xr-xr-x 1 root root 0 Mar 12 16:52 conf
    -rw-r--r-- 1 root root 0 Mar 12 16:52 flowlabel_consistency
    -rw-r--r-- 1 root root 0 Mar 12 16:52 flowlabel_state_ranges
    -rw-r--r-- 1 root root 0 Mar 12 16:52 fwmark_reflect
    dr-xr-xr-x 1 root root 0 Mar 12 16:52 icmp
    -rw-r--r-- 1 root root 0 Mar 12 16:52 idgen_delay
    -rw-r--r-- 1 root root 0 Mar 12 16:52 idgen_retries
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip6frag_high_thresh
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip6frag_low_thresh
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip6frag_secret_interval
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip6frag_time
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip_nonlocal_bind
    -rw-r--r-- 1 root root 0 Mar 12 16:52 mld_max_msf
    -rw-r--r-- 1 root root 0 Mar 12 16:52 mld_qrv
    dr-xr-xr-x 1 root root 0 Mar 12 16:52 neigh
    dr-xr-xr-x 1 root root 0 Mar 12 16:52 route
    -rw-r--r-- 1 root root 0 Mar 12 16:52 xfrm6_gc_thresh
    
    • If the IPv6 drivers exist, skip to the next step.
    • If the IPv6 drivers do not exist on your system, check whether /etc/default/grub contains ipv6.disable=1, which prevents the drivers from loading. If ipv6.disable=1 is not present and the drivers still do not exist, consult your system or network administrators. You cannot continue with the installation until the drivers are available.
    • If /etc/default/grub contains ipv6.disable=1, perform the following tasks as root:
      1. Remove ipv6.disable=1 from /etc/default/grub.
      2. Recreate the grub config:
        grub2-mkconfig -o /boot/grub2/grub.cfg
      3. Reboot the machines. After the system comes up, make sure /proc/sys/net/ipv6 exists.

    To disable IPv6 functionality for security, networking or performance reasons, create the /etc/sysctl.d/splunkuba-ipv6.conf file as root. This file must contain the following content:

    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    net.ipv6.conf.lo.disable_ipv6 = 1
    
    This procedure keeps the IPv6 drivers but disables the IPv6 addressing.
  10. On every server in your Splunk UBA deployment, run the following command to install or upgrade libjson-perl:
    sudo apt-get install libjson-perl
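The host name and disk steps above can be spot-checked with a short sketch. Here, server1.company.com is the example FQDN from the host name step, and /var/vcap is where add-disk mounts the metadata disk; the root filesystem / is checked below only so the sketch is runnable on any machine:

```shell
#!/bin/sh
# Derive the short host name to place in /etc/hostname from the server FQDN.
# server1.company.com is the example FQDN used in the host name step above.
FQDN="server1.company.com"
SHORT=$(echo "$FQDN" | cut -d. -f1)
echo "short host name: $SHORT"

# A path is a mount point if df reports it as its own "Mounted on" value.
# After running add-disk, check /var/vcap (and /var/vcap2 on Spark nodes);
# "/" is used below only so this sketch works before the disks are added.
is_mounted() {
  [ "$(df -P "$1" 2>/dev/null | awk 'NR==2 {print $NF}')" = "$1" ]
}
is_mounted / && echo "mounted: /"
```

After running add-disk, calling is_mounted /var/vcap on each node (and is_mounted /var/vcap2 on Spark nodes) confirms the disks are in place before you continue.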

Set up passwordless SSH communication between the UBA nodes

  1. Log in to the management server as the caspida user using SSH.
  2. Generate SSH keys using the ssh-keygen -t rsa command. Press enter for all the prompts and accept all default values. For example:
    [caspida@ubahost-001]$ ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/caspida/.ssh/id_rsa):
    Created directory '/home/caspida/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/caspida/.ssh/id_rsa.
    Your public key has been saved in /home/caspida/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:Ohe1oSpUtNT8siJzvn2lFLrHmVH7JGKke+c/5NRFb/g caspida@ubahost-001
    
  3. Run the following command. Enter the password for the caspida user when prompted for the password.
    ssh-copy-id localhost
  4. Copy the SSH files from the management server to every other server in the distributed deployment.
    scp -pr /home/caspida/.ssh caspida@<node_N>:/home/caspida/
  5. Verify proper passwordless SSH configuration and inter-node connectivity by doing the following on each node in the deployment. This step will also create trusted connections between the servers. You must complete this step before continuing with setup.
    1. ssh `hostname` (note the backquotes around hostname)
    2. ssh <node1>; exit
      ssh <node2>; exit
      ssh <node3>; exit
      ...
      ssh <nodeN>; exit
    3. In the steps above, if prompted, confirm that you want to continue.
      The output looks similar to the following:
      caspida@ubahost-001$ ssh uba1
      The authenticity of host 'uba1 (172.31.35.204)' can't be established.
      ECDSA key fingerprint is af:12:54:60:f5:36:c2:36:9d:56:b2:52:9f:cb:73:bc.
      Are you sure you want to continue connecting (yes/no)? yes
      Warning: Permanently added 'uba1,172.31.35.204' (ECDSA) to the list of known hosts.
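For larger deployments, the copy and verification steps above can be scripted from the management server. A minimal dry-run sketch, assuming hypothetical node names uba2 and uba3; it prints the scp and ssh commands instead of running them:

```shell
#!/bin/sh
# Hypothetical node names -- replace with the other servers in your deployment.
NODES="uba2 uba3"

# For each node: copy the management server's SSH directory across, then
# verify that passwordless login works. Dry run only -- the commands are
# printed so you can review them before executing each line by hand.
CMDS=""
for node in $NODES; do
  CMDS="${CMDS}scp -pr /home/caspida/.ssh caspida@${node}:/home/caspida/
ssh caspida@${node} exit
"
done
printf '%s' "$CMDS"
```

Executing the printed lines in order reproduces steps 4 and 5, including the trusted-connection prompts on first login to each node.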

Complete the distributed VMware Splunk UBA installation on the management server

After the distributed server preparation is complete, continue setting up Splunk UBA on the management server (node 1).

  1. From the command line, log in as the caspida user using SSH.
  2. Check the system status with the uba_pre_check.sh shell script. You must specify the host name of each Splunk UBA node in the command, separated by spaces. For example, run the following command in a 3-node deployment and be sure to replace <node1> <node2> <node3> with the actual host names of your Splunk UBA nodes.
    /opt/caspida/bin/utils/uba_pre_check.sh <node1> <node2> <node3>
    See Check system status before and after installation for more information about the script.
  3. Run setup to install Splunk UBA.
    /opt/caspida/bin/Caspida setup
  4. When prompted, accept the license agreement and confirm removal of existing metadata.
  5. When prompted, type a comma-separated list of host names for a single-server or distributed installation. For example, specify the following in a 3-node deployment and be sure to replace <node1> <node2> <node3> with the actual host names of your Splunk UBA nodes:
    <node1>,<node2>,<node3>
  6. When prompted, confirm that you want to proceed with the deployment and continue setting up Splunk UBA.
  7. After setup completes:
    1. Open a web browser and log in to the public IP address with the default admin credentials to confirm a successful installation. The default username is admin and password is changeme. See Secure the default account after installing Splunk UBA for information about the default accounts provided with Splunk UBA and how to secure them.
    2. See Verify successful installation for more information about verifying a successful installation.
Last modified on 16 March, 2023

This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.4.1, 5.0.5, 5.0.5.1

