Splunk® User Behavior Analytics

Install and Upgrade Splunk User Behavior Analytics



Install Splunk UBA on several Amazon Web Services instances

Follow these instructions to install Splunk UBA 5.2.0 for the first time using the AMI image on several Amazon Web Services (AWS) instances.

If you already have Splunk UBA, do not follow the instructions on this page. Instead, follow the appropriate upgrade instructions to obtain your desired release. See How to install or upgrade to this release of Splunk UBA.

Prerequisites for installing Splunk UBA on several Amazon Web Services instances

Verify that the following requirements are met:

  • Contact your Splunk sales representative and provide them with your AWS account information, name, and email address to obtain the Splunk UBA Amazon Machine Image (AMI). Entitlement will be verified by the account team and the AMI will be shared to the AWS account provided.
  • A valid key pair to access your AWS instance.
  • All server nodes must be in the same network subnet and the same AWS region.
  • A security group with the following rules:
    • Rules that allow all TCP communication between all servers in the distributed installation.
    • Inbound rules that allow access to ports 22 and 443 from trusted external addresses.
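The security group requirements above can be expressed as AWS CLI calls. The following is a sketch only, not run automatically: the group name, VPC ID, and trusted CIDR are assumptions, so substitute your own values and then call create_uba_sg from a shell with configured AWS credentials.

```shell
# Sketch only (not run automatically): the security-group rules above,
# expressed as AWS CLI calls. GROUP_NAME, VPC_ID, and TRUSTED_CIDR are
# assumptions -- substitute your values, then call create_uba_sg.
create_uba_sg() {
  GROUP_NAME="uba-cluster"        # assumption: choose your own group name
  VPC_ID="vpc-0123456789abcdef0"  # assumption: your VPC ID
  TRUSTED_CIDR="203.0.113.0/24"   # assumption: your trusted admin addresses
  SG_ID=$(aws ec2 create-security-group --group-name "$GROUP_NAME" \
    --description "Splunk UBA cluster" --vpc-id "$VPC_ID" \
    --query GroupId --output text) || return 1
  # Inbound SSH (22) and HTTPS (443) from trusted external addresses only
  for port in 22 443; do
    aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
      --protocol tcp --port "$port" --cidr "$TRUSTED_CIDR"
  done
  # All TCP between members of the same group (self-referencing rule)
  aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
    --protocol tcp --port 0-65535 --source-group "$SG_ID"
}
```

Attach the resulting group to every instance you launch in the next section so that intra-cluster traffic is covered by the self-referencing rule.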

Prepare all servers in your distributed Amazon Web Services environment

Complete these steps for every server node in your AWS deployment.

  1. In the AWS console, open the Splunk UBA AMI.
  2. Set up an AWS instance. The following server instance types are supported:
    • m4.4xlarge
    • m5.4xlarge, m5a.4xlarge, m5.8xlarge
    • m6a.4xlarge, m6i.4xlarge
  3. Click Edit storage to add two new 1TB volumes.
  4. Open the instance and download the key pair to your local machine. You will need this key pair later in the procedure.
  5. Note the public IP address of the UBA instance.
  6. From the command line, load the key pair, set up the caspida user, and log in to the AWS instance.
    ssh -i <keypair>.pem ubuntu@<public IP of your UBA instance>
    su - caspida

    Specify caspida123 as the default password. You will be prompted for the default password a second time, and then prompted to change it. For example:

    ubuntu:~$ su - caspida
    You are required to change your password immediately (root enforced)
    Changing password for caspida.
    (current) UNIX password:
    Enter new UNIX password:
    Retype new UNIX password:
  7. If you need to change the host name, follow these steps:

    If you are not changing the host name of your system, skip this step.

    1. Run the following, replacing <NEW HOST NAME> accordingly:
      sudo hostnamectl set-hostname <NEW HOST NAME>
    2. Update the /etc/hosts file so that the new host name resolves to the server's IP address:
      sudo vi /etc/hosts 
      <IP address> <NEW HOST NAME>
  8. Verify that the system date, time, and time zone are correct using the timedatectl command, as shown here. The time zone in Splunk UBA must match the time zone configured in Splunk Enterprise.
    caspida@ubahost$ timedatectl status
          Local time: Mon 2019-04-08 14:30:02 UTC
      Universal time: Mon 2019-04-08 14:30:02 UTC
            RTC time: Mon 2019-04-08 14:30:01
           Time zone: UTC (UTC, +0000)
         NTP enabled: yes
    NTP synchronized: yes
     RTC in local TZ: no
          DST active: n/a

    Use the timedatectl command to change the time zone. For example, to change the time zone to UTC:

    timedatectl set-timezone UTC
    Refer to the documentation for your specific operating system to configure NTP synchronization. Use the ntpq -p command to verify that NTP is pointing to the desired server.
  9. Find the additional 1TB disks.
    sudo fdisk -l 
    For example, /dev/xvdb. The nodes that are running Spark services must have two 1TB disks. See Disk space and memory requirements for a summary of where Spark is running per deployment.
  10. Format and mount the 1TB disks.
    1. Add the 1TB disk for Splunk UBA metadata storage using the following command:
      /opt/caspida/bin/Caspida add-disk <device>
      i) If your disk name is /dev/xvdb use the following command:
      /opt/caspida/bin/Caspida add-disk /dev/xvdb
      ii) If your disk name is /dev/nvme1n1 use the following command:
      /opt/caspida/bin/Caspida add-disk /dev/nvme1n1
    2. Add the 1TB disk for Spark. The disk should be mounted as /var/vcap2. Use the /opt/caspida/bin/Caspida add-disk <device> <mount> command.
      i) If your disk name is /dev/xvdc use the following command:
      /opt/caspida/bin/Caspida add-disk /dev/xvdc /var/vcap2
      ii) If your disk name is /dev/nvme2n1 use the following command:
      /opt/caspida/bin/Caspida add-disk /dev/nvme2n1 /var/vcap2
  11. Add the IP addresses and host names of all nodes in your distributed deployment to the /etc/hosts file.
    The following example shows how the final file looks on uba1:
    127.0.0.1 localhost
    # IPv6 format
    # ::1 localhost ip6-localhost ip6-loopback
    <IP address of uba1> uba1.splunk.com uba1
    <IP address of uba2> uba2.splunk.com uba2
    <IP address of uba3> uba3.splunk.com uba3
  12. Verify that IPv6 drivers are available. To do this, check that /proc/sys/net/ipv6/ exists. For example:
    caspida@ubahost-001$ ls -l /proc/sys/net/ipv6/
    total 0
    -rw-r--r-- 1 root root 0 Mar 12 16:52 anycast_src_echo_reply
    -rw-r--r-- 1 root root 0 Mar 12 16:52 auto_flowlabels
    -rw-r--r-- 1 root root 0 Mar 12 16:52 bindv6only
    dr-xr-xr-x 1 root root 0 Mar 12 16:52 conf
    -rw-r--r-- 1 root root 0 Mar 12 16:52 flowlabel_consistency
    -rw-r--r-- 1 root root 0 Mar 12 16:52 flowlabel_state_ranges
    -rw-r--r-- 1 root root 0 Mar 12 16:52 fwmark_reflect
    dr-xr-xr-x 1 root root 0 Mar 12 16:52 icmp
    -rw-r--r-- 1 root root 0 Mar 12 16:52 idgen_delay
    -rw-r--r-- 1 root root 0 Mar 12 16:52 idgen_retries
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip6frag_high_thresh
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip6frag_low_thresh
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip6frag_secret_interval
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip6frag_time
    -rw-r--r-- 1 root root 0 Mar 12 16:52 ip_nonlocal_bind
    -rw-r--r-- 1 root root 0 Mar 12 16:52 mld_max_msf
    -rw-r--r-- 1 root root 0 Mar 12 16:52 mld_qrv
    dr-xr-xr-x 1 root root 0 Mar 12 16:52 neigh
    dr-xr-xr-x 1 root root 0 Mar 12 16:52 route
    -rw-r--r-- 1 root root 0 Mar 12 16:52 xfrm6_gc_thresh

    If the IPv6 drivers exist, skip to the next step.

    If IPv6 drivers do not exist on your system, check whether /etc/default/grub contains ipv6.disable=1. IPv6 drivers are not available when that setting is present. If ipv6.disable=1 is not present and the IPv6 drivers still do not exist, consult your system or network administrators; you will not be able to continue with the installation.

    If /etc/default/grub contains ipv6.disable=1, perform the following tasks as root:

    1. Remove ipv6.disable=1 from /etc/default/grub.
    2. Recreate the grub config: grub2-mkconfig -o /boot/grub2/grub.cfg
    3. Reboot the machines. After the system comes up, make sure /proc/sys/net/ipv6 exists.

    To disable IPv6 functionality for security, networking, or performance reasons, create the /etc/sysctl.d/splunkuba-ipv6.conf file as root. The file must contain the following content:

    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    net.ipv6.conf.lo.disable_ipv6 = 1

    This procedure keeps the IPv6 drivers but disables the IPv6 addressing.

  13. Generate SSH keys using the ssh-keygen -t rsa command. Press enter for all the prompts and accept all default values. For example:
    [caspida@ubahost-001]$ ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/caspida/.ssh/id_rsa):
    Created directory '/home/caspida/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/caspida/.ssh/id_rsa.
    Your public key has been saved in /home/caspida/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:Ohe1oSpUtNT8siJzvn2lFLrHmVH7JGKke+c/5NRFb/g caspida@ubahost-001
  14. Add the SSH keys to the server and adjust the permissions to allow the server to access them.
    cat /home/caspida/.ssh/id_rsa.pub >> /home/caspida/.ssh/authorized_keys
    chmod 600 /home/caspida/.ssh/authorized_keys
  15. Copy the SSH keys from every server from /home/caspida/.ssh/id_rsa.pub into the /home/caspida/.ssh/authorized_keys file on every server in the distributed deployment.
    After this is complete, each authorized_keys file on each server will have all the SSH keys for every server in the deployment listed.
  16. Use SSH to connect to each server by host name or internal IP address, without a password, to create trusted connections between the servers. You must complete this step before continuing with setup. Be sure to replace <node1> <node2> <node3> with the actual host names or IP addresses of your Splunk UBA nodes:
    ssh <node1>; exit
    ssh <node2>; exit
    ssh <node3>; exit
  17. When prompted, confirm that you want to continue.
    The sample output will look similar to the following.
    caspida@ubahost-001$ ssh uba1
    The authenticity of host 'uba1 (' can't be established.
    ECDSA key fingerprint is af:12:54:60:f5:36:c2:36:9d:56:b2:52:9f:cb:73:bc.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'uba1,' (ECDSA) to the list of known hosts.
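Steps 11 and 15 above must produce identical results on every node. As a minimal sketch, the /etc/hosts entries can be generated once from a single node list and then appended on each server; the node names, domain, and 10.0.0.x addresses below are assumptions, so replace them with your actual deployment values.

```shell
# Sketch: generate identical /etc/hosts entries for every node from one
# list. NODES, DOMAIN, and the 10.0.0.x subnet are assumptions -- replace
# them with your real host names and IP addresses before using the output.
NODES="uba1 uba2 uba3"
DOMAIN="splunk.com"
hosts_entries() {
  i=1
  for n in $NODES; do
    # one line per node: <IP> <FQDN> <short name>
    echo "10.0.0.$i $n.$DOMAIN $n"
    i=$((i + 1))
  done
}
hosts_entries    # append this output to /etc/hosts on every node
```

The same NODES list can drive step 15: from each node, append that node's /home/caspida/.ssh/id_rsa.pub to /home/caspida/.ssh/authorized_keys on every other node so that all authorized_keys files end up identical.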

Complete the distributed Amazon Web Services Splunk UBA installation on the management server

After you complete these steps on every server node in your deployment, continue the installation on the management server node, for example, uba1.

  1. From the command line, log in as the caspida user using SSH.
  2. Check the system status with the uba_pre_check.sh shell script. You must specify the host names of each Splunk UBA node in the command, separated by spaces. For example, run the following command in a 3-node deployment and be sure to replace <node1> <node2> <node3> with the actual host names of your Splunk UBA nodes.
    /opt/caspida/bin/utils/uba_pre_check.sh <node1> <node2> <node3>
    See Check system status before and after installation for more information about the script.
  3. Run the following command to source the /etc/locale.conf file:
    source /etc/locale.conf
  4. Run the setup script to install Splunk UBA.
    /opt/caspida/bin/Caspida setup
    1. When prompted, accept the license agreement and confirm removal of existing metadata.
    2. When prompted, type a comma-separated list of host names for a single-server or distributed installation. For example, specify the following in a 3-node deployment and be sure to replace <node1> <node2> <node3> with the actual host names of your Splunk UBA nodes:
      <node1>,<node2>,<node3>
    3. When prompted, confirm that you want to proceed with the deployment and continue setting up Splunk UBA.
  5. Perform one final sync across your cluster:
    /opt/caspida/bin/Caspida sync-cluster
  6. Restart Splunk UBA:
    /opt/caspida/bin/Caspida stop
    /opt/caspida/bin/Caspida start
  7. Verify the host name of all the nodes using the following command:
    hostname
  8. Make sure all the nodes have a consistent setup. If fully qualified domain names (FQDN) are used, all nodes should output the FQDN from the hostname command. If short names are used, all nodes should output the short name.
    1. If FQDNs are used, provide the FQDN of every node to the uba_pre_check.sh script, for example:
      /opt/caspida/bin/utils/uba_pre_check.sh <NODE1_FQDN> <NODE2_FQDN> <NODE3_FQDN>
    2. When the setup script prompts for a list of host names, if the hostname command outputs FQDNs, provide a comma-separated list of FQDN host names, for example:
      <NODE1_FQDN>,<NODE2_FQDN>,<NODE3_FQDN>

      If you plan on connecting to Splunk Cloud to run queries for datasources, use fully qualified domain names (FQDN), not short names, for your Splunk UBA host names.

  9. On deployments with 7 or more nodes, perform the following steps:
    1. Log in to the UBA management node (node 1) as the caspida user using SSH.
    2. Back up the original file:
      cp /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow/ModelRegistry.json /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow/ModelRegistry.json.ORIG
    3. Copy the ModelRegistry.json.large_deployment file to the ModelRegistry.json file:
      cp /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow/ModelRegistry.json.large_deployment /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow/ModelRegistry.json
    4. Sync cluster:
      /opt/caspida/bin/Caspida sync-cluster /opt/caspida/content/Splunk-Standard-Security/modelregistry/offlineworkflow/ 
    5. Restart all services:
      /opt/caspida/bin/Caspida stop-all 
      /opt/caspida/bin/Caspida start-all
  10. After setup completes:
    1. Open a web browser and log in to the public IP address with the default admin credentials to confirm a successful installation. The default username is admin and password is changeme. See Secure the default account after installing Splunk UBA for information about the default accounts provided with Splunk UBA and how to secure them.
    2. See Verify successful installation for more information about verifying a successful installation.
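The pre-check script in step 2 takes a space-separated node list, while the setup prompt in step 4 expects a comma-separated one. As a small sketch, both can be derived from a single variable so the two lists cannot drift apart; the host names below are assumptions, so use your actual nodes.

```shell
# Sketch: derive both the uba_pre_check.sh arguments (space-separated) and
# the setup prompt's host list (comma-separated) from one variable.
# The host names below are assumptions -- substitute your actual nodes.
NODES="uba1.splunk.com uba2.splunk.com uba3.splunk.com"
CSV=$(printf '%s' "$NODES" | tr ' ' ',')
echo "pre-check : /opt/caspida/bin/utils/uba_pre_check.sh $NODES"
echo "setup list: $CSV"
```

Paste the first line's command into the shell before setup, and the second line's value at the setup script's host name prompt.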
Last modified on 18 July, 2023

This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.2.0
