Splunk® SOAR (On-premises)

Install and Upgrade Splunk SOAR (On-premises)

The classic playbook editor will be deprecated in early 2025. Convert your classic playbooks to modern mode.
After the future removal of the classic playbook editor, your existing classic playbooks will continue to run. However, you will no longer be able to visualize or modify them.

Create a cluster in Amazon Web Services

Build a cluster of Splunk SOAR (On-premises) instances using several native AWS components: Elastic Load Balancer (ELB), Elastic File System (EFS), and Relational Database Service (RDS).

Converting a Splunk SOAR (On-premises) installation to a server or cluster node is a one-way operation. It cannot be reverted.

Build a cluster that uses AWS services

  1. Launch and prepare instances of Splunk SOAR (On-premises). Total number of instances = Number of cluster nodes. See Launch and prepare instances of Splunk SOAR (On-premises).
  2. Create a load balancer with Elastic Load Balancer (ELB): create the ELB, create the target group, and add the routing rules. See Create a load balancer with Elastic Load Balancer (ELB).
  3. Create the file stores with Elastic File System (EFS). Create the EFS file store for shared files. See Create the file stores with Elastic File System (EFS).
  4. Create the external database with Relational Database Service (RDS): create the external PostgreSQL database, then create the pgbouncer user. See Create the external database with Relational Database Service (RDS).
  5. Add the file shares to each instance. Mount the file shares on each instance. See Add the file shares to each instance.
  6. Convert the first instance into a cluster node. When converting the first instance, use the script's record option to save your answers to the make_cluster_node.pyc prompts in a response file for reuse on your other nodes. See Convert the first instance into a cluster node.
  7. Convert the remaining instances into cluster nodes using make_cluster_node.pyc and the recorded response file. See Convert the remaining instances into cluster nodes.

Launch and prepare instances of Splunk SOAR (On-premises)

You need a number of instances equal to the number of nodes you want in your cluster. A cluster requires a minimum of three nodes.

Total number of instances = Number of cluster nodes

If you need your cluster to be FIPS compliant, you must set the operating system to FIPS mode. For more information, see Clustering and external services in the topic FIPS compliance in Install and Upgrade Splunk SOAR (On-premises).

Each of your Splunk SOAR (On-premises) instances in Amazon Elastic Compute Cloud (EC2) must have a unique system GUID in order to build a cluster. If you've created your own VM template, or otherwise cloned EC2 instances or VMs in order to create a cluster, you might need to first regenerate the system GUIDs on your nodes by running:

phenv python ${PHANTOM_HOME}/bin/initialize.py --first-initialize --force

Installation

  1. Obtain the TAR file installer. See Install Splunk SOAR (On-premises) as an unprivileged user.
  2. Log in to your AWS EC2 account.
  3. From your EC2 dashboard, select Launch Instance.
  4. Add tags to help identify your installation in your EC2 dashboard.
  5. You will need one Amazon Linux 2 AMI x86_64 instance for each SOAR cluster node.
  6. Select an instance size. The m5.xlarge instance size is preferred. Splunk SOAR (On-premises) does not support using instances smaller than t2.xlarge.
  7. For Number of Instances, type the number of instances you need. Total number of instances = Number of cluster nodes
  8. Configure the instance according to your organization's policies for storage, network access, SSH keys, and other settings. See System requirements for production use and required ports for more information.
    • Configure Network access, including Security Groups. By default, SSH, HTTP, and HTTPS are permitted from all IP addresses. Increase security by limiting access to your organization's IP addresses.
      • Add the following ports for clustering:
        • 2049 - GlusterFS and NFS for NFS exports. Used by the nfsd process.
        • 4369 - RabbitMQ port mapper. All cluster nodes must be able to communicate with each other on this port.
        • 8300 - Consul RPC services. All cluster nodes must be able to communicate with each other on this port.
        • 8301 - Consul internode communication. All cluster nodes must be able to communicate with each other on this port.
        • 8302 - Consul internode communication. All cluster nodes must be able to communicate with each other on this port.
        • 25672 - RabbitMQ internode communications. All cluster nodes must be able to communicate with each other on this port.
        • 9001, 9002 - Custom ports for the Splunk Universal Forwarder for connecting your Splunk SOAR (On-premises) deployment to your Splunk Enterprise or Splunk Cloud Platform deployment.
  9. Configure storage. Use the Advanced Options to select your needed storage. See System requirements for production use.
  10. Click Launch Instance. You can use the Review commands link to see what commands will be used to create your instances.
  11. Once your instances have been launched, copy the installation TAR file to your instances, and follow the instructions from Install Splunk SOAR (On-premises) as an unprivileged user to install Splunk SOAR (On-premises).
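Once your security groups are in place, you can spot-check the clustering ports above from each instance. This is a minimal sketch, not part of the official procedure; it uses bash's built-in /dev/tcp, and the PEER address is a placeholder you must replace with each of your other cluster nodes.

```shell
# Minimal reachability check for the clustering ports listed above.
# check_port prints "open" or "closed" for a host:port pair.
check_port() {
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed"
  fi
}

# PEER is a placeholder; substitute each of your other cluster nodes.
PEER="127.0.0.1"
for port in 2049 4369 8300 8301 8302 25672 9001 9002; do
  check_port "$PEER" "$port"
done
```

Run the loop from every node against every other node; any port reported closed points to a missing security group rule.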

Install SSH keys

During the conversion to cluster nodes, each instance will need to SSH as the phantom user into other nodes. Install the client certificate you generated for SSH when the instances were created.

Do this on each of the instances that you will convert to cluster nodes.

  1. Copy the .pem file generated earlier to each instance using SCP.
    scp -i <path/to/.pem> <path/to/.pem to transfer> phantom@<instance IP or DNS name>:~/
  2. SSH to an instance as the phantom user.
  3. Move the .pem key to the phantom user's .ssh directory.
    mv <name of file>.pem .ssh
  4. Set the permissions on the .pem key.
    chmod 600 .ssh/<name of file>.pem
  5. Test that you are able to SSH from each instance to the others as the phantom user.
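The key-distribution steps above repeat for every instance, so they can be scripted. The sketch below only prints the commands for review rather than running them; PEM and NODES are placeholder values to replace with your own key file and instance addresses.

```shell
# Print the key-distribution commands for review before running them.
# PEM and NODES are placeholder values; substitute your key file and
# the IP addresses or DNS names of your instances.
PEM="phantom-cluster.pem"
NODES="10.0.1.11 10.0.1.12 10.0.1.13"

cmds=$(for node in $NODES; do
  printf 'scp -i %s %s phantom@%s:~/\n' "$PEM" "$PEM" "$node"
  printf 'ssh -i %s phantom@%s "mv ~/%s ~/.ssh/ && chmod 600 ~/.ssh/%s"\n' \
    "$PEM" "$node" "$PEM" "$PEM"
done)
echo "$cmds"
```

After reviewing the output, run the printed commands, then confirm you can SSH between every pair of nodes as the phantom user.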

Create a load balancer with Elastic Load Balancer (ELB)

Create a load balancer for your cluster. An Elastic Load Balancer will be used instead of HAProxy.

  1. Log in to your AWS EC2 account.
  2. From the menu on the EC2 dashboard, under the heading Load Balancing, choose Load Balancers.
  3. Click Create Load Balancer.
  4. Under Application Load Balancer, click Create.
  5. Type a name for your load balancer in the Name field.
  6. Select a Scheme. The scheme will depend on your AWS network configuration. Assuming your load balancer will route on an internal network, select the internal radio button.
  7. Set the IP address type. This will also depend on your AWS network configuration. In most cases, select ipv4 from the menu.
  8. Under Listeners, Load Balancer Protocol, select HTTPS from the menu. The Load Balancer Port changes to 443. You must also set listeners to use the custom HTTPS port 9999.
  9. Under Availability Zones, select the VPC and Availability Zones to match your AWS network configuration.
  10. Add Tags to help organize and identify your load balancer.
  11. Click Next: Configure Security Settings.
  12. Select or create a security group according to your organization's policies. These settings can vary based on factors outside the scope of this document.
  13. Click Next: Configure Routing.
  14. Under Target group, choose New target group from the menu.
  15. Type a name for your target group in the Name field.
  16. For Target type, select the Instance radio button.
  17. For Protocol, select HTTPS from the menu. Port changes to 443 automatically.

    The custom HTTPS port used by your Splunk SOAR (On-premises) cluster nodes must be accessible to the load balancer.

  18. Under Health checks, set Protocol to HTTPS.

    Health checks will fail until you have run the make_cluster_node scripts to add your Splunk SOAR (On-premises) instances to your cluster. This is normal and expected.

  19. In the Path field, type /check.
  20. Click Next: Register Targets.
  21. Under Instances, find and select the cluster node instances for your cluster. You do not need to load balance the external services, such as PostgreSQL, file shares, or Splunk Enterprise.
  22. Click Add to registered.
  23. Click Next: Review.
  24. Review for and correct any errors.
  25. Click Create.
  26. Select the load balancer by name.
  27. From the Actions menu, select Edit attributes.
  28. Set the Idle timeout to 120 seconds.
  29. Click Save.

Create a Target Group for your cluster's websockets traffic

This target group will be used to route websockets traffic for the cluster. See Target Groups for Your Application Load Balancers in the AWS documentation.

  1. In the sidebar on the EC2 dashboard, under Load Balancing, select Target Groups.
  2. Click Create target group.
  3. Now create the websockets target group. In the Create target group dialog:
    1. Type a name in the Target group name field.
    2. Select the Instance radio button.
    3. Select HTTPS from the Protocol menu. Port will change to 443. Splunk SOAR (On-premises) uses the custom HTTPS port 8443.
    4. Select the same VPC that your target instances are using from the menu.
    5. Under Health Check settings, select HTTPS from the Protocol menu.

      Health checks will fail until you have run the make_cluster_node scripts to add your Splunk SOAR (On-premises) instances to your cluster. This is normal and expected.

    6. In the Path field, type /check.
    7. Click Next: Register Targets.
    8. Under Instances, find and select the cluster node instances for your cluster. You do not need to load balance the external services, such as PostgreSQL, or file shares.
    9. Click Add to registered.
  4. Click Create.
  5. From the target groups list, select the target group you just created.
  6. On the Description tab, under Attributes, click Edit attributes.
  7. In the Edit attributes dialog:
    1. For Stickiness, select the Enable check box.
    2. Set Stickiness duration by typing 7 and choosing days from the menu.
    3. Click Save.

Setting the Stickiness duration is important so that websockets can persist. Always use the longest possible duration available. Setting this value too low will result in connections being prematurely closed. Wherever possible, set the idle_timeout.timeout_seconds to a value as high as possible for your Elastic Load Balancer. See Application Load Balancers in the AWS documentation.
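If you manage your load balancer with the AWS CLI rather than the console, the stickiness and idle-timeout settings above map to attribute keys on the target group and load balancer. This sketch prints the commands for review instead of executing them; both ARNs are placeholders for your own resources, and 604800 seconds equals the 7-day stickiness duration.

```shell
# Print AWS CLI equivalents of the console steps above, for review.
# Both ARNs are placeholders for your own target group and load balancer.
TG_ARN="arn:aws:elasticloadbalancing:region:account:targetgroup/soar-ws/example"
LB_ARN="arn:aws:elasticloadbalancing:region:account:loadbalancer/app/soar/example"

cli=$(
  printf 'aws elbv2 modify-target-group-attributes --target-group-arn %s ' "$TG_ARN"
  printf -- '--attributes Key=stickiness.enabled,Value=true Key=stickiness.lb_cookie.duration_seconds,Value=604800\n'
  printf 'aws elbv2 modify-load-balancer-attributes --load-balancer-arn %s ' "$LB_ARN"
  printf -- '--attributes Key=idle_timeout.timeout_seconds,Value=120\n'
)
echo "$cli"
```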

Add the routing rules to your load balancer

Here you create rules to route traffic.

  • One rule to route all the persistent connections to the websockets listener.
  • A second rule to route all other traffic to the other listener.
  1. From the EC2 dashboard menu, under Load Balancing, select Load Balancers.
  2. Select the load balancer you have created for your cluster.
  3. Click the Listeners tab.
  4. Under Rules, click the View/edit rules link.
  5. Click the + icon to add a new rule.
  6. Click the + Insert Rule link to edit the rule.
  7. Under IF (all match), click + Add condition.
  8. Select Path…, then type /websocket in the text box.
  9. Click the checkmark icon.


Create the file stores with Elastic File System (EFS)

Create shared file stores for your cluster. Cluster nodes store files that must be shared by all instances on these shares. See System Requirements for more information.

Only instances in the VPC you select during EFS creation can connect to that file system.

  1. Under Configure file system access, select the desired VPC from the menu.
  2. Under Create mount targets, select the check boxes for the availability zones you need.
  3. Click Next Step.
  4. Set the security groups as required by your organization's policies.
  5. Under Configure optional settings, set options as required by your organization's requirements or policies.
  6. Click Next Step.
  7. Review the options selected, then click Create File System.

Create the external PostgreSQL database with the Relational Database Service (RDS)

Splunk SOAR (On-premises) uses a PostgreSQL 15 database. In many installations, the database runs on the same server as Splunk SOAR (On-premises). For an AWS cluster, it makes sense to set up an external PostgreSQL database using RDS. This database serves as the primary database for the cluster.

You may use any release of PostgreSQL 15.x. See Upgrading for support.

  1. From your EC2 dashboard, click Services in the menu bar, and under Database choose RDS.
  2. Click Create database.
  3. Select Standard Create.
  4. Under Engine options, select PostgreSQL.
  5. For Version, select 15 from the menu. You may use any PostgreSQL 15.x release.
  6. For Templates, select either Production for production environments or Dev/Test for development/testing or Proof of Value environments.
  7. Under Settings, type a name for your DB instance identifier. Make sure that the name is unique across all DB instances owned by your AWS account.
  8. Under Credential Settings:
    1. Master username: postgres
    2. Make sure the Auto generate a password checkbox is not selected.
    3. Type and confirm the Master password in the fields provided. Record this password. You will need it later.
  9. Under DB instance size, select the radio button that matches your organization's needs.

    Warning: Instances below db.t2.large may deplete their available connections before installation of your Splunk SOAR (On-premises) cluster is complete.

  10. Under Storage, select a Storage type based on your organization's needs.
    1. For Allocated storage, set a number of GiB that matches your organization's needs.

      Databases with less than 500 gigabytes of storage are not supported for production use.

    2. Select the Enable storage autoscaling check box.
    3. Set Maximum storage threshold to 1000 (GiB).
  11. Under Availability & durability, select the Do not create a standby instance radio button.
  12. Under Connectivity, select the same VPC as you used for your instances.
  13. Under the Additional connectivity configuration section:
    1. Select the correct Subnet group. The available groups depend on your VPC selection.
    2. Under Publicly accessible, select the No radio button.
    3. Under VPC security group, select Choose existing.
    4. Select the appropriate security group from the menu.
    5. Click the X icon to remove any unwanted security groups that were added by default.
    6. Make sure the Database port is set to 5432.
  14. Under Additional configuration, Database options:
    1. Type phantom for Initial database name.
    2. Make sure the DB parameter group is set to default.postgres15, matching the PostgreSQL 15 version you selected earlier.
  15. Under Additional configuration, Backup, leave everything at the defaults.
  16. Click Create Database.

Create the pgbouncer user for the RDS

Splunk SOAR (On-premises) interacts with the PostgreSQL database using the pgbouncer user account. Create this account in the database you created in RDS.

  1. Log in to a Splunk SOAR (On-premises) instance as the phantom user using SSH.
  2. Create the pgbouncer user.
    phenv psql --host <DNS name for RDS instance> --port 5432 --username postgres --echo-all --dbname phantom --command "CREATE ROLE pgbouncer WITH PASSWORD '<pgbouncer password>' login;"

Add file shares to each Splunk SOAR (On-premises) instance

Set up and mount the needed directories for your cluster. Do this in three stages: first, install the required packages; second, create the required shared directories in EFS and copy over existing data; third, mount the directories on all instances and make the mounts permanent.

Stage one:

Do this stage on each of your instances.

  1. Log in to a Splunk SOAR (On-premises) instance as the phantom user using SSH.
  2. Elevate to root.
    sudo su -
  3. Install the package nfs-utils.
    yum install nfs-utils

Stage two:

Do this stage on only one of your instances. You will create a temporary directory, mount it to EFS, then use it to copy existing files to EFS.

  1. Log in to a Splunk SOAR (On-premises) instance as the phantom user using SSH.
  2. Elevate to root.
    sudo su -
  3. Create a local mount on this instance. This mount will be used to replicate the required directory structure on EFS.
    mkdir -p /mnt/external
  4. Mount this directory from EFS.
    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/ /mnt/external
  5. Now copy the instance's files to EFS with rsync.
    rsync -avz /<PHANTOM_HOME>/apps /mnt/external/
    rsync -avz /<PHANTOM_HOME>/local_data/app_states /mnt/external/
    rsync -avz /<PHANTOM_HOME>/scm /mnt/external/
    rsync -avz /<PHANTOM_HOME>/tmp/shared /mnt/external/
    rsync -avz /<PHANTOM_HOME>/vault /mnt/external/
  6. Unmount the temporary mounting.
    umount /mnt/external

Stage three:

Do this stage on each of your instances. Set the mounts for the shared directories to EFS, then update the file system table to make the directories mount from EFS when the instance starts.

  1. Log in to a Splunk SOAR (On-premises) instance as the phantom user using SSH.
  2. Elevate to root.
    sudo su -
  3. Mount all the shared directories to EFS.
    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/apps /<PHANTOM_HOME>/apps/

    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/app_states /<PHANTOM_HOME>/local_data/app_states

    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/scm /<PHANTOM_HOME>/scm

    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/shared /<PHANTOM_HOME>/tmp/shared

    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/vault /<PHANTOM_HOME>/vault
  4. Edit the file system table /etc/fstab to make the mounts permanent. Add these entries. You can get the IP address or DNS name for EFS from your EFS dashboard.
    vi /etc/fstab
    <ip address or DNS name for EFS>:/apps /<PHANTOM_HOME>/apps nfs4 defaults,_netdev 0 0
    <ip address or DNS name for EFS>:/app_states /<PHANTOM_HOME>/local_data/app_states nfs4 defaults,_netdev 0 0
    <ip address or DNS name for EFS>:/scm /<PHANTOM_HOME>/scm nfs4 defaults,_netdev 0 0
    <ip address or DNS name for EFS>:/shared /<PHANTOM_HOME>/tmp/shared nfs4 defaults,_netdev 0 0
    <ip address or DNS name for EFS>:/vault /<PHANTOM_HOME>/vault nfs4 defaults,_netdev 0 0
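The five mount commands and fstab entries follow a single pattern, so a short script can generate them with identical values on every node. This is a convenience sketch, not part of the official procedure; EFS_HOST and PHANTOM_HOME are placeholders to replace with your EFS DNS name and installation directory.

```shell
# Generate the mount commands and /etc/fstab entries for the five
# shared directories, so every node uses identical values.
# EFS_HOST and PHANTOM_HOME are placeholders; substitute your own.
EFS_HOST="fs-0123456789abcdef0.efs.us-east-1.amazonaws.com"
PHANTOM_HOME="/opt/phantom"
NFS_OPTS="nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"

fstab=""
while read -r export_name mount_point; do
  # Emit the one-time mount command for this share.
  echo "mount -t nfs4 -o $NFS_OPTS $EFS_HOST:/$export_name $mount_point"
  # Accumulate the matching permanent fstab entry.
  fstab="$fstab$EFS_HOST:/$export_name $mount_point nfs4 defaults,_netdev 0 0
"
done <<EOF
apps $PHANTOM_HOME/apps
app_states $PHANTOM_HOME/local_data/app_states
scm $PHANTOM_HOME/scm
shared $PHANTOM_HOME/tmp/shared
vault $PHANTOM_HOME/vault
EOF

# Review these lines, then append them to /etc/fstab.
printf '%s' "$fstab"
```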


Test each Splunk SOAR (On-premises) instance for readiness

Before proceeding, test each instance to make sure it is ready for conversion to a cluster node. Log in to each instance that will become a cluster node, verify that the EFS file shares are mounted, and fix any errors.

sudo df -T

You must see entries in the table for the shared directories, each with a source of the form <EFS DNS name>:/<directory>, for the directories apps, app_states, scm, shared, and vault.
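The readiness check can be automated with a small helper that scans saved `df -T` output for the five shares. This is an illustrative sketch, not an official tool; the check_shares name and the /tmp path in the usage comment are assumptions.

```shell
# Check saved `df -T` output for the five shared EFS directories and
# report any that are missing.
# Usage: sudo df -T > /tmp/df.out && check_shares /tmp/df.out
check_shares() {
  for share in apps app_states scm shared vault; do
    # Match the EFS source column, e.g. "fs-...:/apps ".
    if grep -q ":/$share " "$1"; then
      echo "$share mounted"
    else
      echo "$share MISSING"
    fi
  done
}
```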

Convert the first instance into a cluster node

Convert the first instance to a cluster node.

Converting an installation to a server or cluster node is a one-way operation. It cannot be reverted.

You will need this information readily available:

  • IP or hostname for the RDS Postgres 15 DB server
  • Password for the postgres user
  • Password for the pgbouncer user
  • IP or hostname of the ELB load balancer
  • Username for SSH
  • Path to the key file for SSH
  • (Optional) IP or hostname of the Splunk Enterprise or Splunk Cloud Deployment for your Universal Forwarders if you want to configure forwarders to send SOAR data to your Splunk deployment. See Configure forwarders to send SOAR data to your Splunk deployment in the Administer Splunk SOAR (On-premises) manual.


Run make_cluster_node.pyc

  1. SSH to the instance. Log in with the phantom user account.
  2. Run the make_cluster_node.pyc script with the --record argument.
    phenv python <PHANTOM_HOME>/bin/make_cluster_node.pyc --record

The response file is written to: <PHANTOM_HOME>/bin/response.json

The log is written to: <PHANTOM_HOME>/var/log/phantom/make_cluster_node/make_cluster_node_<date and time>.log

The response file can be used with the make_cluster_node.pyc script on other nodes to automatically provide the information the script needs.

Convert the remaining Splunk SOAR (On-premises) instances into cluster nodes

Convert each of the remaining instances into cluster nodes by running the make_cluster_node.pyc script.

Run make_cluster_node.pyc

  1. SSH to the instance. Log in with the phantom user account.
  2. Run the make_cluster_node.pyc script with the --responses argument.
    <PHANTOM_HOME>/bin/phenv python <PHANTOM_HOME>/bin/make_cluster_node.pyc --responses <PHANTOM_HOME>/bin/response.json

You don't have to use a response file. If you do not supply a JSON file, the script prompts you for the information it needs. The response file contains secrets such as usernames and passwords in plain text. Store it in a secure location or delete it after the cluster configuration is complete.
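The copy-and-run cycle for the remaining nodes can be scripted. The sketch below only prints the commands for review rather than executing them; the key file, node addresses, and /opt/phantom installation path are placeholder assumptions, and the response-file path follows the location shown above.

```shell
# Print the conversion commands for each remaining node, for review
# before running. PEM, REMAINING_NODES, and PHANTOM_HOME are
# placeholders; substitute your own values.
PEM="$HOME/.ssh/phantom-cluster.pem"
REMAINING_NODES="10.0.1.12 10.0.1.13"
PHANTOM_HOME="/opt/phantom"

cmds=$(for node in $REMAINING_NODES; do
  # Copy the recorded response file to the node, then run the script.
  printf 'scp -i %s %s/bin/response.json phantom@%s:%s/bin/\n' \
    "$PEM" "$PHANTOM_HOME" "$node" "$PHANTOM_HOME"
  printf 'ssh -i %s phantom@%s "phenv python %s/bin/make_cluster_node.pyc --responses %s/bin/response.json"\n' \
    "$PEM" "$node" "$PHANTOM_HOME" "$PHANTOM_HOME"
done)
echo "$cmds"
```

Convert nodes one at a time and confirm each join succeeds before moving on.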

Log in to the Splunk SOAR (On-premises) web interface

Connect to the web interface of your newly installed cluster.

  1. Get the public IP address or DNS name for the elastic load balancer from the EC2 Management Console.
  2. Using a browser, go to the public IP address or DNS name for the elastic load balancer.
    1. User name: admin
    2. Password: <default password, or password you set during configuration>
  3. Change the admin user's password:
    1. From the User Name menu, select Account Settings.
    2. From the second level of the menu bar, select Change Password.
    3. Type the current password.
    4. Type a new password.
    5. Type a new password a second time to confirm.
    6. Click Change Password.
Last modified on 17 September, 2024

This documentation applies to the following versions of Splunk® SOAR (On-premises): 6.3.0, 6.3.1

