
The classic playbook editor will be deprecated in early 2025. Convert your classic playbooks to modern mode.
After the classic playbook editor is removed, your existing classic playbooks will continue to run. However, you will no longer be able to visualize or modify them.

Create a cluster in Amazon Web Services

Do not use this release to create new clusters of Splunk SOAR (On-premises).

Use this release to upgrade from your current privileged deployment of Splunk Phantom 4.10.7 or Splunk SOAR (On-premises) releases 5.0.1 through 5.3.4.

If you are upgrading a privileged deployment of Splunk Phantom 4.10.7 or Splunk SOAR (On-premises) releases 5.0.1 through 5.3.4, upgrade to release 5.3.6, convert your deployment to unprivileged, then upgrade again directly to Splunk SOAR (On-premises) release 6.1.1 or higher.

If you have a privileged deployment of Splunk SOAR (On-premises) release 5.3.5, convert your deployment to unprivileged, then upgrade directly to Splunk SOAR (On-premises) release 6.1.1 or higher.

To learn how to upgrade, see Splunk SOAR (On-premises) upgrade overview and prerequisites.

Build a cluster from AMI-based instances of Splunk SOAR (On-premises), building several of the required services with AWS native components: Elastic Load Balancer (ELB), Elastic File System (EFS), and Relational Database Service (RDS).

This configuration is built using the Amazon Marketplace Image of Splunk SOAR (On-premises). This release is an unprivileged version of Splunk SOAR (On-premises), which runs under the user account phantom.

Converting an AMI-based installation to a server or cluster node is a one-way operation. It cannot be reverted.

Build a cluster with AWS services

Complete the following tasks in order:

  1. Launch and prepare AMI instances of Splunk SOAR (On-premises). Total number of AMI instances = Number of cluster nodes + 1. See Launch and prepare AMI-based instances of Splunk SOAR (On-premises).
  2. Create a load balancer with Elastic Load Balancer (ELB): create the ELB, create the target group, and add the routing rules. See Create a load balancer with Elastic Load Balancer (ELB).
  3. Create the file stores with Elastic File System (EFS): create the EFS file store for shared files. See Create the file stores with Elastic File System (EFS).
  4. Create the external database with Relational Database Service (RDS): create the external PostgreSQL database and create the pgbouncer user. See Create the external PostgreSQL database with the Relational Database Service (RDS).
  5. Add the file shares to each instance: mount the file shares on each instance. See Add file shares to each Splunk SOAR (On-premises) instance.
  6. Convert one of the instances into the Splunk Enterprise instance. This instance serves as the external search endpoint for the entire cluster. Use the make_server_node.pyc script with the splunk argument. See Convert an AMI-based Splunk SOAR (On-premises) instance into the Splunk Enterprise instance.
  7. Convert the first AMI-based instance into a cluster node. Creating the first node uses a script option to record all the make_cluster_node.pyc answers to a file for use on each of your other nodes. See Convert the first AMI-based instance into a cluster node.
  8. Convert the remaining AMI-based instances into cluster nodes using make_cluster_node.pyc and the recorded response file. See Convert the remaining AMI-based Splunk SOAR (On-premises) instances into cluster nodes.

Launch and prepare AMI-based instances of Splunk SOAR (On-premises)

You need a number of AMI-based instances equal to the number of nodes you want in your cluster plus one. The additional instance will be converted into the externalized Splunk Enterprise instance for your cluster. A cluster requires a minimum of three nodes.

Total number of AMI instances = Number of cluster nodes + 1

If you need your cluster to be FIPS compliant, you must set the operating system to FIPS mode. For more information, see Clustering and external services in the topic FIPS compliance in Install and Upgrade Splunk SOAR (On-premises).

If you already have a Splunk Enterprise deployment that you will use instead, follow the instructions for using an external Splunk Enterprise instance. See Set up Splunk Enterprise.

Amazon Elastic Compute Cloud (EC2) instances created using the AMI for Splunk SOAR On-premises automatically generate a unique system GUID on first boot. AMIs or virtual machine (VM) templates created through other means might not perform this operation, which causes problems when creating clusters. If you've created your own AMI or VM template, or otherwise cloned EC2 instances or VMs in order to create a cluster, you might need to first regenerate the system GUIDs on your nodes by running phenv python ${PHANTOM_HOME}/bin/initialize.py --first-initialize --force.
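For example, on each node that was cloned rather than launched from the Splunk SOAR (On-premises) AMI, you might run the regeneration command as the phantom user (this assumes PHANTOM_HOME is set in the environment):

phenv python ${PHANTOM_HOME}/bin/initialize.py --first-initialize --force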

Installation

  1. Log in to your AWS EC2 account.
  2. From your EC2 dashboard, select Launch Instance.
  3. In the AWS Marketplace, search for Splunk SOAR (On-premises).
  4. On the Amazon Machine Image entry, click Select.
  5. Click Continue.
  6. Select an instance size. The default is m5.xlarge. Splunk SOAR (On-premises) does not support instances smaller than t2.xlarge.
  7. Click Next: Configure Instance Details.
  8. For Number of Instances, type the number of instances you need. Total number of AMI instances = Number of cluster nodes + 1
  9. Configure the instance according to your organization's policies. See required ports for more information.

    Make sure to open the HTTPS port 9999 for your instances.

  10. Click Next: Add Storage.
  11. Add storage.

    You can increase disk size later, but you cannot decrease disk size.

  12. Click Next: Add Tags.
  13. Add tags to help identify your installation in your EC2 dashboard.
  14. Click Next: Configure Security Group.
  15. Configure Security Groups. By default, SSH, HTTP, and HTTPS are permitted from all IP addresses. Increase security by limiting access to your organization's IP addresses.
  16. Add the following ports for clustering:
    • 2049 - GlusterFS and NFS for NFS exports. Used by the nfsd process.
    • 4369 - RabbitMQ port mapper. All cluster nodes must be able to communicate with each other on this port.
    • 8300 - Consul RPC services. All cluster nodes must be able to communicate with each other on this port.
    • 8301 - Consul internode communication. All cluster nodes must be able to communicate with each other on this port.
    • 8302 - Consul internode communication. All cluster nodes must be able to communicate with each other on this port.
    • 25672 - RabbitMQ internode communications. All cluster nodes must be able to communicate with each other on this port.
  17. Click Review and Launch.
  18. Generate or choose SSH keys.
  19. Click Launch Instances. The installation typically takes 15 minutes to complete.

    In order to log in to the operating system of your AMI-based Splunk SOAR (On-premises) install using SSH, use the user account phantom. If you need root access, use sudo su - to elevate to root.
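For example, a typical login looks like the following (the key path and hostname are placeholders):

ssh -i <path/to/key.pem> phantom@<instance IP or DNS name>
sudo su -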

Install SSH keys

During the conversion to cluster nodes, each instance will need to SSH as the phantom user into the other nodes. Install the SSH private key (.pem file) you generated or chose when the instances were created.

Do this on each of the instances that you will convert to cluster nodes.

  1. Copy the .pem file generated earlier to each instance using SCP.
    scp -i <path/to/.pem> <path/to/.pem to transfer> phantom@<instance IP or DNS name>:~/
  2. SSH to an AMI-based instance as the phantom user.
  3. Move the .pem key to the phantom user's .ssh directory.
    mv <name of file>.pem .ssh
  4. Set the permissions on the .pem key.
    chmod 600 .ssh/<name of file>.pem
  5. Test that you are able to SSH from each instance to the others as the phantom user.
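For example, from one instance you can verify connectivity to another node with a command like this (the key file name and address are placeholders):

ssh -i ~/.ssh/<name of file>.pem phantom@<other instance IP or DNS name>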

Create a load balancer with Elastic Load Balancer (ELB)

Create a load balancer for your cluster. An Elastic Load Balancer will be used instead of HAProxy.

  1. Log in to your AWS EC2 account.
  2. From the menu on the EC2 dashboard, under the heading Load Balancing, choose Load Balancers.
  3. Click Create Load Balancer.
  4. Under Application Load Balancer, click Create.
  5. Type a name for your load balancer in the Name field.
  6. Select a Scheme. The scheme will depend on your AWS network configuration. Assuming your load balancer will route on an internal network, select the internal radio button.
  7. Set the IP address type. This will also depend on your AWS network configuration. In most cases, select ipv4 from the menu.
  8. Under Listeners, Load Balancer Protocol, select HTTPS from the menu. The Load Balancer Port changes to 443. You must also set listeners to use the custom HTTPS port 9999.
  9. Under Availability Zones, select the VPC and Availability Zones to match your AWS network configuration.
  10. Add Tags to help organize and identify your load balancer.
  11. Click Next: Configure Security Settings.
  12. Select or create a security group according to your organization's policies. These settings can vary based on factors outside the scope of this document.
  13. Click Next: Configure Routing.
  14. Under Target group, choose New target group from the menu.
  15. Type a name for your target group in the Name field.
  16. For Target type, select the Instance radio button.
  17. For Protocol, select HTTPS from the menu. Port changes to 443 automatically.

    The custom HTTPS port used by your Splunk SOAR (On-premises) cluster nodes must be accessible to the load balancer. For example, because the port you are using for HTTPS for the AMI Splunk SOAR (On-premises) cluster nodes is port 9999, you must also open port 9999 on the load balancer.

  18. Under Health checks, set Protocol to HTTPS.

    Health checks will fail until you have run the make_cluster_node scripts to add your Splunk SOAR (On-premises) instances to your cluster. This is normal and expected.

  19. In the Path field, type /check.
  20. Click Next: Register Targets.
  21. Under Instances, find and select the cluster node instances for your cluster. You do not need to load balance the external services, such as PostgreSQL, file shares, or Splunk Enterprise.
  22. Click Add to registered.
  23. Click Next: Review.
  24. Review for and correct any errors.
  25. Click Create.
  26. Select the load balancer by name.
  27. From the Actions menu, select Edit attributes.
  28. Set the Idle timeout to 120 seconds.
  29. Click Save.
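If you prefer to script these steps, the AWS CLI offers equivalent calls. The following is a minimal sketch; the names, subnets, security groups, instance IDs, and ARNs are placeholders, and you still need to create HTTPS listeners (port 443 and the custom port 9999) with your certificate as described above.

aws elbv2 create-load-balancer \
    --name soar-cluster-alb \
    --type application \
    --scheme internal \
    --subnets <subnet ID> <subnet ID> \
    --security-groups <security group ID>

aws elbv2 create-target-group \
    --name soar-cluster-targets \
    --protocol HTTPS --port 443 \
    --target-type instance \
    --vpc-id <VPC ID> \
    --health-check-protocol HTTPS \
    --health-check-path /check

aws elbv2 register-targets \
    --target-group-arn <target group ARN> \
    --targets Id=<instance ID> Id=<instance ID> Id=<instance ID>

aws elbv2 modify-load-balancer-attributes \
    --load-balancer-arn <load balancer ARN> \
    --attributes Key=idle_timeout.timeout_seconds,Value=120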

Create a Target Group for your cluster's websockets traffic

This target group will be used to route websockets traffic for the cluster. See Target Groups for Your Application Load Balancers in the AWS documentation.

  1. In the sidebar on the EC2 dashboard, under Load Balancing, select Target Groups.
  2. Click Create target group.
  3. Now create the websockets target group. In the Create target group dialog:
    1. Type a name in the Target group name field.
    2. Select the Instance radio button.
    3. Select HTTPS from the Protocol menu. Port will change to 443.

      You must also open Splunk SOAR (On-premises)'s custom HTTPS port 9999 to allow HTTPS traffic for unprivileged processes.

    4. Select the same VPC that your target instances are using from the menu.
    5. Under Health Check settings, select HTTPS from the Protocol menu.

      Health checks will fail until you have run the make_cluster_node scripts to add your Splunk SOAR (On-premises) instances to your cluster. This is normal and expected.

    6. In the Path field, type /check.
    7. Click Next: Register Targets.
    8. Under Instances, find and select the cluster node instances for your cluster. You do not need to load balance the external services, such as PostgreSQL, file shares, or Splunk Enterprise.
    9. Click Add to registered.
  4. Click Create.
  5. From the target groups list, select the target group you just created.
  6. On the Description tab, under Attributes, click Edit attributes.
  7. In the Edit attributes dialog:
    1. For Stickiness, select the Enable check box.
    2. Set Stickiness duration by typing 7 and choosing days from the menu.
    3. Click Save.

Setting the Stickiness duration is important so that websocket connections can persist. Always use the longest duration available; setting this value too low results in connections being closed prematurely. Where possible, also set idle_timeout.timeout_seconds for your Elastic Load Balancer to as high a value as your environment allows. See Application Load Balancers in the AWS documentation.
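If you manage the target group with the AWS CLI instead of the console, the stickiness settings described above correspond roughly to the following sketch (7 days = 604800 seconds; the target group ARN is a placeholder):

aws elbv2 modify-target-group-attributes \
    --target-group-arn <websockets target group ARN> \
    --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=lb_cookie Key=stickiness.lb_cookie.duration_seconds,Value=604800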

Add the routing rules to your load balancer

Here you create two rules to route traffic:

  • One rule routes persistent websocket connections to the websockets target group.
  • A second rule routes all other traffic to the other target group.
  1. From the menu on the EC2 dashboard, under Load Balancing, select Load Balancers.
  2. Select the load balancer you have created for your cluster.
  3. Click the Listeners tab.
  4. Under Rules, click the View/edit rules link.
  5. Click the + icon to add a new rule.
  6. Click the + Insert Rule link to edit the rule.
  7. Under IF (all match), click + Add condition.
  8. Select Path…, then type /websocket in the text box.
  9. Click the checkmark icon.
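The path rule described above can also be created with the AWS CLI. A minimal sketch, assuming placeholder ARNs and a trailing wildcard on the path pattern:

aws elbv2 create-rule \
    --listener-arn <HTTPS listener ARN> \
    --priority 1 \
    --conditions Field=path-pattern,Values='/websocket*' \
    --actions Type=forward,TargetGroupArn=<websockets target group ARN>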

Create the file stores with Elastic File System (EFS)

Create shared file stores for your cluster. Cluster nodes store files that must be shared by all instances on these shares. See System Requirements for more information.

Only instances in the VPC you select during EFS creation can connect to that file system.

  1. Under Configure file system access, select the desired VPC from the menu.
  2. Under Create mount targets, select the check boxes for the availability zones you need.
  3. Click Next Step.
  4. Set the security groups as required by your organization's policies.
  5. Under Configure optional settings, set options as required by your organization's requirements or policies.
  6. Click Next Step.
  7. Review the options selected, then click Create File System.
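The same file system can be created with the AWS CLI. A minimal sketch with placeholder IDs; create one mount target for each Availability Zone you selected:

aws efs create-file-system --creation-token soar-cluster-efs --encrypted

aws efs create-mount-target \
    --file-system-id <file system ID> \
    --subnet-id <subnet ID> \
    --security-groups <security group ID>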

Create the external PostgreSQL database with the Relational Database Service (RDS)

Splunk SOAR (On-premises) uses a PostgreSQL 11 database. In many installations, the database runs on the same server as Splunk SOAR (On-premises). For an AWS cluster, it makes sense to set up an external PostgreSQL database using RDS. This database serves as the primary database for the cluster.

You may use any release of PostgreSQL 11.x. See Upgrading for support.

  1. From your EC2 dashboard, click Services in the menu bar, and under Database choose RDS.
  2. Click Create database.
  3. Select Standard Create.
  4. Under Engine options, select PostgreSQL.
  5. For Version, select 11.11 from the menu. You may use any PostgreSQL 11.x release.
  6. For Templates, select either Production for production environments or Dev/Test for development/testing or Proof of Value environments.
  7. Under Settings, type a name for your DB instance identifier. Make sure that the name is unique across all DB instances owned by your AWS account.
  8. Under Credential Settings:
    1. Master username: postgres
    2. Make sure the Auto generate a password checkbox is not selected.
    3. Type and confirm the Master password in the fields provided. Record this password. You will need it later.
  9. Under DB instance size, select the radio button that matches your organization's needs.

    Warning: Instances below db.t2.large may deplete their available connections before installation of your Splunk SOAR (On-premises) cluster is complete.

  10. Under Storage, select a Storage type based on your organization's needs.
    1. For Allocated storage, set a number of GiB that matches your organization's needs.

      Databases with less than 500 gigabytes of storage are not supported for production use.

    2. Select the Enable storage autoscaling check box.
    3. Set Maximum storage threshold to 1000 (GiB).
  11. Under Availability & durability, select the Do not create a standby instance radio button.
  12. Under Connectivity, select the same VPC as you used for your instances.
  13. Under the Additional connectivity configuration section:
    1. Select the correct Subnet group. The available groups depend on your VPC selection.
    2. Under Publicly accessible, select the No radio button.
    3. Under VPC security group, select Choose existing.
    4. Select the appropriate security group from the menu.
    5. Click the X icon to remove any unwanted security groups that were added by default.
    6. Make sure the Database port is set to 5432.
  14. Under Additional configuration, Database options:
    1. Type phantom for Initial database name.
    2. Make sure the DB parameter group is set to default.postgres11.11. If you selected a different PostgreSQL version 11 earlier, set the parameter to match.
  15. Under Additional configuration, Backup, leave everything at the defaults.
  16. Click Create Database.
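If you script database creation with the AWS CLI, a minimal sketch that mirrors the console choices above looks like the following. The identifiers, password, security group, subnet group, instance class, and storage sizes are placeholders; adjust them to your environment.

aws rds create-db-instance \
    --db-instance-identifier <DB instance identifier> \
    --engine postgres \
    --engine-version 11.11 \
    --db-instance-class db.m5.large \
    --master-username postgres \
    --master-user-password '<master password>' \
    --db-name phantom \
    --allocated-storage 500 \
    --max-allocated-storage 1000 \
    --no-multi-az \
    --no-publicly-accessible \
    --port 5432 \
    --vpc-security-group-ids <security group ID> \
    --db-subnet-group-name <subnet group name>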

Create the pgbouncer user for the RDS

Splunk SOAR (On-premises) interacts with the PostgreSQL database using the pgbouncer user account. Create this account in the database you created in RDS.

  1. Log in to an AMI-based instance as the phantom user using SSH.
  2. Create the pgbouncer user.
    phenv psql --host <DNS name for RDS instance> --port 5432 --username postgres --echo-all --dbname phantom --command "CREATE ROLE pgbouncer WITH PASSWORD '<pgbouncer password>' login;"
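To confirm that the role was created and can authenticate, you can attempt a connection as pgbouncer. This is a quick check only; it prompts for the pgbouncer password.

phenv psql --host <DNS name for RDS instance> --port 5432 --username pgbouncer --dbname phantom --command "SELECT 1;"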

Add file shares to each Splunk SOAR (On-premises) instance

Set up and mount the needed directories for your cluster in three stages: first, install the required packages; second, create the required shared directories in EFS and copy over existing data; third, mount the directories on all instances and make the mounts permanent.

Stage one:

Do this stage on each of your AMI-based instances.

  1. Log in to an AMI-based instance as the phantom user using SSH.
  2. Elevate to root.
    sudo su -
  3. Install the package nfs-utils.
    yum install nfs-utils

Stage two:

Do this stage on only one of your AMI-based instances. You will create a temporary directory, mount it to EFS, then use it to copy existing files to EFS.

  1. Log in to an AMI-based instance as the phantom user using SSH.
  2. Elevate to root.
    sudo su -
  3. Create a local mount on this instance. This mount will be used to replicate the required directory structure on EFS.
    mkdir -p /mnt/external
  4. Mount this directory from EFS.
    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/ /mnt/external
  5. Now copy the instance's files to EFS with rsync.
    rsync -avz /<PHANTOM_HOME>/apps /mnt/external/
    rsync -avz /<PHANTOM_HOME>/local_data/app_states /mnt/external/
    rsync -avz /<PHANTOM_HOME>/scm /mnt/external/
    rsync -avz /<PHANTOM_HOME>/tmp/shared /mnt/external/
    rsync -avz /<PHANTOM_HOME>/vault /mnt/external/
  6. Unmount the temporary mounting.
    umount /mnt/external

Stage three:

Do this stage on each of your AMI-based instances. Set the mounts for the shared directories to EFS, then update the file system table to make the directories mount from EFS when the instance starts.

  1. Log in to an AMI-based instance as the phantom user using SSH.
  2. Elevate to root.
    sudo su -
  3. Mount all the shared directories to EFS.
    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/apps /<PHANTOM_HOME>/apps/

    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/app_states /<PHANTOM_HOME>/local_data/app_states

    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/scm /<PHANTOM_HOME>/scm

    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/shared /<PHANTOM_HOME>/tmp/shared

    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/vault /<PHANTOM_HOME>/vault
  4. Edit the file system table /etc/fstab to make the mounts permanent. Add these entries. You can get the IP address or DNS name for EFS from your EFS dashboard.
    vi /etc/fstab
    <ip address or DNS name for EFS>:/apps /<PHANTOM_HOME>/apps nfs4 defaults,_netdev 0 0
    <ip address or DNS name for EFS>:/app_states /<PHANTOM_HOME>/local_data/app_states nfs4 defaults,_netdev 0 0
    <ip address or DNS name for EFS>:/scm /<PHANTOM_HOME>/scm nfs4 defaults,_netdev 0 0
    <ip address or DNS name for EFS>:/shared /<PHANTOM_HOME>/tmp/shared nfs4 defaults,_netdev 0 0
    <ip address or DNS name for EFS>:/vault /<PHANTOM_HOME>/vault nfs4 defaults,_netdev 0 0

Convert an AMI-based Splunk SOAR (On-premises) instance into the Splunk Enterprise instance

A cluster requires either a Splunk Enterprise instance or a distributed Splunk Enterprise deployment as its search endpoint. Convert one of your AMI-based instances into the required Splunk Enterprise endpoint.

If you already have a Splunk Enterprise deployment that you will use instead, follow the instructions for using an external Splunk Enterprise instance. See Set up Splunk Enterprise.

Convert one of your instances into the Splunk Enterprise instance:

  1. Log in to an AMI-based instance as the phantom user using SSH.
  2. Elevate to root.
    sudo su -
  3. Run the make_server_node.pyc script with the splunk argument.
    <PHANTOM_HOME>/bin/phenv python <PHANTOM_HOME>/bin/make_server_node.pyc splunk

The Splunk Enterprise configuration is written to: <PHANTOM_HOME>/bin/splunk_config.json

Logs are written to: <PHANTOM_HOME>/var/log/phantom/make_server_node/make_server_node_<date and time>.log

Test each Splunk SOAR (On-premises) instance for readiness

Before proceeding, test each instance to make sure it is ready for conversion to a cluster node. Log in to each AMI-based instance that will become a cluster node, verify that the EFS file shares are mounted, and fix any errors.

Make sure that each instance has the EFS file shares mounted.

sudo df -T

You must see entries for the shared directories apps, app_states, scm, shared, and vault, each mounted from your EFS file system's DNS name (<EFS DNS name>:/<directory>).
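For example, to show only the NFS mounts (the grep filter is just a convenience):

sudo df -T | grep nfs4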

Convert the first AMI-based instance into a cluster node

Convert the first instance to a cluster node.

Converting an AMI-based installation to a server or cluster node is a one-way operation. It cannot be reverted.

You will need this information readily available:

  • IP or hostname for the RDS Postgres 11.6 or later DB server
  • Password for the postgres user
  • Password for the pgbouncer user
  • IP or hostname of the ELB load balancer
  • Username for SSH
  • Path to the key file for SSH
  • IP or hostname of the Splunk Enterprise instance
  • REST API port for Splunk Enterprise: 5122
  • User name for Search: phantomsearch
  • Password for the phantomsearch account
  • User name for Search: phantomdelete
  • Password for the phantomdelete account
  • HTTP Event Collector Token
  • HTTP Event Collector port: 5121

The information for the Splunk Enterprise instance can be found in the file <PHANTOM_HOME>/bin/splunk_config.json on your Splunk Enterprise instance.
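For example, to review these values on the Splunk Enterprise instance:

cat <PHANTOM_HOME>/bin/splunk_config.json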

Make a note of the AWS instance ID of this instance. You will need it later to log in to your Splunk SOAR (On-premises) cluster.

Run make_cluster_node.pyc

  1. SSH to the AMI-based instance. Log in with the phantom user account.
  2. Run the make_cluster_node.pyc script with the --record argument.
    phenv python <PHANTOM_HOME>/bin/make_cluster_node.pyc --record

The response file is written to: <PHANTOM_HOME>/bin/response.json

The log is written to: <PHANTOM_HOME>/var/log/phantom/make_cluster_node/make_cluster_node_<date and time>.log

The response file can be used with the make_cluster_node.pyc script on other nodes to automatically provide the information the script needs.

Convert the remaining AMI-based Splunk SOAR (On-premises) instances into cluster nodes

Convert each of the remaining AMI-based instances into cluster nodes by running the make_cluster_node.pyc script.

Run make_cluster_node.pyc

  1. SSH to the AMI-based instance. Log in with the phantom user account.
  2. Run the make_cluster_node.pyc script with the --responses argument.
    <PHANTOM_HOME>/bin/phenv python <PHANTOM_HOME>/bin/make_cluster_node.pyc --responses <PHANTOM_HOME>/bin/response.json

You don't have to use the response file. If you do not supply one, the script prompts you for the information it needs. The response file contains secrets such as usernames and passwords in plain text, so store it in a secure location or delete it after the cluster configuration is complete.

Log in to the Splunk SOAR (On-premises) web interface

Connect to the web interface of your newly installed cluster.

The cluster's initial password is the full AWS instance ID of the first Splunk SOAR (On-premises) instance where you ran the make_cluster_node script.

  1. Get the public IP address or DNS name for the elastic load balancer from the EC2 Management Console.
  2. Get the full AWS instance ID for the EC2 instance.
  3. Using a browser, go to the public IP address or DNS name for the elastic load balancer.
    1. User name: admin
    2. Password: <Full AWS instance ID>
  4. Change the admin user's password:
    1. From the User Name menu, select Account Settings.
    2. From the second level of the menu bar, select Change Password.
    3. Type the current password.
    4. Type a new password.
    5. Type a new password a second time to confirm.
    6. Click Change Password.