Create a cluster in Amazon Web Services
Build a cluster from AMI-based instances of Splunk SOAR (On-premises), building several of the required services using AWS native components: Elastic Load Balancer (ELB), Elastic File System (EFS), and Relational Database Service (RDS).
This configuration is built using the AWS Marketplace image of Splunk SOAR (On-premises). This release is an unprivileged version of Splunk SOAR (On-premises), which runs under the user account phantom.
Converting an AMI-based installation to a server or cluster node is a one-way operation. It cannot be reverted.
Build a cluster with AWS services
Number | Task | Description |
---|---|---|
1 | Launch and prepare AMI instances of Splunk SOAR (On-premises). | Total number of AMI instances = Number of cluster nodes + 1 |
2 | Create a load balancer with Elastic Load Balancer (ELB). | See Create a load balancer with Elastic Load Balancer (ELB). |
3 | Create the file stores with Elastic File System (EFS). | Create the EFS file store for shared files. |
4 | Create the external database with Relational Database Service (RDS). | See Create the external PostgreSQL database with Relational Database Service (RDS). |
5 | Add the file shares to each instance. | Mount the file shares on each instance. |
6 | Convert an AMI-based instance into the Splunk Enterprise instance. | Convert one of the instances into the Splunk Enterprise instance. This instance serves as the external search endpoint for the entire cluster. Use the make_server_node.pyc script with the splunk argument. See Convert an AMI-based Splunk SOAR (On-premises) instance into the Splunk Enterprise instance. |
7 | Convert the first AMI-based instance into a cluster node. | Convert the first instance into a cluster node. Use the script's --record option to save all of the make_cluster_node.pyc answers to a file for reuse on each of your other nodes. See Convert the first AMI-based instance into a cluster node. |
8 | Convert the remaining AMI-based instances into cluster nodes. | Convert the remaining instances into cluster nodes using make_cluster_node.pyc and the recorded responses file. See Convert the remaining AMI-based Splunk SOAR (On-premises) instances into cluster nodes. |
Launch and prepare AMI-based instances of Splunk SOAR (On-premises)
You need a number of AMI-based instances equal to the number of nodes you want in your cluster plus one. The additional instance will be converted into the externalized Splunk Enterprise instance for your cluster. A cluster requires a minimum of three nodes.
Total number of AMI instances = Number of cluster nodes + 1
If you need your cluster to be FIPS compliant, you must set the operating system to FIPS mode. For more information, see Clustering and external services in the topic FIPS compliance in Install and Upgrade Splunk SOAR (On-premises).
If you already have a Splunk Enterprise deployment that you will use instead, follow the instructions for using an external Splunk Enterprise instance. See Set up Splunk Enterprise.
Amazon Elastic Compute Cloud (EC2) instances created using the AMI for Splunk SOAR On-premises automatically generate a unique system GUID on first boot. AMIs or virtual machine (VM) templates created through other means might not perform this operation, which causes problems when creating clusters.
If you've created your own AMI or VM template, or otherwise cloned EC2 instances or VMs in order to create a cluster, you might need to first regenerate the system GUIDs on your nodes by running:
phenv python ${PHANTOM_HOME}/bin/initialize.py --first-initialize --force
Installation
- Log in to your AWS EC2 account.
- From your EC2 dashboard, select Launch Instance.
- In the AWS Marketplace, search for Splunk SOAR (On-premises).
- On the Amazon Machine Image entry, click Select.
- Click Continue.
- Select an instance size. The default is m5.xlarge. Splunk SOAR (On-premises) does not support instances smaller than t2.xlarge.
- Click Next: Configure Instance Details.
- For Number of Instances, type the number of instances you need. Total number of AMI instances = Number of cluster nodes + 1
- Configure the instance according to your organization's policies. See required ports for more information.
Make sure to open the HTTPS port 9999 for your instances.
- Click Next: Add Storage.
- Add storage.
You can increase disk size later, but you cannot decrease disk size.
- Click Next: Add Tags.
- Add tags to help identify your installation in your EC2 dashboard.
- Click Next: Configure Security Group.
- Configure Security Groups. By default, SSH, HTTP, and HTTPS are permitted from all IP addresses. Increase security by limiting access to your organization's IP addresses.
- Add the following ports for clustering:
- 2049 - GlusterFS and NFS for NFS exports. Used by the nfsd process.
- 4369 - RabbitMQ port mapper. All cluster nodes must be able to communicate with each other on this port.
- 8300 - Consul RPC services. All cluster nodes must be able to communicate with each other on this port.
- 8301 - Consul internode communication. All cluster nodes must be able to communicate with each other on this port.
- 8302 - Consul internode communication. All cluster nodes must be able to communicate with each other on this port.
- 25672 - RabbitMQ internode communications. All cluster nodes must be able to communicate with each other on this port.
- Click Review and Launch.
- Generate or choose SSH keys.
- Click Launch Instances. The installation typically takes 15 minutes to complete.
To log in to the operating system of your AMI-based Splunk SOAR (On-premises) install using SSH, use the user account phantom. If you need root access, use
sudo su -
to elevate to root.
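The clustering ports listed above can also be opened from the AWS CLI. The sketch below only prints the `aws ec2 authorize-security-group-ingress` calls for each port; the security group ID is a placeholder, and the leading `echo` keeps it a dry run.

```shell
SG_ID="sg-0123456789abcdef0"   # placeholder; use your cluster security group ID
CLUSTER_PORTS="2049 4369 8300 8301 8302 25672"
for port in $CLUSTER_PORTS; do
  # echo keeps this a dry run; remove it to apply the rules for real
  echo aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp \
    --port "$port" --source-group "$SG_ID"
done
```

Using the security group itself as `--source-group` restricts these ports to traffic from other cluster members, which matches the node-to-node requirement above.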
Install SSH keys
During the conversion to cluster nodes, each instance will need to SSH as the phantom user into other nodes. Install the client certificate you generated for SSH when the instances were created.
Do this on each of the instances that you will convert to cluster nodes.
- Copy the .pem file generated earlier to each instance using SCP:
scp -i <path/to/.pem> <path/to/.pem to transfer> phantom@<instance IP or DNS name>:~/
- SSH to an AMI-based instance as the phantom user.
- Move the .pem key to the phantom user's .ssh directory:
mv <name of file>.pem .ssh
- Set the permissions on the .pem key:
chmod 600 .ssh/<name of file>.pem
- Test that you are able to SSH from each instance to the others as the phantom user.
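The per-node key setup can be condensed as follows. The key file name is hypothetical, and the `touch` stands in for the `scp` transfer so the permissions step can be shown end to end.

```shell
SSH_DIR="$HOME/.ssh"
KEY="cluster-key.pem"            # hypothetical name; use your generated key
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/$KEY"            # stand-in for the .pem copied over with scp
chmod 600 "$SSH_DIR/$KEY"        # SSH refuses private keys with looser permissions
ls -l "$SSH_DIR/$KEY"
# Then verify node-to-node access, for example:
# ssh -i "$SSH_DIR/$KEY" phantom@<other instance> hostname
```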
Create a load balancer with Elastic Load Balancer (ELB)
Create a load balancer for your cluster. An Elastic Load Balancer will be used instead of HAProxy.
- Log in to your AWS EC2 account.
- From the menu on the EC2 dashboard, under the heading Load Balancing, choose Load Balancers.
- Click Create Load Balancer.
- Under Application Load Balancer, click Create.
- Type a name for your load balancer in the Name field.
- Select a Scheme. The scheme will depend on your AWS network configuration. Assuming your load balancer will route on an internal network, select the internal radio button.
- Set the IP address type. This will also depend on your AWS network configuration. In most cases, select ipv4 from the menu.
- Under Listeners, Load Balancer Protocol, select HTTPS from the menu. The Load Balancer Port changes to 443. You must also set listeners to use the custom HTTPS port 9999.
- Under Availability Zones, select the VPC and Availability Zones to match your AWS network configuration.
- Add Tags to help organize and identify your load balancer.
- Click Next: Configure Security Settings.
- Select or create a security group according to your organization's policies. These settings can vary based on factors outside the scope of this document.
- Click Next: Configure Routing.
- Under Target group, choose New target group from the menu.
- Type a name for your target group in the Name field.
- For Target type, select the Instance radio button.
- For Protocol, select HTTPS from the menu. Port changes to 443 automatically.
The custom HTTPS port used by your Splunk SOAR (On-premises) cluster nodes must be accessible to the load balancer. For example, because the port you are using for HTTPS for the AMI Splunk SOAR (On-premises) cluster nodes is port 9999, you must also open port 9999 on the load balancer.
- Under Health checks, set Protocol to HTTPS.
Health checks will fail until you have run the make_cluster_node scripts to add your Splunk SOAR (On-premises) instances to your cluster. This is normal and expected.
- In the Path field, type /check.
- Click Next: Register Targets.
- Under Instances, find and select the cluster node instances for your cluster. You do not need to load balance the external services, such as PostgreSQL, file shares, or Splunk Enterprise.
- Click Add to registered.
- Click Next: Review.
- Review for and correct any errors.
- Click Create.
- Select the load balancer by name.
- From the Actions menu, select Edit attributes.
- Set the Idle timeout to 120 seconds.
- Click Save.
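Once nodes have joined the cluster, you can reproduce the load balancer's /check probe by hand. This sketch only builds and prints the URL; the node address is a placeholder, and the commented `curl` line shows the actual probe.

```shell
NODE="10.0.1.25"                      # placeholder node IP or DNS name
URL="https://${NODE}:9999/check"      # port 9999 is the AMI build's HTTPS port
echo "Probing $URL"
# -k skips validation of the default self-signed certificate; expect HTTP 200
# curl -sk -o /dev/null -w '%{http_code}\n' "$URL"
```

Remember that health checks fail until the make_cluster_node scripts have been run, so a non-200 response before that point is expected.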
Create a Target Group for your cluster's websockets traffic
This target group is used to route websockets traffic for the cluster. See Target Groups for Your Application Load Balancers in the AWS documentation.
- In the sidebar on the EC2 dashboard, under Load Balancing, select Target Groups.
- Click Create target group.
- Now create the websockets target group. In the Create target group dialog:
- Type a name in the Target group name field.
- Select the Instance radio button.
- Select HTTPS from the Protocol menu. Port will change to 443.
You must also open port 9999, the custom HTTPS port used by the AMI build of Splunk SOAR (On-premises), to allow HTTPS traffic for unprivileged processes.
- Select the same VPC that your target instances are using from the menu.
- Under Health Check settings, select HTTPS from the Protocol menu.
Health checks will fail until you have run the make_cluster_node scripts to add your Splunk SOAR (On-premises) instances to your cluster. This is normal and expected.
- In the Path field, type /check.
- Click Next: Register Targets.
- Under Instances, find and select the cluster node instances for your cluster. You do not need to load balance the external services, such as PostgreSQL, file shares, or Splunk Enterprise.
- Click Add to registered.
- Click Create.
- From the target groups list, select the target group you just created.
- On the Description tab, under Attributes, click Edit attributes.
- In the Edit attributes dialog:
- For Stickiness, select the Enable check box.
- Set Stickiness duration by typing 7 and choosing days from the menu.
- Click Save.
Setting the Stickiness duration is important so that websockets can persist. Always use the longest duration available; setting this value too low results in connections being closed prematurely. Wherever possible, set idle_timeout.timeout_seconds to as high a value as possible for your Elastic Load Balancer. See Application Load Balancers in the AWS documentation.
Add the routing rules to your load balancer
Here you create rules to route traffic.
- One rule to route all the persistent connections to the websockets listener.
- A second rule to route all other traffic to the other listener.
- From the EC2 dashboard menu, under Load Balancing, select Load Balancers.
- Select the load balancer you have created for your cluster.
- Click the Listeners tab.
- Under Rules, click the View/edit rules link.
- Click the + icon to add a new rule.
- Click the + Insert Rule link to edit the rule.
- Under IF (all match), click + Add condition.
- Select Path…, then type /websocket in the text box.
- Click the checkmark icon.
Create the file stores with Elastic File System (EFS)
Create shared file stores for your cluster. Cluster nodes store files that must be shared by all instances on these file shares. See System Requirements for more information.
Only instances in the VPC you select during EFS creation can connect to that file system.
- Under Configure file system access, select the desired VPC from the menu.
- Under Create mount targets, select the check boxes for the availability zones you need.
- Click Next Step.
- Set the security groups as required by your organization's policies.
- Under Configure optional settings, set options as required by your organization's requirements or policies.
- Click Next Step.
- Review the options selected, then click Create File System.
Create the external PostgreSQL database with Relational Database Service (RDS)
Splunk SOAR (On-premises) uses a PostgreSQL 11 database. In many installations, the database runs on the same server as Splunk SOAR (On-premises). For an AWS cluster, set up an external PostgreSQL database using RDS. This database serves as the primary database for the cluster.
You can use any release of PostgreSQL 11.x. See Upgrading for support.
- From your EC2 dashboard, click Services in the menu bar, and under Database choose RDS.
- Click Create database.
- Select Standard Create.
- Under Engine options, select PostgreSQL.
- For Version, select 11.11 from the menu. You may use any PostgreSQL 11.x release.
- For Templates, select either Production for production environments or Dev/Test for development/testing or Proof of Value environments.
- Under Settings, type a name for your DB instance identifier. Make sure that the name is unique across all DB instances owned by your AWS account.
- Under Credential Settings:
- Master username: postgres
- Make sure the Auto generate a password checkbox is not selected.
- Type and confirm the Master password in the fields provided. Record this password. You will need it later.
- Under DB instance size, select the radio button that matches your organization's needs.
Warning: Instances below db.t2.large may deplete their available connections before installation of your Splunk SOAR (On-premises) cluster is complete.
- Under Storage, select a Storage type based on your organization's needs.
- For Allocated storage, set a number of GiB that matches your organization's needs.
Databases with less than 500 gigabytes of storage are not supported for production use.
- Select the Enable storage autoscaling check box.
- Set Maximum storage threshold to 1000 (GiB).
- Under Availability & durability, select the Do not create a standby instance radio button.
- Under Connectivity, select the same VPC as you used for your instances.
- Under the Additional connectivity configuration section:
- Select the correct Subnet group. The available groups depend on your VPC selection.
- Under Publicly accessible, select the No radio button.
- Under VPC security group, select Choose existing.
- Select the appropriate security group from the menu.
- Click the X icon to remove any unwanted security groups that were added by default.
- Make sure the Database port is set to 5432.
- Under Additional configuration, Database options:
- Type phantom for Initial database name.
- Make sure the DB parameter group is set to default.postgres11. If you selected a different PostgreSQL major version earlier, set the parameter group to match.
- Under Additional configuration, Backup, leave everything at the defaults.
- Click Create Database.
Create the pgbouncer user for the RDS
Splunk SOAR (On-premises) interacts with the PostgreSQL database using the pgbouncer user account. Create this account on the database you created in RDS.
- Log in to an AMI-based instance as the phantom user using SSH.
- Create the pgbouncer user:
phenv psql --host <DNS name for RDS instance> --port 5432 --username postgres --echo-all --dbname phantom --command "CREATE ROLE pgbouncer WITH PASSWORD '<pgbouncer password>' login;"
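As a sketch, the SQL from step 2 can be built separately and reviewed before it is sent to RDS. The password and RDS endpoint below are placeholders.

```shell
PGB_PASS="example-password"                        # placeholder; use a strong secret
RDS_HOST="phantom-db.example.rds.amazonaws.com"    # placeholder RDS endpoint
SQL="CREATE ROLE pgbouncer WITH PASSWORD '${PGB_PASS}' login;"
echo "$SQL"
# phenv psql --host "$RDS_HOST" --port 5432 --username postgres \
#   --echo-all --dbname phantom --command "$SQL"
```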
Add the file shares to each instance
Set up and mount the needed directories for your cluster in three stages: first, install the required packages; second, create the required shared directories in EFS and copy over existing data; third, mount the directories on all instances and make the mounts permanent.
Stage one:
Do this stage on each of your AMI-based instances.
- Log in to an AMI-based instance as the phantom user using SSH.
- Elevate to root:
sudo su -
- Install the nfs-utils package:
yum install nfs-utils
Stage two:
Do this stage on only one of your AMI-based instances. You will create a temporary directory, mount it to EFS, then use it to copy existing files to EFS.
- Log in to an AMI-based instance as the phantom user using SSH.
- Elevate to root:
sudo su -
- Create a local mount point on this instance. This mount is used to replicate the required directory structure on EFS:
mkdir -p /mnt/external
- Mount this directory from EFS:
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/ /mnt/external
- Copy the instance's files to EFS with rsync:
rsync -avz /<PHANTOM_HOME>/apps /mnt/external/
rsync -avz /<PHANTOM_HOME>/local_data/app_states /mnt/external/
rsync -avz /<PHANTOM_HOME>/scm /mnt/external/
rsync -avz /<PHANTOM_HOME>/tmp/shared /mnt/external/
rsync -avz /<PHANTOM_HOME>/vault /mnt/external/
- Unmount the temporary mount:
umount /mnt/external
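The five rsync calls in stage two can be collapsed into a loop. PHANTOM_HOME below is an assumed install path, and the loop only collects the commands as a dry run so they can be reviewed before execution.

```shell
PHANTOM_HOME="${PHANTOM_HOME:-/opt/phantom}"   # assumed install path
cmds=""
for src in apps local_data/app_states scm tmp/shared vault; do
  # Each source directory lands under /mnt/external/<last path component>
  cmds="${cmds}rsync -avz $PHANTOM_HOME/$src /mnt/external/
"
done
printf '%s' "$cmds"    # review, then run each line as root
```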
Stage three:
Do this stage on each of your AMI-based instances. Set the mounts for the shared directories to EFS, then update the file system table to make the directories mount from EFS when the instance starts.
- Log in to an AMI-based instance as the phantom user using SSH.
- Elevate to root:
sudo su -
- Mount all the shared directories from EFS:
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/apps /<PHANTOM_HOME>/apps/
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/app_states /<PHANTOM_HOME>/local_data/app_states
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/scm /<PHANTOM_HOME>/scm
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/shared /<PHANTOM_HOME>/tmp/shared
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/vault /<PHANTOM_HOME>/vault
- Edit the file system table /etc/fstab to make the mounts permanent. Add these entries. You can get the IP address or DNS name for EFS from your EFS dashboard.
vi /etc/fstab
<ip address or DNS name for EFS>:/apps /<PHANTOM_HOME>/apps nfs4 defaults,_netdev 0 0
<ip address or DNS name for EFS>:/app_states /<PHANTOM_HOME>/local_data/app_states nfs4 defaults,_netdev 0 0
<ip address or DNS name for EFS>:/scm /<PHANTOM_HOME>/scm nfs4 defaults,_netdev 0 0
<ip address or DNS name for EFS>:/shared /<PHANTOM_HOME>/tmp/shared nfs4 defaults,_netdev 0 0
<ip address or DNS name for EFS>:/vault /<PHANTOM_HOME>/vault nfs4 defaults,_netdev 0 0
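After editing /etc/fstab, a quick grep can confirm that all five EFS entries are present. The pattern matches the mount options used in the entries above.

```shell
# Count the EFS entries in the file system table; expect 5 on a prepared node.
count=$(grep -c 'nfs4 defaults,_netdev' /etc/fstab 2>/dev/null)
count=${count:-0}
echo "Found $count EFS entries in /etc/fstab (expected 5)"
```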
Convert an AMI-based Splunk SOAR (On-premises) instance into the Splunk Enterprise instance
A cluster requires either a Splunk Enterprise instance or a distributed Splunk Enterprise deployment as its search endpoint. Convert one of your AMI-based instances into the required Splunk Enterprise endpoint.
If you already have a Splunk Enterprise deployment that you will use instead, follow the instructions for using an external Splunk Enterprise instance. See Set up Splunk Enterprise.
To convert an instance into the Splunk Enterprise instance:
- Log in to an AMI-based instance as the phantom user using SSH.
- Elevate to root:
sudo su -
- Run the make_server_node.pyc script with the splunk argument:
<PHANTOM_HOME>/bin/phenv python <PHANTOM_HOME>/bin/make_server_node.pyc splunk
The Splunk Enterprise configuration is written to <PHANTOM_HOME>/bin/splunk_config.json. Logs are written to <PHANTOM_HOME>/var/log/phantom/make_server_node/make_server_node_<date and time>.log.
Test each Splunk SOAR (On-premises) instance for readiness
Before proceeding, test each instance to make sure it is ready for conversion to a cluster node. Log in to each AMI-based instance that will become a cluster node, verify that the EFS file shares are mounted, and fix any errors.
Make sure that each instance has the EFS file shares mounted. The file system table must contain entries referencing the EFS file system (<ip address or DNS name for EFS>:/) for the directories apps, app_states, scm, shared, and vault.
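The readiness check can be sketched per node with findmnt, assuming PHANTOM_HOME is /opt/phantom. It reports OK or MISSING for each of the five shared directories; on a machine without the mounts every directory reports MISSING, which is exactly the failure you are checking for.

```shell
PHANTOM_HOME="${PHANTOM_HOME:-/opt/phantom}"   # assumed install path
status=""
for dir in apps local_data/app_states scm tmp/shared vault; do
  if findmnt -n -t nfs4 "$PHANTOM_HOME/$dir" >/dev/null 2>&1; then
    status="$status OK:$dir"
  else
    status="$status MISSING:$dir"
  fi
done
echo "$status"
```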
Convert the first AMI-Based instance into a cluster node
Convert the first instance to a cluster node.
Converting an AMI-based installation to a server or cluster node is a one-way operation. It cannot be reverted.
You will need this information readily available:
- IP or hostname for the RDS Postgres 11.6 or later DB server
- Password for the postgres user
- Password for the pgbouncer user
- IP or hostname of the ELB load balancer
- Username for SSH
- Path to the key file for SSH
- IP or hostname of the Splunk Enterprise instance
- REST API port for Splunk Enterprise: 5122
- User name for Search: phantomsearch
- Password for the phantomsearch account
- User name for Search: phantomdelete
- Password for the phantomdelete account
- HTTP Event Collector Token
- HTTP Event Collector port: 5121
The information for the Splunk Enterprise instance can be found in the file <PHANTOM_HOME>/bin/splunk_config.json on your Splunk Enterprise instance.
Make a note of the AWS instance ID of this instance. You need it later to log in to your Splunk SOAR (On-premises) cluster.
Run make_cluster_node.pyc
- SSH to the AMI-based instance. Log in with the phantom user account.
- Run the make_cluster_node.pyc script with the --record argument:
phenv python <PHANTOM_HOME>/bin/make_cluster_node.pyc --record
The response file is written to <PHANTOM_HOME>/bin/response.json. The log is written to <PHANTOM_HOME>/var/log/phantom/make_cluster_node/make_cluster_node_<date and time>.log.
The response file can be used with the make_cluster_node.pyc script on other nodes to automatically provide the information the script needs.
Convert the remaining AMI-based Splunk SOAR (On-premises) instances into cluster nodes
Convert each of the remaining AMI-based instances into cluster nodes by running the make_cluster_node.pyc script.
Run make_cluster_node.pyc
- SSH to the AMI-based instance. Log in with the phantom user account.
- Run the make_cluster_node.pyc script with the --responses argument:
<PHANTOM_HOME>/bin/phenv python <PHANTOM_HOME>/bin/make_cluster_node.pyc --responses <PHANTOM_HOME>/bin/response.json
You don't have to use a responses file. If you do not supply a JSON file, the script prompts you for the information it needs. The responses file contains secrets such as usernames and passwords in plain text. Store it in a secure location or delete it after the cluster configuration is complete.
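Because the responses file holds credentials in plain text, one way to dispose of it after the last node joins is sketched below. The path follows the location given above; shred is not available on every system, so rm serves as the fallback.

```shell
PHANTOM_HOME="${PHANTOM_HOME:-/opt/phantom}"   # assumed install path
RESP="$PHANTOM_HOME/bin/response.json"
if [ -f "$RESP" ]; then
  # Overwrite before unlinking where shred is available
  shred -u "$RESP" 2>/dev/null || rm -f "$RESP"
  echo "responses file removed"
else
  echo "no responses file found at $RESP"
fi
```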
Log in to the Splunk SOAR (On-premises) web interface
Connect to the web interface of your newly installed cluster. The cluster's initial password is the full AWS instance ID of the first Splunk SOAR (On-premises) instance where the make_cluster_node script was run.
- Get the public IP address or DNS name for the elastic load balancer from the EC2 Management Console.
- Get the full AWS instance ID for the EC2 instance.
- Using a browser, go to the public IP address or DNS name for the elastic load balancer.
- User name: admin
- Password: <Full AWS instance ID>
- Change the admin user's password:
- From the User Name menu, select Account Settings.
- From the second level of the menu bar, select Change Password.
- Type the current password.
- Type a new password.
- Type a new password a second time to confirm.
- Click Change Password.
This documentation applies to the following versions of Splunk® SOAR (On-premises): 6.0.2, 6.1.0, 6.1.1