After the future removal of the classic playbook editor, your existing classic playbooks will continue to run. However, you will no longer be able to visualize or modify them.
Create a warm standby
You will need two identical instances of Splunk SOAR (On-premises): one to serve as your primary instance, and the second to serve as the warm standby.
Do these steps to create your warm standby.
- Complete the prerequisites.
- Create a second instance to be the warm standby.
- Set up SSH access between the primary instance and the new warm standby.
- Configure warm standby using the setup_warm_standby.pyc script.
Creating a warm standby will restart Splunk SOAR (On-premises). You should schedule setting up warm standby for a change window or other scheduled downtime.
Prerequisites
Complete these tasks before you set up warm standby.
- Create a full backup or a virtual machine snapshot of the instance that will be your primary.
- Create a DNS A record for a hostname for your instance. You may need to work with other teams who manage DNS to accomplish this. Establish an appropriate Time To Live (TTL) value for this record since you will update the DNS A record in the event of a failover.
- Set the Base URL for Appliance with the hostname from the DNS A record in Main Menu > Administration > Company Settings. Example: https://phantom.example.com
- Open the following ports on the primary instance's firewall: TCP 22 (SSH), TCP 443 (HTTPS), and TCP 5432 (PostgreSQL operations).
- Set up SSH between the primary instance and the warm standby.
- Review Manage warm standby features and options for any additional options you might want to use.
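As a quick check of the DNS and firewall prerequisites above, you can verify the A record and open the required ports in one pass. This is a sketch, not part of the official procedure: it assumes a firewalld setup using the public zone and uses soar.example.com as a placeholder hostname, so adjust both for your environment.

```shell
# Confirm the A record resolves and note its TTL (first field of the answer).
dig +noall +answer A soar.example.com

# Open SSH, HTTPS, and PostgreSQL on the primary's firewall.
# The zone name is an assumption; list yours with:
#   sudo firewall-cmd --get-active-zones
sudo firewall-cmd --zone=public --add-port=22/tcp --add-port=443/tcp --add-port=5432/tcp

# Carry the runtime rules over to the permanent configuration.
sudo firewall-cmd --runtime-to-permanent
```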
Create a second instance to be the warm standby
You can either:
- clone the virtual machine that is your primary instance, or
- create an entirely new instance of Splunk SOAR (On-premises) to serve as the warm standby.
Create a clone of your primary instance
You can create a clone of your primary instance. This clone will serve as the warm standby.
Consult the documentation for your virtualization software or operating system for how to clone and deploy the cloned instance of Splunk SOAR (On-premises).
Your clone will need to have its own IP and MAC addresses.
Before you clone the Splunk SOAR (On-premises) instance, check whether it is already part of a warm standby pair. If it is, disable warm standby before cloning the instance. See Disable warm standby.
- Clone your instance as described by your virtualization or operating system documentation.
- Change the MAC and IP addresses for the new clone copy of Splunk SOAR (On-premises).
- On both the clone copy and the primary instance of Splunk SOAR (On-premises), set a password for the phantom user account. This password will be used later during configuration. passwd phantom
- On the clone of Splunk SOAR (On-premises), disable cron to prevent any jobs from making changes during setup and configuration. sudo systemctl stop crond.service
- On the clone of Splunk SOAR (On-premises), make sure that the PostgreSQL port, 5432, is allowed through your firewalls.
- Check your firewall rules. sudo firewall-cmd --list-all
- (Conditional) If port 5432 is not permitted through the firewall, add a firewall rule for it. sudo firewall-cmd --zone=public --add-port=5432/tcp
- (Conditional) If you needed to add port 5432 to your firewalld configuration, make the entry from the previous step permanent. sudo firewall-cmd --zone=public --add-port=5432/tcp --permanent
- Check your firewall rules again to confirm that port 5432 is listed. sudo firewall-cmd --list-all
Create a new instance
If using a clone of your primary instance is not feasible or is otherwise unwanted, you can install a new instance of Splunk SOAR (On-premises) to serve as your warm standby.
Do these steps as the phantom user.
- Install Splunk SOAR (On-premises). See How can Splunk SOAR (On-premises) be installed? in Install and Upgrade Splunk SOAR (On-premises).
- SSH to your warm standby instance. ssh <username>@<warm_standby_phantom_hostname>
- Stop services on the standby. sudo /<$PHANTOM_HOME>/bin/stop_phantom.sh
- Copy these files from the primary instance of Splunk SOAR (On-premises) to the new warm standby instance.
- /<$PHANTOM_HOME>/keystore/private_key.pem
- /<$PHANTOM_HOME>/www/phantom_ui/secret_key.py
- On the warm standby instance of Splunk SOAR (On-premises), set the permissions, ownership, and SELinux security contexts for the files you copied to it.
- chmod 0640 /<$PHANTOM_HOME>/keystore/private_key.pem /<$PHANTOM_HOME>/www/phantom_ui/secret_key.py
- chown phantom:phantom /<$PHANTOM_HOME>/keystore/private_key.pem
- chown phantom:phantom /<$PHANTOM_HOME>/www/phantom_ui/secret_key.py
- restorecon /<$PHANTOM_HOME>/keystore/private_key.pem /<$PHANTOM_HOME>/www/phantom_ui/secret_key.py
- On both the new warm standby instance and the primary instance of Splunk SOAR (On-premises), set a password for the phantom user account if you haven't already done so. This password will be used later during configuration. passwd phantom
- On both the new warm standby instance and the primary instance of Splunk SOAR (On-premises), make sure that the PostgreSQL port, 5432, is allowed through your firewalls.
- Check your firewall rules. sudo firewall-cmd --list-all
- (Conditional) If port 5432 is not permitted through the firewall, add a firewall rule for it. sudo firewall-cmd --zone=public --add-port=5432/tcp
- (Conditional) If you needed to add port 5432, make the entry from the previous step permanent. sudo firewall-cmd --zone=public --add-port=5432/tcp --permanent
- Check your firewall rules again to confirm that port 5432 is listed. sudo firewall-cmd --list-all
- On the new warm standby instance of Splunk SOAR (On-premises), disable cron to prevent any jobs from making changes during setup and configuration. sudo systemctl stop crond.service
If you have installed and configured CyberArk AIM on your primary, you will need to install and configure CyberArk AIM on your warm standby.
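The file-copy and permission steps above can be sketched as a single sequence. This is a sketch under assumptions: it takes /opt/phantom as the installation directory (matching the start_phantom.sh paths used later in this topic) and uses soar-standby.example.com as a placeholder standby hostname.

```shell
# Run on the primary as the phantom user: copy the two key files
# to the same paths on the standby over SSH.
scp /opt/phantom/keystore/private_key.pem \
    phantom@soar-standby.example.com:/opt/phantom/keystore/private_key.pem
scp /opt/phantom/www/phantom_ui/secret_key.py \
    phantom@soar-standby.example.com:/opt/phantom/www/phantom_ui/secret_key.py

# Then run on the standby: restore permissions, ownership,
# and SELinux contexts on the copied files.
chmod 0640 /opt/phantom/keystore/private_key.pem /opt/phantom/www/phantom_ui/secret_key.py
chown phantom:phantom /opt/phantom/keystore/private_key.pem /opt/phantom/www/phantom_ui/secret_key.py
restorecon /opt/phantom/keystore/private_key.pem /opt/phantom/www/phantom_ui/secret_key.py
```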
Set up SSH between the primary and the new warm standby
During setup, the primary instance of Splunk SOAR (On-premises) will need to connect to the warm standby instance using SSH.
If password authentication is disabled on the warm standby, enable it in order to proceed; you can disable it again once setup is complete.
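One common way to provide this access is key-based SSH from the primary to the standby. The following is a sketch, run on the primary as the phantom user, with soar-standby.example.com as a placeholder hostname; ssh-copy-id uses password authentication once to install the key, which is why password authentication must be enabled at this point.

```shell
# Generate a key pair if the phantom user does not already have one.
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

# Install the public key on the standby (prompts for the phantom password).
ssh-copy-id phantom@soar-standby.example.com

# Confirm that key-based login now works without a password prompt.
ssh phantom@soar-standby.example.com 'hostname'
```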
Configure warm standby using the setup_warm_standby.pyc script
Once both your primary and warm standby instances are ready, you can configure warm standby using the setup_warm_standby.pyc script.
If you do not know if one or both of the instances are already part of a warm standby configuration, check warm standby status before proceeding. See How to check the status of warm standby in the Warm standby feature overview.
Warm standby must be disabled before reconfiguring warm standby to use different instances. See Disable warm standby.
Do these steps as the phantom user.
- On the primary instance, make sure that Splunk SOAR (On-premises) is running. /opt/phantom/bin/start_phantom.sh
- On the warm standby instance, make sure that Splunk SOAR (On-premises) is running. /opt/phantom/bin/start_phantom.sh
- On the primary instance, run the setup_warm_standby.pyc script. phenv python /<$PHANTOM_HOME>/bin/setup_warm_standby.pyc --primary-mode --configure --primary-ip <IP address of the primary> --standby-ip <IP address of the warm standby> You will be prompted for:
- The password for the phantom user account on the warm standby. This password was set when the warm standby instance was created earlier.
- A new password for the database replication user. This password will be used to configure PostgreSQL database replication.
- Configuration information to create the SSL certificate file used for communication between the primary and warm standby instances.
Example: Country Code: US
State Code: CA
City: Palo Alto
Organization: Example
Organization Unit: Security
Domain: phantom.soc.example.com
Email: soc@example.com
- On the warm standby instance, run the setup_warm_standby.pyc script. phenv python /<$PHANTOM_HOME>/bin/setup_warm_standby.pyc --standby-mode --configure --primary-ip <IP address of the primary> --standby-ip <IP address of the warm standby>
- On the warm standby, re-enable the cron service. sudo systemctl start crond.service
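After both scripts complete, you can optionally confirm on the primary that PostgreSQL streaming replication to the standby is active by querying the pg_stat_replication view. This is a sketch: whether phenv exposes the bundled psql client this way, and the default connection settings, are assumptions to adjust for your deployment.

```shell
# Run on the primary. An active standby appears as one row,
# typically with state "streaming" and the standby's client_addr.
phenv psql -c "SELECT client_addr, state FROM pg_stat_replication;"
```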
This documentation applies to the following versions of Splunk® SOAR (On-premises): 5.4.0, 5.5.0, 6.0.0, 6.0.1, 6.0.2, 6.1.0, 6.1.1, 6.2.0, 6.2.1, 6.2.2, 6.3.0, 6.3.1