
Create a warm standby

You will need two identical instances of Splunk SOAR (On-premises): one to serve as your primary instance, and a second to serve as the warm standby.

Do these steps to create your warm standby.

  1. Complete the prerequisites.
  2. Create a second Splunk SOAR (On-premises) instance to be the warm standby.
  3. Set up SSH access between the primary Splunk SOAR (On-premises) instance and the new warm standby.
  4. Configure warm standby using the setup_warm_standby.pyc script.

Creating a warm standby restarts Splunk SOAR (On-premises). Schedule the warm standby setup for a change window or other scheduled downtime.

Prerequisites

Complete the following tasks before you set up warm standby.

  1. Create a full backup or a virtual machine snapshot of the Splunk SOAR (On-premises) instance that will be your primary.
  2. Create a DNS A record for a hostname for your Splunk SOAR (On-premises) instance. You may need to work with other teams who manage DNS to accomplish this. Establish an appropriate Time To Live (TTL) value for this record since you will update the DNS A record in the event of a failover.
  3. Set the Base URL for Splunk SOAR (On-premises) Appliance to the hostname from the DNS A record in Main Menu > Administration > Company Settings. Example: https://phantom.example.com
  4. Open the following ports on the primary Splunk SOAR (On-premises) instance's firewall: TCP 22 for SSH, TCP 443 for HTTPS, and TCP 5432 for PostgreSQL operations. You can verify the DNS record and port reachability with the sketch that follows this list.
  5. Set up SSH between the primary Splunk SOAR (On-premises) instance and the warm standby.
  6. Review Manage warm standby features and options for any additional options you might want to use.
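
Before you continue, you can optionally confirm that the DNS A record resolves and that the required ports answer on the primary. This is only a quick sanity check; it assumes the dig and nc utilities are available on the machine you run it from, and it uses the example hostname from the Base URL above.

  # Confirm the DNS A record resolves to the primary's IP address
  dig +short phantom.example.com

  # Confirm that the SSH, HTTPS, and PostgreSQL ports are reachable on the primary
  nc -zv phantom.example.com 22
  nc -zv phantom.example.com 443
  nc -zv phantom.example.com 5432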

Create a second Splunk SOAR (On-premises) instance to be the warm standby

You can either:

  • clone the virtual machine that is your primary Splunk SOAR (On-premises) instance, or
  • create an entirely new instance of Splunk SOAR (On-premises) to serve as the warm standby.

Create a clone of your primary Splunk SOAR (On-premises) instance

You can create a clone of your primary Splunk SOAR (On-premises) instance. This clone will serve as the warm standby.

Consult the documentation for your virtualization software or the operating system software for how to clone and deploy the cloned instance of Splunk SOAR (On-premises).

Your clone will need to have its own IP and MAC addresses.
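
Once the clone is running, you can list the addresses it is using and compare them against the primary. These commands only report the current values; how you assign new addresses depends on your virtualization platform and operating system.

  # On the clone, list the MAC addresses and IP addresses currently in use
  ip link show
  ip addr show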

Before you clone the Splunk SOAR (On-premises) instance, check whether it is already being used as part of a warm standby pair. If the instance is part of a warm standby pairing, warm standby must be disabled before cloning the instance. See Disable warm standby.

  1. Clone your Splunk SOAR (On-premises) instance as described by your virtualization or operating system documentation.
  2. Change the MAC and IP addresses for the new clone copy of Splunk SOAR (On-premises).
  3. On the clone copy and the primary instance of Splunk SOAR (On-premises), set a password for the phantom user account. This password will be used later during configuration.
    passwd phantom
  4. On the clone of Splunk SOAR (On-premises), disable cron to prevent any jobs from making changes during setup and configuration.
    sudo systemctl stop crond.service
  5. On the clone of Splunk SOAR (On-premises), make sure that port 5432, which is used for PostgreSQL, is allowed through your firewall. A quick verification sketch follows this list.
    1. Check your firewall rules.
      sudo firewall-cmd --list-all
    2. (Conditional) If port 5432 is not permitted through the firewall, add an entry to the firewall rules for it.
      sudo firewall-cmd --zone=public --add-port=5432/tcp
    3. (Conditional) If you needed to add port 5432 to your firewalld configuration, make the entry from the previous step permanent.
      sudo firewall-cmd --zone=public --add-port=5432/tcp --permanent
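
If you added port 5432 in step 5, you can confirm that both the runtime and permanent firewalld configurations now include it. This is an optional check, assuming firewalld is the firewall in use.

  # Confirm the runtime and permanent firewalld configurations include 5432/tcp
  sudo firewall-cmd --zone=public --list-ports
  sudo firewall-cmd --zone=public --list-ports --permanent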

Create a new Splunk SOAR (On-premises) instance

If using a clone of your primary Splunk SOAR (On-premises) instance is not feasible or is otherwise unwanted, you can install a new instance of Splunk SOAR (On-premises) to serve as your warm standby.

Do these steps as the phantom user.

  1. Install Splunk SOAR (On-premises). See How can Splunk SOAR (On-premises) be installed? in Install and Upgrade Splunk SOAR (On-premises).
  2. SSH to your warm standby Splunk SOAR (On-premises) instance.
    ssh <username>@<warm_standby_phantom_hostname>
  3. Stop Splunk SOAR (On-premises) services on the standby.
    sudo /<$PHANTOM_HOME>/bin/stop_phantom.sh
  4. Copy these files from the primary instance of Splunk SOAR (On-premises) to the new warm standby instance. An scp sketch for this copy follows these steps.
    1. /<$PHANTOM_HOME>/keystore/private_key.pem
    2. /<$PHANTOM_HOME>/www/phantom_ui/secret_key.py
  5. On the warm standby instance of Splunk SOAR (On-premises), set the permissions, ownership, and SELinux security contexts for the files you copied to it.
    1. chmod 0640 /<$PHANTOM_HOME>/keystore/private_key.pem /<$PHANTOM_HOME>/www/phantom_ui/secret_key.py
    2. chown phantom:phantom /<$PHANTOM_HOME>/keystore/private_key.pem
    3. chown phantom:phantom /<$PHANTOM_HOME>/www/phantom_ui/secret_key.py
    4. restorecon /<$PHANTOM_HOME>/keystore/private_key.pem /<$PHANTOM_HOME>/www/phantom_ui/secret_key.py
  6. On both the new warm standby instance and the primary instance of Splunk SOAR (On-premises), set a password for the phantom user account if you haven't already done so. This password will be used later during configuration.
    passwd phantom
  7. On both the new warm standby instance and the primary instance of Splunk SOAR (On-premises), make sure that port 5432, which is used for PostgreSQL, is allowed through your firewalls.
    1. Check your firewall rules.
      sudo firewall-cmd --list-all
    2. (Conditional) If port 5432 is not permitted through the firewall, add an entry to the firewall rules for it.
      sudo firewall-cmd --zone=public --add-port=5432/tcp
    3. (Conditional) If you needed to add port 5432 to your firewalld configuration, make the entry from the previous step permanent.
      sudo firewall-cmd --zone=public --add-port=5432/tcp --permanent
  8. On the new warm standby instance of Splunk SOAR (On-premises), disable cron to prevent any jobs from making changes during setup and configuration.
    sudo systemctl stop crond.service

If you have installed and configured CyberArk AIM on your primary, you will need to install and configure CyberArk AIM on your warm standby.
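
One way to perform the file copy in step 4 is with scp, run from the primary instance as the phantom user. This sketch assumes <$PHANTOM_HOME> is the same path on both instances, for example /opt/phantom; adjust the paths and hostname for your environment.

  # Run on the primary instance; assumes a default /opt/phantom installation on both instances
  scp /opt/phantom/keystore/private_key.pem phantom@<warm_standby_phantom_hostname>:/opt/phantom/keystore/
  scp /opt/phantom/www/phantom_ui/secret_key.py phantom@<warm_standby_phantom_hostname>:/opt/phantom/www/phantom_ui/

After the copy, set the permissions, ownership, and SELinux contexts as described in step 5.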

Set up SSH between the primary and the new warm standby

During setup, the primary instance of Splunk SOAR (On-premises) needs to connect to the warm standby instance of Splunk SOAR (On-premises) using SSH.

If password authentication is disabled, enable it before you proceed. You can disable it again after setup is complete.
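
If you need to turn password authentication back on temporarily, the following sketch shows one way to do it on the warm standby. It assumes sshd reads its configuration from /etc/ssh/sshd_config; your environment may manage SSH configuration differently.

  # Check the effective setting
  sudo sshd -T | grep -i passwordauthentication

  # Enable password authentication, then reload sshd
  sudo sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
  sudo systemctl reload sshd

You can revert this change after warm standby configuration is complete.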

Configure warm standby using the setup_warm_standby.pyc script

Once both your primary and warm standby instances are ready, you can configure warm standby using the setup_warm_standby.pyc script.

If you do not know if one or both of the instances are already part of a warm standby configuration, check warm standby status before proceeding. See How to check the status of warm standby in the Warm standby feature overview.

Warm standby must be disabled before reconfiguring warm standby to use different instances. See Disable warm standby.

Do these steps as the phantom user.

  1. On the primary Splunk SOAR (On-premises) instance, make sure that Splunk SOAR (On-premises) is running.
    /opt/phantom/bin/start_phantom.sh
  2. On the warm standby Splunk SOAR (On-premises) instance, make sure that Splunk SOAR (On-premises) is running.
    /opt/phantom/bin/start_phantom.sh
  3. On the primary Splunk SOAR (On-premises) instance, run the setup_warm_standby.pyc script.
    phenv python /<$PHANTOM_HOME>/bin/setup_warm_standby.pyc --primary-mode --configure --primary-ip <IP address of the primary> --standby-ip <IP address of the warm standby>
    You will be prompted for:
    • The password for the phantom user account on the warm standby. This password was set when the warm standby instance was created earlier.
    • A new password for the database replication user. This password will be used to configure PostgreSQL database replication.
    • Configuration information to create the SSL certificate file used for communication between the primary and warm standby Splunk SOAR (On-premises) instances.
      Example:
      Country Code: US
      State Code: CA
      City: Palo Alto
      Organization: Example
      Organization Unit: Security
      Domain: phantom.soc.example.com
      Email: soc@example.com
  4. On the warm standby Splunk SOAR (On-premises) instance, run the setup_warm_standby.pyc script.
    phenv python /<$PHANTOM_HOME>/bin/setup_warm_standby.pyc --standby-mode --configure --primary-ip <IP address of the primary> --standby-ip <IP address of the warm standby>
  5. On the warm standby, re-enable the cron service.
    sudo systemctl start crond.service
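
After both scripts finish, a basic connectivity check from the primary can confirm that the warm standby is reachable on the PostgreSQL replication port and is serving its web interface. For a full status check, see How to check the status of warm standby in the Warm standby feature overview.

  # Run on the primary; substitute the warm standby's IP address or hostname
  nc -zv <IP address of the warm standby> 5432
  curl -k -s -o /dev/null -w "%{http_code}\n" https://<IP address of the warm standby>/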