Splunk® SOAR (On-premises)

Administer Splunk SOAR (On-premises)

The classic playbook editor will be deprecated in early 2025. Convert your classic playbooks to modern mode.
After the future removal of the classic playbook editor, your existing classic playbooks will continue to run. However, you will no longer be able to visualize or modify them.
For details, see:

Create a warm standby

You will need two identical instances of Splunk SOAR (On-premises): one to serve as your primary instance and the second to serve as the warm standby.

Do these steps to create your warm standby.

  1. Complete the prerequisites.
  2. Create a second instance to be the warm standby.
  3. Set up SSH access between the primary instance and the new warm standby.
  4. Configure warm standby using the setup_warm_standby.pyc script.

Creating a warm standby will restart Splunk SOAR (On-premises). You should schedule setting up warm standby for a change window or other scheduled downtime.

Prerequisites

There are some tasks that need to be completed before you can set up warm standby.

  1. Create a full backup or a virtual machine snapshot of the instance that will be your primary.
  2. Create a DNS A record for a hostname for your instance. You might need to work with the teams that manage DNS to accomplish this. Establish an appropriate Time To Live (TTL) value for this record, since you will update the DNS A record in the event of a failover. An example of verifying the record follows this list.
  3. Set the Base URL for Appliance to the hostname from the DNS A record in Main Menu > Administration > Company Settings. For example: https://phantom.example.com
  4. Open the following ports on the primary instance's firewall: TCP 22 (SSH), TCP 443 (HTTPS), and TCP 5432 (PostgreSQL).
  5. Set up SSH between the primary instance and the warm standby.
  6. Review Manage warm standby features and options for any additional options you might want to use.
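
For example, you can confirm that the DNS A record from step 2 resolves with the TTL you expect by querying it with dig. The hostname, TTL, and IP address shown here are placeholders; substitute your own values.

  dig +noall +answer phantom.example.com
  phantom.example.com.    300     IN      A       203.0.113.10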

Create a second instance to be the warm standby

You can either:

  • clone the virtual machine that is your primary instance, or
  • create an entirely new instance of Splunk SOAR (On-premises) to serve as the warm standby.

Create a clone of your primary instance

You can create a clone of your primary instance. This clone will serve as the warm standby.

Consult the documentation for your virtualization software or operating system for how to clone and deploy the cloned instance of Splunk SOAR (On-premises).

Your clone will need to have its own IP and MAC addresses.

Before you clone the Splunk SOAR (On-premises) instance, check whether it is already being used as part of a warm standby pair. If the instance is part of a warm standby pairing, warm standby must be disabled before cloning the instance. See Disable warm standby.

  1. Clone your instance as described by your virtualization or operating system documentation.
  2. Change the MAC and IP addresses for the new clone of Splunk SOAR (On-premises).
  3. On both the clone and the primary instance of Splunk SOAR (On-premises), set a password for the phantom user account. This password will be used later during configuration.
    passwd phantom
  4. On the clone of Splunk SOAR (On-premises), disable cron to prevent any jobs from making changes during setup and configuration.
    sudo systemctl stop crond.service
  5. On the clone of Splunk SOAR (On-premises), make sure that the PostgreSQL port, 5432, is allowed through your firewall.
    1. Check your firewall rules.
      sudo firewall-cmd --list-all
    2. (Conditional) If the port 5432 is not permitted through the firewall, add an entry to the firewall rules for it.
      sudo firewall-cmd --zone=public --add-port=5432/tcp
    3. (Conditional) If you needed to add port 5432 to your firewalld configuration, make the entry from the previous step permanent.
      sudo firewall-cmd --zone=public --add-port=5432/tcp --permanent
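
Optionally, once the firewall rules are in place, you can confirm from the primary instance that the clone is reachable on port 5432. This check is not part of the documented procedure and assumes the nc (netcat) utility is installed; replace the placeholder with the IP address of the clone.

  nc -zv <IP address of the warm standby> 5432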

Create a new instance

If using a clone of your primary instance is not feasible or is otherwise unwanted, you can install a new instance of Splunk SOAR (On-premises) to serve as your warm standby.

Do these steps as the phantom user.

  1. Install Splunk SOAR (On-premises). See How can Splunk SOAR (On-premises) be installed? in Install and Upgrade Splunk SOAR (On-premises).
  2. SSH to your warm standby instance.
    ssh <username>@<warm_standby_phantom_hostname>
  3. Stop services on the standby.
    sudo /<$PHANTOM_HOME>/bin/stop_phantom.sh
  4. Copy these files from the primary instance of Splunk SOAR (On-premises) to the new warm standby instance. An example scp command is shown after this list.
    1. /<$PHANTOM_HOME>/keystore/private_key.pem
    2. /<$PHANTOM_HOME>/www/phantom_ui/secret_key.py
  5. On the warm standby instance of Splunk SOAR (On-premises), set the permissions, ownership, and SELinux security contexts for the files you copied to it.
    1. chmod 0640 /<$PHANTOM_HOME>/keystore/private_key.pem /<$PHANTOM_HOME>/www/phantom_ui/secret_key.py
    2. chown phantom:phantom /<$PHANTOM_HOME>/keystore/private_key.pem
    3. chown phantom:phantom /<$PHANTOM_HOME>/www/phantom_ui/secret_key.py
    4. restorecon /<$PHANTOM_HOME>/keystore/private_key.pem /<$PHANTOM_HOME>/www/phantom_ui/secret_key.py
  6. On both the new warm standby instance and the primary instance of Splunk SOAR (On-premises), set a password for the phantom user account if you haven't already done so. This password will be used later during configuration.
    passwd phantom
  7. On both the new warm standby instance and the primary instance of Splunk SOAR (On-premises), make sure that the PostgreSQL port, 5432, is allowed through your firewalls.
    1. Check your firewall rules.
      sudo firewall-cmd --list-all
    2. (Conditional) If port 5432 is not permitted through the firewall, add an entry to the firewall rules for it.
      sudo firewall-cmd --zone=public --add-port=5432/tcp
    3. (Conditional) If you needed to add port 5432 to your firewalld configuration, make the entry from the previous step permanent.
      sudo firewall-cmd --zone=public --add-port=5432/tcp --permanent
  8. On the new warm standby instance of Splunk SOAR (On-premises), disable cron to prevent any jobs from making changes during setup and configuration.
    sudo systemctl stop crond.service
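
For step 4, a minimal example of copying the two files with scp from the primary is shown here. It assumes SSH access as the phantom user and the default installation directory /opt/phantom; adjust the paths and hostname for your environment.

  scp /opt/phantom/keystore/private_key.pem phantom@<warm_standby_phantom_hostname>:/opt/phantom/keystore/
  scp /opt/phantom/www/phantom_ui/secret_key.py phantom@<warm_standby_phantom_hostname>:/opt/phantom/www/phantom_ui/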

If you have installed and configured CyberArk AIM on your primary, you will need to install and configure CyberArk AIM on your warm standby.

Set up SSH between the primary and the new warm standby

During setup, the primary instance of Splunk SOAR (On-premises) will need to connect to the warm standby instance using SSH.

If SSH password authentication is disabled on the warm standby, it must be enabled before you proceed. You can disable it again after setup is complete.
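
As a general OpenSSH example, not a Splunk SOAR (On-premises) command, you can temporarily enable password authentication on the warm standby by setting PasswordAuthentication yes in /etc/ssh/sshd_config and restarting the SSH daemon. Revert the setting once setup is complete.

  sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
  sudo systemctl restart sshd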

Configure warm standby using the setup_warm_standby.pyc script

Once both your primary and warm standby instances are ready, you can configure warm standby using the setup_warm_standby.pyc script.

If you do not know if one or both of the instances are already part of a warm standby configuration, check warm standby status before proceeding. See How to check the status of warm standby in the Warm standby feature overview.

Warm standby must be disabled before reconfiguring warm standby to use different instances. See Disable warm standby.

Do these steps as the phantom user.

  1. On the primary instance, make sure that Splunk SOAR (On-premises) is running.
    /opt/phantom/bin/start_phantom.sh
  2. On the warm standby instance, make sure that Splunk SOAR (On-premises) is running.
    /opt/phantom/bin/start_phantom.sh
  3. On the primary instance, run the setup_warm_standby.pyc script.
    phenv python /<$PHANTOM_HOME>/bin/setup_warm_standby.pyc --primary-mode --configure --primary-ip <IP address of the primary> --standby-ip <IP address of the warm standby>
    You will be prompted for:
    • The password for the phantom user account on the warm standby. This password was set when the warm standby instance was created earlier.
    • A new password for the database replication user. This password will be used to configure PostgreSQL database replication.
    • Configuration information to create the SSL certificate file used for communication between the primary and warm standby instances.
      Example:
      Country Code: US
      State Code: CA
      City: Palo Alto
      Organization: Example
      Organization Unit: Security
      Domain: phantom.soc.example.com
      Email: soc@example.com
  4. On the warm standby instance, run the setup_warm_standby.pyc script.
    phenv python /<$PHANTOM_HOME>/bin/setup_warm_standby.pyc --standby-mode --configure --primary-ip <IP address of the primary> --standby-ip <IP address of the warm standby>
  5. On the warm standby, re-enable the cron service.
    sudo systemctl start crond.service
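
As an optional check that is not part of the documented procedure, you can confirm that PostgreSQL streaming replication is active by querying pg_stat_replication on the primary instance. The connection details below are placeholders; use whatever host, credentials, and database name apply to your deployment.

  psql -h 127.0.0.1 -p 5432 -U <database admin user> -d <database name> -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"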