Splunk® SOAR (On-premises)

Install and Upgrade Splunk SOAR (On-premises)



The classic playbook editor will be deprecated soon. Convert your classic playbooks to modern mode.
After the classic playbook editor is removed, your existing classic playbooks will continue to run. However, you will no longer be able to visualize or modify them.

Create a cluster using an unprivileged installation

Build a cluster by putting each of the external services on its own server or group of servers to serve multiple Splunk SOAR (On-premises) cluster nodes.

Set up each of the external services either as the root user or a user with sudo permissions.

Install as an unprivileged user. In your cluster, each instance must have the same custom username and install directory. See Install as an unprivileged user.

1. Create the HAProxy node. Use the HAProxy server as a load balancer for the nodes in your cluster. See Set up a load balancer with an HAProxy server. There are additional steps to configure your load balancer to handle your custom HTTPS port for unprivileged clusters.
2. Install using the TAR file method for unprivileged installs. Do this once for each node you need in your cluster. Each node must meet the system requirements for a Splunk SOAR (On-premises) deployment. See the following documentation for more information.
3. Create the PostgreSQL node. Establish a PostgreSQL database server or cluster to store information. See Set up the external PostgreSQL server.

If you have an existing PostgreSQL database from a single-instance deployment of Splunk SOAR (On-premises) that you intend to use for your cluster, back up your PostgreSQL database and restore it to your new PostgreSQL node. See Backup a Splunk SOAR (On-premises) database and restore to an external database in Install and Upgrade Splunk SOAR (On-premises) for instructions.
4. Create the file shares node. Splunk SOAR (On-premises) stores all of its shared files on the prepared GlusterFS server. You can use NFS or another network file system, but instructions for that are not included in this document. See Set up external file shares using GlusterFS.
5. Create the Splunk Enterprise node. Splunk SOAR (On-premises) uses Splunk Enterprise for searches and collects data for indexing using the HTTP Event Collector. See Set up Splunk Enterprise.
6. Prepare instances to connect to the GlusterFS file share. See Prepare an unprivileged instance to connect to the GlusterFS file share.

This task must be completed by a user with root or sudo access.

7. Convert instances to cluster nodes. Convert the first instance into a cluster node by running make_cluster_node.pyc. See Run make_cluster_node.pyc. Repeat on each instance that will become a cluster node.
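
The load-balancer task above can be sketched as a minimal haproxy.cfg fragment. This is an illustrative sketch, not Splunk's reference configuration: the node addresses are hypothetical, and 8443 stands in for whatever custom HTTPS port your unprivileged installation uses. TCP passthrough mode is shown so each cluster node keeps terminating its own TLS.

```
frontend soar_https
    mode tcp
    bind *:8443
    default_backend soar_nodes

backend soar_nodes
    mode tcp
    balance roundrobin
    server node1 10.0.0.11:8443 check
    server node2 10.0.0.12:8443 check
    server node3 10.0.0.13:8443 check
```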

Prepare an unprivileged instance to connect to the GlusterFS file share

Before you can convert an unprivileged instance into a node in an unprivileged cluster, you must prepare each of the instances to connect to the GlusterFS file share.

Perform these steps as a user with root or sudo access to the instance.

  1. Stop Splunk SOAR (On-premises) services.
    <$PHANTOM_HOME>/bin/stop_phantom.sh
  2. Install the GlusterFS client on each instance.
    yum install glusterfs-fuse -y
  3. Add the required TLS keys for the GlusterFS server to the /etc/ssl/ directory, and create the control file, on each instance. Make sure to use the keys generated during GlusterFS installation. If you are using the web interface to add new cluster nodes, supply the TLS keys in Administration > Product Settings > Clustering.
    1. Create the directory and control file to make GlusterFS use TLS.
      mkdir -p /var/lib/glusterd/
      touch /var/lib/glusterd/secure-access
      
    2. Copy your glusterkeys.tgz file to /etc/ssl/ on the instance.
    3. Extract the TAR file.
      tar xvzf glusterkeys.tgz
    4. Delete the glusterkeys.tgz file from /etc/ssl/.
  4. Edit the cluster member's file system table, /etc/fstab, to mount the GlusterFS volumes. Your fstab entries must not have line breaks.
    <glusterfs_hostname>:/apps /<phantom_install_dir>/apps glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/app_states /<phantom_install_dir>/local_data/app_states glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/scm /<phantom_install_dir>/scm glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/tmp /<phantom_install_dir>/tmp/shared glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/vault /<phantom_install_dir>/vault glusterfs defaults,_netdev 0 0
    
  5. Mount all the volumes to make them available.
    mount /<phantom_install_dir>/apps
    mount /<phantom_install_dir>/local_data/app_states
    mount /<phantom_install_dir>/scm
    mount /<phantom_install_dir>/tmp/shared
    mount /<phantom_install_dir>/vault
    
  6. Start services on all cluster nodes.
    <$PHANTOM_HOME>/bin/start_phantom.sh
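
The TLS, fstab, and mount steps above (steps 3 through 5) can be rehearsed with a small POSIX shell sketch. This is an assumption-laden outline, not Splunk tooling: the function names are invented, gluster.example.com and /opt/soar are placeholder values, and the paths are parameterized only so you can dry-run the logic in a scratch directory before touching /etc/ssl and /etc/fstab as root.

```shell
#!/bin/sh
# Hedged sketch, not the vendor's script. All names here are illustrative.

# Step 3: create the TLS control file and unpack the keys generated
# during GlusterFS installation.
stage_gluster_tls() {
    glusterd_dir="$1"   # normally /var/lib/glusterd
    ssl_dir="$2"        # normally /etc/ssl, already holding glusterkeys.tgz
    mkdir -p "$glusterd_dir"
    touch "$glusterd_dir/secure-access"        # presence enables TLS
    tar xzf "$ssl_dir/glusterkeys.tgz" -C "$ssl_dir"
    rm -f "$ssl_dir/glusterkeys.tgz"           # don't leave the archive behind
}

# Step 4: emit the five fstab entries from one hostname and one install
# directory, so every line stays consistent and free of line breaks.
emit_gluster_fstab() {
    host="$1"; install="$2"
    for pair in \
        "apps:$install/apps" \
        "app_states:$install/local_data/app_states" \
        "scm:$install/scm" \
        "tmp:$install/tmp/shared" \
        "vault:$install/vault"
    do
        printf '%s:/%s %s glusterfs defaults,_netdev 0 0\n' \
            "$host" "${pair%%:*}" "${pair#*:}"
    done
}

# Step 5 check: confirm each volume is really mounted before restarting
# services, by looking for a glusterfs entry in /proc/mounts.
check_gluster_mounts() {
    install="$1"; rc=0
    for mnt in apps local_data/app_states scm tmp/shared vault; do
        grep -qs " $install/$mnt glusterfs " /proc/mounts \
            || { echo "not mounted: $install/$mnt" >&2; rc=1; }
    done
    return $rc
}

# Real usage as root, e.g. with install dir /opt/soar:
# stage_gluster_tls /var/lib/glusterd /etc/ssl
# emit_gluster_fstab gluster.example.com /opt/soar >> /etc/fstab
# mount -a && check_gluster_mounts /opt/soar
```

Mounting each volume individually, as in step 5 above, works equally well in place of mount -a; the check function only reads /proc/mounts, so it is safe to run repeatedly.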
Last modified on 01 November, 2023

This documentation applies to the following versions of Splunk® SOAR (On-premises): 5.1.0, 5.2.1, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.5

