After the future removal of the classic playbook editor, your existing classic playbooks will continue to run. However, you will no longer be able to visualize or modify them.
For details, see:
Create a cluster using an unprivileged installation
Do not use this release to create new clusters of Splunk SOAR (On-premises).
Use this release to upgrade from your current privileged deployment of Splunk Phantom 4.10.7 or Splunk SOAR (On-premises) releases 5.0.1 through 5.3.4.
If you are upgrading a privileged deployment of Splunk Phantom 4.10.7 or Splunk SOAR (On-premises) releases 5.0.1 through 5.3.4, upgrade to release 5.3.6, convert your deployment to unprivileged, then upgrade directly to Splunk SOAR (On-premises) release 6.1.1 or higher.
If you have a privileged deployment of Splunk SOAR (On-premises) release 5.3.5, convert your deployment to unprivileged, then upgrade directly to Splunk SOAR (On-premises) release 6.1.1 or higher.
To learn how to upgrade, see Splunk SOAR (On-premises) upgrade overview and prerequisites.
Build a cluster, putting each of the services on its own server or group of servers to serve multiple cluster nodes of Splunk SOAR (On-premises).
Set up each of the external services either as the root user or a user with sudo permissions.
Install as an unprivileged user. In your cluster, each instance must have the same custom username and install directory. See Install as an unprivileged user.
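For example, to guarantee an identical unprivileged user and install directory on every instance, you could run commands like the following as root on each node before installing. The username soar and directory /opt/soar are illustrative placeholders, not values required by the product:

# Run as root on every instance (username and path are assumptions)
useradd --create-home soar
mkdir -p /opt/soar
chown soar:soar /opt/soar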
Number | Task | Description |
---|---|---|
1 | Create the HAProxy node. | Use the HAProxy server as a load balancer for the nodes in your cluster. See Set up a load balancer with an HAProxy server. There are additional steps to configure your load balancer to handle your custom HTTPS port for unprivileged clusters. A minimal configuration sketch follows this table. |
2 | Install using the tar file method for unprivileged installs. | Do this once for each node you need in your cluster. Each node must meet the system requirements for a deployment. See the following documentation for more information. |
3 | Create the PostgreSQL node. | Establish a PostgreSQL database server or cluster to store information. See Set up the external PostgreSQL server. If you have an existing PostgreSQL database from a single-instance deployment of Splunk SOAR (On-premises) that you intend to use for your cluster, back up your PostgreSQL database and restore it to your new PostgreSQL node. See Backup a Splunk SOAR (On-premises) database and restore to an external database in Install and Upgrade Splunk SOAR (On-premises) for instructions. |
4 | Create the file shares node. | Splunk SOAR (On-premises) stores all its shared files on the prepared GlusterFS server. You can use NFS or another network file system, but instructions for those are not included in this document. See Set up external file shares using GlusterFS. |
5 | Create the Splunk Enterprise node. | Splunk SOAR (On-premises) uses Splunk Enterprise for search and collects data for indexing using the HTTP Event Collector. See Set up Splunk Enterprise. |
6 | Prepare instances to connect to the GlusterFS file share. | See Prepare an unprivileged instance to connect to the GlusterFS file share. This task must be completed by a user with root or sudo access. |
7 | Convert instances to cluster nodes. | Convert the first instance into a cluster node by running make_cluster_node.pyc. See Run make_cluster_node.pyc. Repeat on each instance that will become a cluster node. |
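The exact load balancer configuration depends on your environment. As a minimal sketch only, assuming three cluster nodes at 10.0.0.11 through 10.0.0.13 and a custom HTTPS port of 8443 (all placeholder values, not from this document), a TCP passthrough setup in haproxy.cfg might look like this:

# Sketch only; node addresses and port 8443 are assumptions
frontend soar_https
    bind *:8443
    mode tcp
    option tcplog
    default_backend soar_nodes

backend soar_nodes
    mode tcp
    balance source
    server node1 10.0.0.11:8443 check
    server node2 10.0.0.12:8443 check
    server node3 10.0.0.13:8443 check

TCP mode passes the TLS session through to the nodes unmodified; consult Set up a load balancer with an HAProxy server for the supported configuration.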
Before you can convert an unprivileged instance into a node in an unprivileged cluster, you must prepare each of the instances to connect to the GlusterFS file share.
Perform these steps as a user with root or sudo access to the instance.
- Stop Splunk SOAR (On-premises) services.
  <$PHANTOM_HOME>/bin/stop_phantom.sh
- Install the GlusterFS client on each instance.
  yum install glusterfs-fuse -y
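  If you want to verify the client before continuing, both of the following are standard commands:

  rpm -q glusterfs-fuse
  glusterfs --version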
- Add the required TLS keys for the GlusterFS server to the /etc/ssl/ directory on each instance, and create the control file. Make sure to use the keys generated during GlusterFS installation. If you are using the web interface to add new cluster nodes, you will need to supply the TLS keys in Administration > Product Settings > Clustering.
  - Create the directory and control file that make GlusterFS use TLS.
    mkdir -p /var/lib/glusterd/
    touch /var/lib/glusterd/secure-access
  - Copy your glusterkeys.tgz file to /etc/ssl/ on the instance.
  - Extract the TAR file.
    tar xvzf glusterkeys.tgz
  - Delete the glusterkeys.tgz file from /etc/ssl/.
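  After these steps, the TLS material should be in place in /etc/ssl/. The file names shown here follow common GlusterFS TLS conventions (glusterfs.pem, glusterfs.key, glusterfs.ca) and are an assumption; verify against the names generated during your GlusterFS installation:

  ls -l /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.key /etc/ssl/glusterfs.ca
  ls -l /var/lib/glusterd/secure-access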
- Edit the cluster member's file system table, /etc/fstab, to mount the GlusterFS volumes. Each fstab entry must be a single line with no line breaks.
  <glusterfs_hostname>:/apps /<phantom_install_dir>/apps glusterfs defaults,_netdev 0 0
  <glusterfs_hostname>:/app_states /<phantom_install_dir>/local_data/app_states glusterfs defaults,_netdev 0 0
  <glusterfs_hostname>:/scm /<phantom_install_dir>/scm glusterfs defaults,_netdev 0 0
  <glusterfs_hostname>:/tmp /<phantom_install_dir>/tmp/shared glusterfs defaults,_netdev 0 0
  <glusterfs_hostname>:/vault /<phantom_install_dir>/vault glusterfs defaults,_netdev 0 0
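  As an illustration only, with a GlusterFS hostname of gluster01.example.com and an install directory of /opt/soar (both placeholder values), the first entry would read:

  gluster01.example.com:/apps /opt/soar/apps glusterfs defaults,_netdev 0 0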
- Mount all the volumes to make them available.
  mount /<phantom_install_dir>/apps
  mount /<phantom_install_dir>/local_data/app_states
  mount /<phantom_install_dir>/scm
  mount /<phantom_install_dir>/tmp/shared
  mount /<phantom_install_dir>/vault
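  You can confirm that all five volumes mounted with standard tools, for example:

  mount -t fuse.glusterfs
  df -h /<phantom_install_dir>/vault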
- Start services on all cluster nodes.
<$PHANTOM_HOME>/bin/start_phantom.sh
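  To confirm the services came back up, a generic process check works; this is standard shell, not a documented SOAR command:

  ps -ef | grep -i [p]hantom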
This documentation applies to the following versions of Splunk® SOAR (On-premises): 5.3.6