Splunk® SOAR (On-premises)

Install and Upgrade Splunk SOAR (On-premises)

The classic playbook editor will be deprecated in early 2025. Convert your classic playbooks to modern mode.
After the future removal of the classic playbook editor, your existing classic playbooks will continue to run. However, you will no longer be able to visualize or modify them.
This documentation does not apply to the most recent version of Splunk® SOAR (On-premises). For documentation on the most recent version, go to the latest release.

Set up external file shares using GlusterFS

Do not use this release to create new deployments of Splunk SOAR (On-premises).

Use this release to upgrade from your current privileged deployment of Splunk Phantom 4.10.7 or Splunk SOAR (On-premises) releases 5.0.1 through 5.3.4.

If you are upgrading a privileged deployment of Splunk Phantom 4.10.7 or Splunk SOAR (On-premises) releases 5.0.1 through 5.3.4, upgrade to release 5.3.6, convert your deployment to unprivileged, then upgrade again directly to Splunk SOAR (On-premises) release 6.1.1 or higher.

If you have a privileged deployment of Splunk SOAR (On-premises) release 5.3.5, convert your deployment to unprivileged, then upgrade directly to Splunk SOAR (On-premises) release 6.1.1 or higher.

To learn how to upgrade, see Splunk SOAR (On-premises) upgrade overview and prerequisites.

Splunk SOAR (On-premises) uses several volumes for storage and implements GlusterFS for scalability and security of its file shares. You can put these volumes on their own server, or on any server that has adequate storage and bandwidth.

You can use other file systems to provide shared storage. Any file system that meets your organization's security and performance requirements is sufficient, but you must configure the required mounts and permissions yourself. See Supported file systems and required directories.

You can run GlusterFS as an expandable cluster of servers that provides a single mount point for access. While you can run GlusterFS on a single server, three or more servers provide more options for redundancy and high availability.

These instructions cover only configuring a single server and the required shares. To achieve high availability, data redundancy, and other features of GlusterFS, see the GlusterFS Documentation.
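As a hypothetical sketch of how the single-server layout in this topic could later grow into a three-node replicated volume, the commands below print (rather than execute) the `gluster peer probe` and `replica 3` volume-creation commands. The hostnames are placeholders; see the GlusterFS Documentation before running anything like this in production.

```shell
# Sketch only: print the commands that would join two more servers and
# create a 3-way replicated "apps" volume. Hostnames are placeholders.
NODES="gluster2.example.com gluster3.example.com"
cmds=""
for n in $NODES; do
  cmds="${cmds}gluster peer probe ${n}
"
done
cmds="${cmds}gluster volume create apps replica 3 transport tcp gluster1.example.com:/data/gluster/apps gluster2.example.com:/data/gluster/apps gluster3.example.com:/data/gluster/apps force
"
printf '%s' "$cmds"
```

Printing the commands first lets you review the brick list before committing to a replica layout.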

Prepare the GlusterFS server

The steps to prepare the GlusterFS server differ slightly depending on what operating system you are using.

Prepare the GlusterFS server with CentOS 7

If you are using CentOS 7, complete the following steps to prepare the GlusterFS server.

  1. Install and configure one of the supported operating systems according to your organization's requirements.
  2. Install the prerequisites.
    yum install -y wget curl chrony
  3. Configure chronyd to synchronize the system clock. Search for "chronyd" on access.redhat.com. For other Linux distributions, check the website for your specific distribution.
  4. Configure your firewall to allow access for nodes and other members of your GlusterFS cluster. For a complete list of ports, see ports and endpoints.
  5. Format and mount the storage partition. This partition must be separate from the operating system partition. The partition must be formatted with a file system that supports extended attributes.
    mkfs.xfs /dev/<device_name>
    mkdir -p /data/gluster
    echo '/dev/<device_name> /data/gluster xfs defaults 0 0' >> /etc/fstab
    mount -a && mount
    
  6. Install the GlusterFS server.
    yum update
    yum install centos-release-gluster
    yum install glusterfs-server
    
  7. Start the GlusterFS daemon and set it to start at boot.
    systemctl start glusterd
    systemctl enable glusterd
    

Prepare the GlusterFS server with RHEL 7

If you are using RHEL 7, complete the following steps to prepare the GlusterFS server.

  1. Install and configure one of the supported operating systems according to your organization's requirements.
  2. Install the prerequisites.
    yum install -y wget curl chrony
  3. Configure chronyd to synchronize the system clock. Search for "chronyd" on access.redhat.com. For other Linux distributions, check the website for your specific distribution.
  4. Configure your firewall to allow access for nodes and other members of your GlusterFS cluster. For a complete list of ports, see ports and endpoints.
  5. Format and mount the storage partition. This partition must be separate from the operating system partition. The partition must be formatted with a file system that supports extended attributes.
    mkfs.xfs /dev/<device_name>
    mkdir -p /data/gluster
    echo '/dev/<device_name> /data/gluster xfs defaults 0 0' >> /etc/fstab
    mount -a && mount
    
  6. Create a new repository file, for example /etc/yum.repos.d/CentOS-Gluster-9.repo, with the following content.
    [gluster9]
    name=Gluster 9
    baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-9/
    gpgcheck=1
    gpgkey=https://centos.org/keys/RPM-GPG-KEY-CentOS-SIG-Storage
    enabled=1
    
  7. Install the GlusterFS server.
    yum update
    yum install glusterfs-server
    
  8. Start the GlusterFS daemon and set it to start at boot.
    systemctl start glusterd
    systemctl enable glusterd
    

It is possible to replace GlusterFS with Red Hat Gluster Storage. Search for "Gluster Storage" on the Red Hat website for instructions on how to configure it.

Prepare TLS certificates

  1. Create the TLS private key for GlusterFS.
    openssl genrsa -out /etc/ssl/glusterfs.key 2048
  2. Generate the .pem certificate for GlusterFS. You can use a certificate from a CA instead of generating a self-signed certificate.
    openssl req -new -x509 -days 3650 -key /etc/ssl/glusterfs.key -subj '/CN=gluster' -out /etc/ssl/glusterfs.pem
  3. Copy the glusterfs.pem file to a .ca file.
    cp /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.ca
  4. Set ownership of the glusterfs.key file and remove read, write, and execute permissions for others.
    chown <user>:<group> /etc/ssl/glusterfs.key
    chmod o-rwx /etc/ssl/glusterfs.key
    
  5. Create the directory and control file to make GlusterFS use TLS.
    mkdir -p /var/lib/glusterd/
    touch /var/lib/glusterd/secure-access
    
  6. Copy the files for the TLS configuration. Store the copies in a safe place.

    You will need these files to connect client machines to the file share.

    tar -C /etc/ssl -cvzf glusterkeys.tgz glusterfs.ca glusterfs.key glusterfs.pem
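The certificate steps above can be combined into a single script. This sketch writes into a temporary directory instead of /etc/ssl so it can be tried without root; for a real deployment, use the /etc/ssl paths shown in the steps. The CN "gluster" matches the example above.

```shell
# Sketch of steps 1-6 against a temporary directory instead of /etc/ssl.
dir=$(mktemp -d)
openssl genrsa -out "$dir/glusterfs.key" 2048
openssl req -new -x509 -days 3650 -key "$dir/glusterfs.key" \
  -subj '/CN=gluster' -out "$dir/glusterfs.pem"
cp "$dir/glusterfs.pem" "$dir/glusterfs.ca"
chmod o-rwx "$dir/glusterfs.key"
# Bundle the three files for transfer to client machines (step 6).
tar -C "$dir" -czf "$dir/glusterkeys.tgz" glusterfs.ca glusterfs.key glusterfs.pem
ls "$dir"
```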

Configure the shared volumes

  1. Create the shared directories used by Splunk SOAR (On-premises).
    cd /data/gluster/
    mkdir -p apps app_states scm tmp/shared vault
    
  2. Create the volumes in GlusterFS from the directories. Repeat for each volume: apps, app_states, scm, tmp, and vault.

    gluster volume create <volume name> transport tcp <GlusterFS hostname>:/data/gluster/<volume name> force

  3. Activate SSL/TLS for each volume. Repeat for each volume: apps, app_states, scm, tmp, and vault.
    gluster volume set <volume name> client.ssl on
    gluster volume set <volume name> server.ssl on
    gluster volume set <volume name> auth.ssl-allow '*'
    
  4. Start each volume. Repeat for each volume: apps, app_states, scm, tmp, and vault.
    gluster volume start <volume name>
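Because steps 2 through 4 repeat for five volumes, it can help to generate the commands in a loop. This sketch only prints the commands; the hostname is a placeholder. Review the output, then run the commands on the GlusterFS server.

```shell
# Sketch: print the create/ssl/start commands from steps 2-4 for each
# volume. GLUSTER_HOST is a placeholder for your GlusterFS hostname.
GLUSTER_HOST="gluster.example.com"
cmds=""
for vol in apps app_states scm tmp vault; do
  cmds="${cmds}gluster volume create $vol transport tcp $GLUSTER_HOST:/data/gluster/$vol force
gluster volume set $vol client.ssl on
gluster volume set $vol server.ssl on
gluster volume set $vol auth.ssl-allow '*'
gluster volume start $vol
"
done
printf '%s' "$cmds"
```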

Configure Splunk SOAR (On-premises) to connect to the GlusterFS file shares

Follow these steps to connect your deployment to your GlusterFS file shares. If you are connecting a clustered deployment, repeat these steps for each SOAR cluster node.

Each node in your cluster must have the same TLS keys stored in /etc/ssl/. Make sure to use the keys generated during GlusterFS installation.

If you are using the web interface to add new cluster nodes, you will need to supply the TLS keys in Administration > Product Settings > Clustering.

  1. Create the directory and control file to make GlusterFS use TLS.
    mkdir -p /var/lib/glusterd/
    touch /var/lib/glusterd/secure-access
    
  2. Copy your glusterkeys.tgz file to /etc/ssl/ on the instance.
  3. Extract the tar file.
    tar xvzf glusterkeys.tgz
  4. Delete the glusterkeys.tgz file from /etc/ssl/.

Sync cluster nodes to the shared volumes

Splunk SOAR (On-premises) nodes must sync their local files to your newly shared volumes. The local directories for apps, app_states, scm, tmp/shared, and vault contain files that need to be preserved for use by your instance or cluster.

In a clustered environment, sync data only from the first node. Syncing data from additional nodes overwrites the data from the first node.

  1. Stop services on each node of the cluster.
    stop_phantom.sh
  2. Mount the local volumes to a temporary directory.
    mkdir -p /tmp/phantom/<volume>
    mount -t glusterfs <hostname of external file share>:<glusterfs volume name> /tmp/phantom/<volume>
    
    Mount the shared directory differently, because its GlusterFS volume is named tmp.
    mkdir -p /tmp/phantom/shared
    mount -t glusterfs <hostname of external file share>:tmp /tmp/phantom/shared

    If you get the error message mount: unknown filesystem type 'glusterfs', the GlusterFS client software is not installed. See Prepare the GlusterFS server.

  3. Sync local data to the temporary location.
    rsync -ah --progress <path/to/local/volume>/ /tmp/phantom/<volume>/
    Sync the shared directory using this command.
    rsync -ah --progress <path/to/local/volume>/tmp/shared/ /tmp/phantom/shared/
    Repeat for each volume: apps, app_states, scm, and shared.
  4. Sync the vault.
    rsync -ah --exclude tmp --exclude chunks --progress <path/to/local/vault>/ /tmp/phantom/vault/
    Sync the vault separately because it often contains very large amounts of data.
  5. Unmount the temporary volumes. Repeat for each volume: apps, app_states, scm, tmp/shared, and vault.
    umount /tmp/phantom/<volume>
  6. Edit the cluster member's file system table, /etc/fstab, to mount the GlusterFS volumes. Your fstab entries must not have line breaks.
    <glusterfs_hostname>:/apps /<phantom_install_dir>/apps glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/app_states /<phantom_install_dir>/local_data/app_states glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/scm /<phantom_install_dir>/scm glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/tmp /<phantom_install_dir>/tmp/shared glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/vault /<phantom_install_dir>/vault glusterfs defaults,_netdev 0 0
    
  7. Mount all the volumes to make them available.
    mount /<phantom_install_dir>/apps
    mount /<phantom_install_dir>/local_data/app_states
    mount /<phantom_install_dir>/scm
    mount /<phantom_install_dir>/tmp/shared
    mount /<phantom_install_dir>/vault
    
  8. Update the ownership of the mounted volumes.
    chown <phantom user>:<phantom group> /<phantom_install_dir>/apps
    chown <phantom user>:<phantom group> /<phantom_install_dir>/local_data/app_states
    chown <phantom user>:<phantom group> /<phantom_install_dir>/scm
    chown <phantom user>:<phantom group> /<phantom_install_dir>/tmp/shared
    chown <phantom user>:<phantom group> /<phantom_install_dir>/vault
  9. Start services on all cluster nodes.
    start_phantom.sh
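The fstab entries in step 6 follow a fixed pattern, so they can be generated rather than typed by hand. In this sketch the hostname and install directory are placeholders; note that app_states mounts under local_data and the tmp volume mounts at tmp/shared, as in the entries above.

```shell
# Sketch: print the five fstab lines from step 6. Hostname and install
# directory are placeholders; substitute your real values.
GLUSTER_HOST="gluster.example.com"
PHANTOM_DIR="/opt/phantom"
fstab=""
for pair in apps:apps app_states:local_data/app_states scm:scm tmp:tmp/shared vault:vault; do
  vol=${pair%%:*}   # GlusterFS volume name
  mnt=${pair#*:}    # mount point relative to the install directory
  fstab="${fstab}${GLUSTER_HOST}:/${vol} ${PHANTOM_DIR}/${mnt} glusterfs defaults,_netdev 0 0
"
done
printf '%s' "$fstab"
```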
Last modified on 08 January, 2024

This documentation applies to the following versions of Splunk® SOAR (On-premises): 5.3.6, 5.4.0

