Splunk® Phantom

Install and Upgrade Splunk Phantom




Set up external file shares using GlusterFS

Splunk Phantom uses several volumes for storage, and implements GlusterFS to provide scalable, secure file shares. You can host these volumes on their own server, or on any server that has adequate storage and bandwidth.

You can use other file systems to provide shared storage for Splunk Phantom. Any file system that meets your organization's security and performance requirements will work, as long as you configure the required mounts and permissions. See Supported file systems and required directories.

You can run GlusterFS as an expandable cluster of servers that provides a single mount point for access. While you can run GlusterFS on a single server, three or more servers provide more options for redundancy and high availability.

These instructions cover configuring only a single server and the required shares for Splunk Phantom. To achieve high availability, data redundancy, and other features of GlusterFS, see the GlusterFS Documentation.

Prepare the GlusterFS server

  1. Install and configure one of the supported operating systems according to your organization's requirements.
  2. Install the prerequisites.
    yum install -y wget curl ntp
  3. Synchronize the system clock.
    ntpdate -v -u 0.centos.pool.ntp.org
  4. Configure your firewall to allow access for Splunk Phantom nodes and other members of your GlusterFS cluster. For a complete list of ports, see Splunk Phantom required ports.
  5. Format and mount the storage partition. This partition must be separate from the operating system partition. The partition must be formatted with a file system that supports extended attributes.
    mkfs.xfs /dev/<device name>
    mkdir -p /data/gluster
    echo '/dev/<device name> /data/gluster xfs defaults 0 0' >> /etc/fstab
    mount -a && mount
    
  6. Install the phantom-base repository.
  7. Update yum.
    yum update
  8. Install GlusterFS server.
    yum install -y glusterfs-server-4.1.6-1.el7
  9. Start the GlusterFS daemon and set it to start at boot.
    systemctl start glusterd
    systemctl enable glusterd
    
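The storage-preparation commands in step 5 can be collected and reviewed before they are run as root. This is a sketch only: the device name `/dev/sdb1` and brick path are placeholder assumptions, so substitute your own partition before running anything.

```shell
# Sketch only: print the storage-preparation commands for review
# before running them as root. DEVICE and BRICK_DIR are placeholder
# assumptions; substitute your own partition and brick path.
DEVICE="/dev/sdb1"
BRICK_DIR="/data/gluster"
prep_cmds=$(cat <<EOF
mkfs.xfs $DEVICE
mkdir -p $BRICK_DIR
echo '$DEVICE $BRICK_DIR xfs defaults 0 0' >> /etc/fstab
mount -a && mount
EOF
)
echo "$prep_cmds"
```

Printing the commands first makes it easy to confirm the device name is correct before formatting, since mkfs.xfs destroys any existing data on the partition.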

Prepare TLS certificates

  1. Generate the TLS private key for GlusterFS.
    openssl genrsa -out /etc/ssl/glusterfs.key 2048
  2. Generate a self-signed certificate from the key. You can use a certificate signed by a CA instead of a self-signed certificate.
    openssl req -new -x509 -days 3650 -key /etc/ssl/glusterfs.key -subj '/CN=gluster' -out /etc/ssl/glusterfs.pem
  3. Copy the glusterfs.pem file to a .ca file.
    cp /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.ca
  4. Set ownership on the glusterfs.key file and remove read, write, and execute permissions for other users.
    chown <user>:<group> /etc/ssl/glusterfs.key
    chmod o-rwx /etc/ssl/glusterfs.key
    
  5. Create the directory and control file to make GlusterFS use TLS.
    mkdir -p /var/lib/glusterd/
    touch /var/lib/glusterd/secure-access
    
  6. Copy the files for the TLS configuration. Store the copies in a safe place.

    You will need these files to connect client machines to the file share.

    tar -C /etc/ssl -cvzf glusterkeys.tgz glusterfs.ca glusterfs.key glusterfs.pem
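If you want to sanity-check the certificate steps above before writing to /etc/ssl, the same openssl commands can be run against a temporary directory first. This is an illustration only; repeat the real steps against /etc/ssl afterward.

```shell
# Illustration: run the same key and certificate steps in a temporary
# directory, then confirm the certificate subject before repeating
# them for real under /etc/ssl.
DEMO=$(mktemp -d)
openssl genrsa -out "$DEMO/glusterfs.key" 2048
openssl req -new -x509 -days 3650 -key "$DEMO/glusterfs.key" \
    -subj '/CN=gluster' -out "$DEMO/glusterfs.pem"
cp "$DEMO/glusterfs.pem" "$DEMO/glusterfs.ca"
# the subject line should report CN=gluster
subject=$(openssl x509 -in "$DEMO/glusterfs.pem" -noout -subject)
echo "$subject"
```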

Configure the shared volumes

  1. Create the shared directories used by Splunk Phantom.
    cd /data/gluster/
    mkdir -p apps app_states scm tmp/shared vault
    
  2. Create the volumes in GlusterFS from the directories. Repeat for each volume: apps, app_states, scm, tmp, and vault.

    gluster volume create <volume name> transport tcp <GlusterFS hostname>:/data/gluster/<volume name> force

  3. Activate SSL/TLS for each volume. Repeat for each volume: apps, app_states, scm, tmp, and vault.
    gluster volume set <volume name> client.ssl on
    gluster volume set <volume name> server.ssl on
    gluster volume set <volume name> auth.ssl-allow '*'
    
  4. Start each volume. Repeat for each volume: apps, app_states, scm, tmp, and vault.
    gluster volume start <volume name>
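Steps 2 through 4 above repeat the same commands for each of the five volumes. A short loop can print the complete command list for review; the hostname gluster01 is a placeholder assumption, so substitute your GlusterFS server's hostname.

```shell
# Sketch: print every gluster command for the five Phantom volumes so
# the list can be reviewed before running it on the GlusterFS server.
# GLUSTER_HOST is a placeholder; substitute your server's hostname.
GLUSTER_HOST="gluster01"
vol_cmds=""
for vol in apps app_states scm tmp vault; do
  vol_cmds+="gluster volume create $vol transport tcp ${GLUSTER_HOST}:/data/gluster/$vol force"$'\n'
  vol_cmds+="gluster volume set $vol client.ssl on"$'\n'
  vol_cmds+="gluster volume set $vol server.ssl on"$'\n'
  vol_cmds+="gluster volume set $vol auth.ssl-allow '*'"$'\n'
  vol_cmds+="gluster volume start $vol"$'\n'
done
printf '%s' "$vol_cmds"
```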

Configure Splunk Phantom cluster nodes to connect to the GlusterFS file shares

Each Splunk Phantom node in your cluster must have the same TLS keys stored in /etc/ssl/. Make sure to use the keys generated during GlusterFS installation.

If you are using the Splunk Phantom GUI to add new cluster nodes, you will need to supply the TLS keys in Administration > Product Settings > Clustering.

  1. Create the directory and control file to make GlusterFS use TLS.
    mkdir -p /var/lib/glusterd/
    touch /var/lib/glusterd/secure-access
    
  2. Copy your glusterkeys.tgz file to /etc/ssl/ on the Splunk Phantom instance.
  3. Extract the tar file.
    tar xvzf glusterkeys.tgz
  4. Delete the glusterkeys.tgz file from /etc/ssl/.
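The client-side steps above can be sketched as one script. The PREFIX variable is an assumption added here so the sketch can be tried without root; on a real Splunk Phantom node, set PREFIX to an empty string so the files land in the actual /var/lib/glusterd/ and /etc/ssl/ paths.

```shell
# Sketch of the client-side TLS setup, assuming a PREFIX staging
# directory for safe experimentation (use PREFIX="" on a real node).
PREFIX="${PREFIX:-/tmp/gluster-client-demo}"
mkdir -p "$PREFIX/var/lib/glusterd" "$PREFIX/etc/ssl"
# control file that switches GlusterFS client traffic to TLS
touch "$PREFIX/var/lib/glusterd/secure-access"
# copy glusterkeys.tgz into place, extract it, then remove the archive:
#   cp glusterkeys.tgz "$PREFIX/etc/ssl/"
#   tar -C "$PREFIX/etc/ssl" -xvzf "$PREFIX/etc/ssl/glusterkeys.tgz"
#   rm "$PREFIX/etc/ssl/glusterkeys.tgz"
ls "$PREFIX/var/lib/glusterd"
```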

Sync Splunk Phantom cluster nodes to the shared volumes

Splunk Phantom nodes must sync their local files to your newly shared volumes. The local directories for apps, app_states, scm, tmp/shared, and vault contain files that need to be preserved for use by your Splunk Phantom instance or cluster.

In a clustered environment, data only needs to be synced from the first node. Syncing data from additional nodes will overwrite data from the first node.

  1. Stop Splunk Phantom services on each node of the cluster.
    stop_phantom.sh
  2. Mount the local volumes to a temporary directory.
    mkdir -p /tmp/phantom/<volume>
    mount -t glusterfs <hostname of external file share>:<glusterfs volume name> /tmp/phantom/<volume>
    

    If you get the error message mount: unknown filesystem type 'glusterfs', the GlusterFS client software is not installed. See Prepare the GlusterFS server.

  3. Sync local data to the temporary location.
    rsync -ah --progress <path/to/local/volume>/ /tmp/phantom/<volume>/
    Repeat for each volume: apps, app_states, scm, and tmp.
  4. Sync the vault.
    rsync -ah --exclude tmp --exclude chunks --progress <path/to/local/vault>/ /tmp/phantom/vault/
    Sync the vault separately because it often contains very large amounts of data.
  5. Unmount the temporary volumes. Repeat for each volume: apps, app_states, scm, tmp, and vault.
    umount /tmp/phantom/<volume>
  6. Edit the cluster member's file system table, /etc/fstab, to mount the GlusterFS volumes. Your fstab entries must not have line breaks.
    <glusterfs_hostname>:/apps /<phantom_install_dir>/apps glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/app_states /<phantom_install_dir>/local_data/app_states glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/scm /<phantom_install_dir>/scm glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/tmp /<phantom_install_dir>/tmp/shared glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/vault /<phantom_install_dir>/vault glusterfs defaults,_netdev 0 0
    
  7. Mount all the volumes to make them available.
    mount /<phantom_install_dir>/apps
    mount /<phantom_install_dir>/local_data/app_states
    mount /<phantom_install_dir>/scm
    mount /<phantom_install_dir>/tmp/shared
    mount /<phantom_install_dir>/vault
    
  8. Start Splunk Phantom services on all cluster nodes.
    start_phantom.sh
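Because each fstab entry in step 6 must sit on a single line, generating all five entries from one mapping reduces the chance of typos. The hostname gluster01 and install directory /opt/phantom below are placeholder assumptions; substitute your own values before appending the output to /etc/fstab.

```shell
# Sketch: generate the five fstab entries from one volume-to-mountpoint
# mapping. GLUSTER_HOST and PHANTOM_HOME are placeholder assumptions.
GLUSTER_HOST="gluster01"
PHANTOM_HOME="/opt/phantom"
fstab_entries=""
while read -r vol mnt; do
  fstab_entries+="${GLUSTER_HOST}:/${vol} ${PHANTOM_HOME}${mnt} glusterfs defaults,_netdev 0 0"$'\n'
done <<'EOF'
apps /apps
app_states /local_data/app_states
scm /scm
tmp /tmp/shared
vault /vault
EOF
printf '%s' "$fstab_entries"
```

Review the printed lines, then append them to /etc/fstab on each cluster node before running the mount commands in step 7.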
Last modified on 06 November, 2020

This documentation applies to the following versions of Splunk® Phantom: 4.8

