Splunk Phantom uses several volumes for storage and implements GlusterFS to provide scalable, secure file shares. You can put these volumes on their own server, or on any server that has adequate storage and bandwidth.
You can use other file systems to provide shared storage for Splunk Phantom. Any file system that meets your organization's security and performance requirements is sufficient. You need to configure the required mounts and permissions. See Supported file systems and required directories.
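For illustration only, if you choose an NFS server instead of GlusterFS, the fstab entries on a Splunk Phantom node might look like the following sketch. The server name nfs01.example.com, the export paths, and the /opt/phantom install directory are hypothetical placeholders; use the directories and permissions listed in Supported file systems and required directories for your deployment.
# Hypothetical NFS mounts for the Splunk Phantom shared directories
nfs01.example.com:/exports/apps /opt/phantom/apps nfs defaults,_netdev 0 0
nfs01.example.com:/exports/app_states /opt/phantom/local_data/app_states nfs defaults,_netdev 0 0
nfs01.example.com:/exports/scm /opt/phantom/scm nfs defaults,_netdev 0 0
nfs01.example.com:/exports/tmp_shared /opt/phantom/tmp/shared nfs defaults,_netdev 0 0
nfs01.example.com:/exports/vault /opt/phantom/vault nfs defaults,_netdev 0 0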
You can run GlusterFS as an expandable cluster of servers that provides a single mount point for access. While you can run GlusterFS on a single server, three or more servers provide more options for redundancy and high availability.
These instructions cover configuring only a single server and the required shares for Splunk Phantom. To achieve high availability, data redundancy, and other features of GlusterFS, see the GlusterFS Documentation.
Prepare the GlusterFS server
- Install and configure one of the supported operating systems according to your organization's requirements.
- Install the prerequisites.
yum install -y wget curl ntp
- Synchronize the system clock.
ntpdate -v -u 0.centos.pool.ntp.org
- Configure your firewall to allow access for Splunk Phantom nodes and other members of your GlusterFS cluster. For a complete list of ports, see Splunk Phantom required ports.
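As an illustration only, the following firewalld commands open ports that GlusterFS commonly uses by default (24007-24008 for the management daemon and 49152-49251 for brick processes). These port ranges are an assumption about typical GlusterFS defaults, not the authoritative list; confirm the ports you need against Splunk Phantom required ports before applying any rules.
# Open common GlusterFS default ports (verify against Splunk Phantom required ports)
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49251/tcp
firewall-cmd --reload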
- Format and mount the storage partition. This partition must be separate from the operating system partition. The partition must be formatted with a file system that supports extended attributes.
mkfs.xfs /dev/<device_name>
mkdir -p /data/gluster
echo '/dev/<device_name> /data/gluster xfs defaults 0 0' >> /etc/fstab
mount -a && mount
- Install the phantom-base repository.
- CentOS or RHEL version 6:
- CentOS or RHEL version 7:
If you are seeing <build_number> in the RPM commands, you are using the offline documentation. The online documentation at docs.splunk.com has updated RPM commands.
- Update yum.
yum update
- Install GlusterFS server.
yum install -y glusterfs-server-7.5-1.el7
- Start the GlusterFS daemon and set it to start at boot.
systemctl start glusterd
systemctl enable glusterd
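To confirm that the daemon is running and will start at boot, you can optionally check its status and the installed version:
systemctl status glusterd
gluster --version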
Prepare TLS certificates
- Create the TLS private key for GlusterFS.
openssl genrsa -out /etc/ssl/glusterfs.key 2048
- Generate the self-signed .pem certificate for GlusterFS. You can use a certificate from a CA instead of generating a self-signed certificate.
openssl req -new -x509 -days 3650 -key /etc/ssl/glusterfs.key -subj '/CN=gluster' -out /etc/ssl/glusterfs.pem
- Copy the glusterfs.pem file to a .ca file.
cp /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.ca
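If you want to confirm the certificate's subject and validity period before distributing it, you can optionally inspect the .pem file with openssl:
openssl x509 -in /etc/ssl/glusterfs.pem -noout -subject -dates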
- Set ownership of the glusterfs.key file and remove read, write, and execute permissions for other users.
chown <user>:<group> /etc/ssl/glusterfs.key
chmod o-rwx /etc/ssl/glusterfs.key
- Create the directory and control file to make GlusterFS use TLS.
mkdir -p /var/lib/glusterd/
touch /var/lib/glusterd/secure-access
- Copy the files for the TLS configuration. Store the copies in a safe place.
You will need these files to connect client machines to the file share.
tar -C /etc/ssl -cvzf glusterkeys.tgz glusterfs.ca glusterfs.key glusterfs.pem
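You can optionally list the archive contents to confirm that all three files were captured before copying it off the server:
tar -tzf glusterkeys.tgz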
Create the GlusterFS volumes
- Create the shared directories used by Splunk Phantom.
cd /data/gluster/
mkdir -p apps app_states scm tmp/shared vault
- Create the volumes in GlusterFS from the directories. Repeat for each volume: apps, app_states, scm, tmp, and vault.
gluster volume create <volume name> transport tcp <GlusterFS hostname>:/data/gluster/<volume name> force
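For example, with a hypothetical GlusterFS server named gluster01.example.com, the command for the apps volume would look like the following; the other volumes follow the same pattern.
# Creates a single-brick volume named "apps" backed by /data/gluster/apps
gluster volume create apps transport tcp gluster01.example.com:/data/gluster/apps force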
- Activate SSL/TLS for each volume. Repeat for each volume: apps, app_states, scm, tmp, and vault.
gluster volume set <volume name> client.ssl on
gluster volume set <volume name> server.ssl on
gluster volume set <volume name> auth.ssl-allow '*'
- Start each volume. Repeat for each volume: apps, app_states, scm, tmp, and vault.
gluster volume start <volume name>
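To verify that each volume was created, has SSL/TLS enabled, and is started, you can optionally review its settings and status:
gluster volume info <volume name>
gluster volume status <volume name>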
Install the TLS certificates on the Splunk Phantom nodes
Each Splunk Phantom node in your cluster must have the same TLS keys stored in /etc/ssl/. Make sure to use the keys generated during GlusterFS installation.
If you are using the Splunk Phantom GUI to add new cluster nodes, you will need to supply the TLS keys in Administration > Product Settings > Clustering.
- Create the directory and control file to make GlusterFS use TLS.
mkdir -p /var/lib/glusterd/
touch /var/lib/glusterd/secure-access
- Copy your glusterkeys.tgz file to /etc/ssl/ on the Splunk Phantom instance.
- Extract the tar file.
tar xvzf glusterkeys.tgz
- Delete the glusterkeys.tgz file from /etc/ssl/.
Sync Splunk Phantom data to the shared volumes
Splunk Phantom nodes must sync their local files to your newly shared volumes. The local directories for apps, app_states, scm, tmp/shared, and vault contain files that need to be preserved for use by your Splunk Phantom instance or cluster.
In a clustered environment, data only needs to be synced from the first node. Syncing data from additional nodes will overwrite data from the first node.
- Stop Splunk Phantom services on each node of the cluster.
stop_phantom.sh
- Mount each GlusterFS volume to a temporary directory.
mkdir -p /tmp/phantom/<volume>
mount -t glusterfs <hostname of external file share>:<glusterfs volume name> /tmp/phantom/<volume>
If you get the error message mount: unknown filesystem type 'glusterfs', then you have not installed GlusterFS. See Prepare the GlusterFS server.
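As a concrete sketch, assuming a hypothetical GlusterFS server named gluster01.example.com, mounting the apps volume to its temporary directory would look like this:
# Mount the "apps" GlusterFS volume to its temporary directory
mkdir -p /tmp/phantom/apps
mount -t glusterfs gluster01.example.com:/apps /tmp/phantom/apps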
- Sync local data to the temporary location. Repeat for each volume: apps, app_states, scm, and tmp/shared.
rsync -ah --progress <path/to/local/volume>/ /tmp/phantom/<volume>/
- Sync the vault. Sync the vault separately because it often contains very large amounts of data.
rsync -ah --exclude tmp --exclude chunks --progress <path/to/local/vault>/ /tmp/phantom/vault/
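For example, assuming Splunk Phantom is installed in /opt/phantom (adjust the path to match your installation directory), the sync commands for the apps volume and the vault would be:
# Sync local apps data, then the vault, to the temporarily mounted GlusterFS volumes
rsync -ah --progress /opt/phantom/apps/ /tmp/phantom/apps/
rsync -ah --exclude tmp --exclude chunks --progress /opt/phantom/vault/ /tmp/phantom/vault/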
- Unmount the temporary volumes. Repeat for each volume: apps, app_states, scm, tmp/shared, and vault.
umount /tmp/phantom/<volume>
- Edit the cluster member's file system table, /etc/fstab, to mount the GlusterFS volumes. Each fstab entry must be a single line with no line breaks.
<glusterfs_hostname>:/apps /<phantom_install_dir>/apps glusterfs defaults,_netdev 0 0
<glusterfs_hostname>:/app_states /<phantom_install_dir>/local_data/app_states glusterfs defaults,_netdev 0 0
<glusterfs_hostname>:/scm /<phantom_install_dir>/scm glusterfs defaults,_netdev 0 0
<glusterfs_hostname>:/tmp /<phantom_install_dir>/tmp/shared glusterfs defaults,_netdev 0 0
<glusterfs_hostname>:/vault /<phantom_install_dir>/vault glusterfs defaults,_netdev 0 0
- Mount all the volumes to make them available.
mount /<phantom_install_dir>/apps
mount /<phantom_install_dir>/local_data/app_states
mount /<phantom_install_dir>/scm
mount /<phantom_install_dir>/tmp/shared
mount /<phantom_install_dir>/vault
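You can optionally confirm that all five GlusterFS mounts are active before restarting services:
# GlusterFS mounts appear with the fuse.glusterfs file system type
df -hT | grep glusterfs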
- Start Splunk Phantom services on all cluster nodes.
start_phantom.sh