Splunk® Phantom (Legacy)

Install and Upgrade Splunk Phantom

Create a Splunk Phantom cluster using an unprivileged installation

Build a cluster by placing each of the supporting services on its own server, or group of servers, to serve the Splunk Phantom cluster nodes.

Set up each of the external services as the root user or as a user with sudo permissions.

Install Splunk Phantom as an unprivileged user. In your cluster, each Splunk Phantom instance must have the same custom username and install directory. See Install Splunk Phantom as an unprivileged user.

Number Task
1. Create the HAProxy node. Use the HAProxy server as a load balancer for the Splunk Phantom nodes in your cluster. See Set up a load balancer with an HAProxy server. Unprivileged clusters require additional steps to configure the load balancer to handle your custom HTTPS port.
2. Install Splunk Phantom using the TAR file method for unprivileged installs. Do this once for each node you need in your cluster. See Install Splunk Phantom as an unprivileged user.
3. Create the PostgreSQL node. Establish a PostgreSQL database server or cluster to store Splunk Phantom information. See Set up the external PostgreSQL server.
4. Create the file shares node. Splunk Phantom stores all of its shared files on the prepared GlusterFS server. You can use NFS or another network file system instead; instructions for those are not included in this document. See Set up external file shares using GlusterFS.
5. Create the Splunk Enterprise node. Splunk Phantom uses Splunk Enterprise for searches and collects data for indexing using the HTTP Event Collector. See Set up Splunk Enterprise.
6. Prepare Splunk Phantom instances to connect to the GlusterFS file share. This task must be completed by a user with root or sudo access. See Prepare an unprivileged Splunk Phantom instance to connect to the GlusterFS file share.
7. Convert Splunk Phantom instances to cluster nodes. Convert the first instance into a cluster node by running make_cluster_node.pyc, then repeat on each Splunk Phantom instance that will become a cluster node. See Run make_cluster_node.pyc.
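For task 1, a load balancer for an unprivileged cluster can pass HTTPS traffic through to the nodes' custom port. The fragment below is a minimal haproxy.cfg sketch, not a Splunk-provided configuration: the port 9999 and the node addresses are assumptions for illustration (unprivileged installs listen on a port above 1024), so substitute your own values.

```
# Sketch: TCP passthrough of the custom HTTPS port to the cluster nodes.
# Port 9999 and the node IPs are placeholder assumptions.
frontend phantom_https
    mode tcp
    bind *:9999
    default_backend phantom_nodes

backend phantom_nodes
    mode tcp
    balance roundrobin
    server node1 10.0.0.11:9999 check
    server node2 10.0.0.12:9999 check
```

Using mode tcp passes the TLS session through unmodified, so the Splunk Phantom nodes terminate TLS with their own certificates.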

Prepare an unprivileged Splunk Phantom instance to connect to the GlusterFS file share

Before you can convert an unprivileged Splunk Phantom instance into a node in an unprivileged cluster, you must prepare each of the Splunk Phantom instances to connect to the GlusterFS file share.

Do these steps as a user with root or sudo access to the Splunk Phantom instance.

  1. Install the GlusterFS client on each Splunk Phantom instance.
    yum install glusterfs-fuse -y
  2. Add the required TLS keys for the GlusterFS server, and the GlusterFS directory and control file, to each Splunk Phantom instance. See Configure Splunk Phantom cluster nodes to connect to the GlusterFS file shares in Set up external file shares using GlusterFS.
  3. Edit the cluster member's file system table, /etc/fstab, to mount the GlusterFS volumes. Your fstab entries must not have line breaks.
    <glusterfs_hostname>:/apps /<phantom_install_dir>/apps glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/app_states /<phantom_install_dir>/local_data/app_states glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/scm /<phantom_install_dir>/scm glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/tmp /<phantom_install_dir>/tmp/shared glusterfs defaults,_netdev 0 0
    <glusterfs_hostname>:/vault /<phantom_install_dir>/vault glusterfs defaults,_netdev 0 0
    
  4. Mount all the volumes to make them available.
    mount /<phantom_install_dir>/apps
    mount /<phantom_install_dir>/local_data/app_states
    mount /<phantom_install_dir>/scm
    mount /<phantom_install_dir>/tmp/shared
    mount /<phantom_install_dir>/vault
    
  5. Start Splunk Phantom services on all cluster nodes.
    <phantom_install_dir>/bin/start_phantom.sh
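Because every node needs the same five fstab entries with only the hostname and install directory varying, it can help to generate them rather than type them by hand. The helper below is a hypothetical sketch, not part of the Splunk Phantom tooling; gen_fstab_entries and the example hostname and install directory are placeholders you would replace with your own values.

```shell
# Hypothetical helper: print the five GlusterFS fstab entries for one node.
# Arguments: the GlusterFS hostname and the Phantom install directory.
gen_fstab_entries() {
  gfs_host="$1"       # e.g. gluster.example.com
  phantom_home="$2"   # e.g. /opt/phantom

  for vol in apps app_states scm tmp vault; do
    # Two volumes mount under nested directories; the rest mount by name.
    case "$vol" in
      app_states) mnt="$phantom_home/local_data/app_states" ;;
      tmp)        mnt="$phantom_home/tmp/shared" ;;
      *)          mnt="$phantom_home/$vol" ;;
    esac
    printf '%s:/%s %s glusterfs defaults,_netdev 0 0\n' \
      "$gfs_host" "$vol" "$mnt"
  done
}

# Print the entries for review before appending them to /etc/fstab as root,
# for example: gen_fstab_entries gluster.example.com /opt/phantom | sudo tee -a /etc/fstab
gen_fstab_entries gluster.example.com /opt/phantom
```

After appending the entries and running the mount commands in step 4, you can confirm the volumes are attached with `mount | grep glusterfs`.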
Last modified on 09 November, 2020

This documentation applies to the following versions of Splunk® Phantom (Legacy): 4.10, 4.10.1, 4.10.2, 4.10.3, 4.10.4, 4.10.6, 4.10.7

