Splunk® Phantom

Install and Upgrade Splunk Phantom


Create a Splunk Phantom Cluster in Amazon Web Services

Build a cluster from AMI-based instances of Splunk Phantom, using AWS native components to provide several of the required services: Elastic Load Balancer (ELB), Elastic File System (EFS), and Relational Database Service (RDS).

This configuration is built using the Amazon Marketplace Image of Splunk Phantom. This release is an unprivileged version of Splunk Phantom that runs under the user account phantom.

Converting a Splunk Phantom installation to a server or cluster node is a one-way operation. It cannot be reverted.

Build a cluster with AWS services

  1. Launch and prepare AMI instances of Splunk Phantom. Total number of Splunk Phantom AMI instances = Number of cluster nodes + 1. See Launch and prepare AMI-based instances of Splunk Phantom.
  2. Create a load balancer with Elastic Load Balancer (ELB): create the ELB, create the target group, and add the routing rules. See Create a load balancer with Elastic Load Balancer (ELB).
  3. Create the file stores with Elastic File System (EFS). Create the EFS file store for shared files. See Create the file stores with Elastic File System (EFS).
  4. Create the external database with Relational Database Service (RDS): create the external PostgreSQL database, then create the pgbouncer user. See Create the external PostgreSQL database with the Relational Database Service (RDS).
  5. Add the file shares to each Splunk Phantom instance. Mount the file shares on each Splunk Phantom instance. See Add file shares to each Splunk Phantom instance.
  6. Convert one of the Splunk Phantom instances into the Splunk Enterprise instance. This instance serves as the external search endpoint for the entire cluster. Use the make_server_node.pyc script with the splunk argument. See Convert an AMI-based Splunk Phantom instance into the Splunk Enterprise instance.
  7. Convert the first AMI-based Splunk Phantom instance into a cluster node. Creating the first node uses a script option to record all the make_cluster_node.pyc answers to a file for use on each of your other nodes. See Convert the first AMI-based Splunk Phantom instance into a cluster node.
  8. Convert the remaining AMI-based Splunk Phantom instances into cluster nodes using make_cluster_node.pyc and the recorded response file. See Convert the remaining AMI-based Splunk Phantom instances into cluster nodes.

Launch and prepare AMI-based instances of Splunk Phantom

You need a number of AMI-based Splunk Phantom instances equal to the number of Splunk Phantom nodes you want in your cluster plus one. The additional instance will be converted into the externalized Splunk Enterprise instance for your cluster. A Splunk Phantom cluster requires a minimum of three nodes.

Total number of Splunk Phantom AMI instances = Number of cluster nodes + 1
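The sizing formula above can be checked with quick shell arithmetic; the node count here is an example value:

```shell
# Instance count for a Splunk Phantom cluster: one AMI instance per cluster
# node, plus one extra instance to convert into Splunk Enterprise.
NODES=3                          # example: the minimum supported cluster size
INSTANCES=$(( NODES + 1 ))
echo "Launch $INSTANCES AMI instances for a $NODES-node cluster."
```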

If you already have a Splunk Enterprise deployment that you will use instead, follow the instructions for using an external Splunk Enterprise instance. See Set up Splunk Enterprise.

Installation

  1. Log in to your AWS EC2 account.
  2. From your EC2 dashboard, select Launch Instance.
  3. In the AWS Marketplace, search for Splunk Phantom.
  4. On the Amazon Machine Image entry, click Select.
  5. Click Continue.
  6. Select an instance size. The default is m5.xlarge. Splunk Phantom does not support using instances smaller than t2.xlarge.
  7. Click Next: Configure Instance Details.
  8. For Number of Instances, type the number of instances you need. Total number of Splunk Phantom AMI instances = Number of cluster nodes + 1
  9. Configure the instance according to your organization's policies. See Splunk Phantom required ports for more information.

    Because this is an unprivileged version of Splunk Phantom, you will need to be sure to open the custom HTTPS port 9999 for your instances.

  10. Click Next: Add Storage.
  11. Add storage.

    You can increase disk size later, but you cannot decrease disk size.

  12. Click Next: Add Tags.
  13. Add tags to help identify your Splunk Phantom installation in your EC2 dashboard.
  14. Click Next: Configure Security Group.
  15. Configure Security Groups. By default, SSH, HTTP, and HTTPS are permitted from all IP addresses. Increase security by limiting access to your organization's IP addresses.
  16. Click Review and Launch.
  17. Generate or choose SSH keys.
  18. Click Launch Instances. The installation typically takes 15 minutes to complete.

To log in to the operating system of your AMI-based Splunk Phantom installation using SSH, use the phantom user account. If you need root access, run sudo su - to elevate to root.

Install SSH keys

During the conversion to Splunk Phantom cluster nodes, each instance will need to SSH as the phantom user into other nodes. Install the client certificate you generated for SSH when the instances were created.

Do this on each of the instances that you will convert to cluster nodes.

  1. Copy the .pem file generated earlier to each instance using SCP.
    scp -i <path/to/.pem> <path/to/.pem to transfer> phantom@<instance IP or DNS name>:~/
  2. SSH to an AMI-based Splunk Phantom instance as the phantom user.
  3. Move the .pem key to the phantom user's .ssh directory.
    mv <name of file>.pem .ssh
  4. Set the permissions on the .pem key.
    chmod 600 .ssh/<name of file>.pem
  5. Test that you are able to SSH from each instance to the others as the phantom user.

Create a load balancer with Elastic Load Balancer (ELB)

Create a load balancer for your Splunk Phantom cluster. An Elastic Load Balancer will be used instead of HAProxy.

  1. Log in to your AWS EC2 account.
  2. From the menu on the EC2 dashboard, under the heading Load Balancing, choose Load Balancers.
  3. Click Create Load Balancer.
  4. Under Application Load Balancer, click Create.
  5. Type a name for your load balancer in the Name field.
  6. Select a Scheme. The scheme will depend on your AWS network configuration. Assuming your load balancer will route on an internal network, select the internal radio button.
  7. Set the IP address type. This will also depend on your AWS network configuration. In most cases, select ipv4 from the menu.
  8. Under Listeners, Load Balancer Protocol, select HTTPS from the menu. The Load Balancer Port changes to 443.
  9. Under Availability Zones, select the VPC and Availability Zones to match your AWS network configuration.
  10. Add Tags to help organize and identify your load balancer.
  11. Click Next: Configure Security Settings.
  12. Select or create a security group according to your organization's policies. These settings can vary based on factors outside the scope of this document.
  13. Click Next: Configure Routing.
  14. Under Target group, choose New target group from the menu.
  15. Type a name for your target group in the Name field.
  16. For Target type, select the Instance radio button.
  17. For Protocol, select HTTPS from the menu. Port changes to 443 automatically.

    You must also open the custom HTTPS port 9999.

  18. Under Health checks, set Protocol to HTTPS.

    Health checks will fail until you have run the make_cluster_node scripts to add your Splunk Phantom instances to your cluster. This is normal and expected.

  19. In the Path field, type /check.
  20. Click Next: Register Targets.
  21. Under Instances, find and select the cluster node instances for your Splunk Phantom cluster. You do not need to load balance the external services, such as PostgreSQL, file shares, or Splunk Enterprise.
  22. Click Add to registered.
  23. Click Next: Review.
  24. Review for and correct any errors.
  25. Click Create.
  26. Select the load balancer by name.
  27. From the Actions menu, select Edit attributes.
  28. Set the Idle timeout to 120 seconds.
  29. Click Save.

Create a Target Group for your cluster's websockets traffic

This target group routes websockets traffic for the Splunk Phantom cluster. See Target Groups for Your Application Load Balancers in the AWS documentation.

  1. In the sidebar on the EC2 dashboard, under Load Balancing, select Target Groups.
  2. Click Create target group.
  3. Now create the websockets target group. In the Create target group dialog:
    1. Type a name in the Target group name field.
    2. Select the Instance radio button.
    3. Select HTTPS from the Protocol menu. Port will change to 443.

      You must also open the custom HTTPS port 9999.

    4. Select the same VPC that your target Splunk Phantom instances are using from the menu.
    5. Under Health Check settings, select HTTPS from the Protocol menu.

      Health checks will fail until you have run the make_cluster_node scripts to add your Splunk Phantom instances to your cluster. This is normal and expected.

    6. In the Path field, type /check.
    7. Click Next: Register Targets.
    8. Under Instances, find and select the cluster node instances for your Splunk Phantom cluster. You do not need to load balance the external services, such as PostgreSQL, file shares, or Splunk Enterprise.
    9. Click Add to registered.
  4. Click Create.
  5. From the target groups list, select the target group you just created.
  6. On the Description tab, under Attributes, click Edit attributes.
  7. In the Edit attributes dialog:
    1. For Stickiness, select the Enable check box.
    2. Set Stickiness duration by typing 7 and choosing days from the menu.
    3. Click Save.

Setting the Stickiness duration is important so that websocket connections can persist. Use the longest duration available; setting this value too low results in connections being closed prematurely. Also set the idle_timeout.timeout_seconds attribute of your Elastic Load Balancer to as high a value as possible. See Application Load Balancers in the AWS documentation.
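The console steps above set a 7-day stickiness duration. Expressed in seconds, the unit the load balancer attribute uses, that is:

```shell
# 7 days expressed in seconds, the value behind the "7 days" console setting.
STICKINESS_SECONDS=$(( 7 * 24 * 60 * 60 ))
echo "$STICKINESS_SECONDS"   # 604800
# A hypothetical AWS CLI equivalent of the console steps (the target group
# ARN is a placeholder for your environment):
# aws elbv2 modify-target-group-attributes \
#   --target-group-arn <target-group-arn> \
#   --attributes Key=stickiness.enabled,Value=true \
#                Key=stickiness.lb_cookie.duration_seconds,Value="$STICKINESS_SECONDS"
```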

Add the routing rules to your load balancer

Here you create rules to route traffic.

  • One rule to route all the persistent connections to the websockets listener.
  • A second rule to route all other traffic to the other listener.
  1. From the menu on the EC2 dashboard, under Load Balancing, select Load Balancers.
  2. Select the load balancer you have created for your Splunk Phantom cluster.
  3. Click the Listeners tab.
  4. Under Rules, click the View/edit rules link.
  5. Click the + icon to add a new rule.
  6. Click the + Insert Rule link to edit the rule.
  7. Under IF (all match), click + Add condition.
  8. Select Path…, then type /websocket in the text box.
  9. Click the checkmark icon.

Create the file stores with Elastic File System (EFS)

Create shared file stores for your Splunk Phantom cluster. Cluster nodes store files that must be shared by all instances on these file shares. See System Requirements for more information.

Only instances in the VPC you select during EFS creation can connect to that file system. Make sure to use the same VPC for your EFS storage as you used for your Splunk Phantom instances.

  1. Under Configure file system access, select the desired VPC from the menu.
  2. Under Create mount targets, select the check boxes for the availability zones you need.
  3. Click Next Step.
  4. Set the security groups as required by your organization's policies.
  5. Under Configure optional settings, set options as required by your organization's requirements or policies.
  6. Click Next Step.
  7. Review the options selected, then click Create File System.

Create the external PostgreSQL database with the Relational Database Service (RDS)

Splunk Phantom uses a PostgreSQL 11 database. In many installations, the database runs on the same server as Splunk Phantom. For an AWS cluster, it makes sense to set up an external PostgreSQL database using RDS. This database will serve as the primary database for the Splunk Phantom cluster.

You may use any release of PostgreSQL 11.x. See Upgrading for support.

  1. From your EC2 dashboard, click Services in the menu bar, and under Database choose RDS.
  2. Click Create database.
  3. Select Standard Create.
  4. Under Engine options, select PostgreSQL.
  5. For Version, select 11.11 from the menu. You may use any PostgreSQL 11.x release.
  6. For Templates, select either Production for production environments or Dev/Test for development/testing or Proof of Value environments.
  7. Under Settings, type a name for your DB instance identifier. Make sure that the name is unique across all DB instances owned by your AWS account.
  8. Under Credential Settings:
    1. Master username: postgres
    2. Make sure the Auto generate a password checkbox is not selected.
    3. Type and confirm the Master password in the fields provided. Record this password. You will need it later.
  9. Under DB instance size, select the radio button that matches your organization's needs.

    Warning: Instances below db.t2.large may deplete their available connections before installation of your Splunk Phantom cluster is complete.

  10. Under Storage, select a Storage type based on your organization's needs.
    1. For Allocated storage, set a number of GiB that matches your organization's needs.

      Warning: Splunk Phantom databases with less than 500 gigabytes of storage are not supported for production use.

    2. Select the Enable storage autoscaling check box.
    3. Set Maximum storage threshold to 1000 (GiB).
  11. Under Availability & durability, select the Do not create a standby instance radio button.
  12. Under Connectivity, select the same VPC as you used for your Splunk Phantom instances.
  13. Under the Additional connectivity configuration section:
    1. Select the correct Subnet group. The available groups depend on your VPC selection.
    2. Under Publicly accessible, select the No radio button.
    3. Under VPC security group, select Choose existing.
    4. Select the appropriate security group from the menu.
    5. Click the X icon to remove any unwanted security groups that were added by default.
    6. Make sure the Database port is set to 5432.
  14. Under Additional configuration, Database options:
    1. Type phantom for Initial database name.
    2. Make sure the DB parameter group is set to default.postgres11.11. If you selected a different PostgreSQL version 11 earlier, set the parameter to match.
  15. Under Additional configuration, Backup, leave everything at the defaults.
  16. Click Create Database.

Create the pgbouncer user for the RDS

Splunk Phantom interacts with the PostgreSQL database using the pgbouncer user account. This account needs to be created for the database created in RDS.

  1. Log in to an AMI-based Splunk Phantom instance as the phantom user using SSH.
  2. Create the pgbouncer user.
    phenv psql --host <DNS name for RDS instance> --port 5432 --username postgres --echo-all --dbname phantom --command "CREATE ROLE pgbouncer WITH PASSWORD '<pgbouncer password>' login;"
  3. Make the pgbouncer user a superuser.
    phenv psql --host <DNS name for RDS instance> --port 5432 --username postgres --echo-all --dbname phantom --command "GRANT rds_superuser TO pgbouncer;"

Add file shares to each Splunk Phantom instance

Set up and mount the needed directories for your Splunk Phantom cluster. Do this in three stages: first, install the required packages; second, create the required shared directories in EFS and copy over existing data; third, mount the directories on all Splunk Phantom instances and make the mounts permanent.

Stage one:

Do this stage on each of your AMI-based Splunk Phantom instances.

  1. Log in to an AMI-based Splunk Phantom instance as the phantom user using SSH.
  2. Elevate to root.
    sudo su -
  3. Install the package nfs-utils.
    yum install nfs-utils

Stage two:

Do this stage on only one of your AMI-based Splunk Phantom instances. You will create a temporary directory, mount it to EFS, then use it to copy existing files to EFS.

  1. Log in to an AMI-based Splunk Phantom instance as the phantom user using SSH.
  2. Elevate to root.
    sudo su -
  3. Create a local mount on this instance. This mount will be used to replicate the required directory structure on EFS.
    mkdir -p /mnt/external
  4. Mount this directory from EFS.
    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/ /mnt/external
  5. Now copy the instance's files to EFS with rsync.
    rsync -avz /<PHANTOM_HOME>/apps /mnt/external/
    rsync -avz /<PHANTOM_HOME>/local_data/app_states /mnt/external/
    rsync -avz /<PHANTOM_HOME>/scm /mnt/external/
    rsync -avz /<PHANTOM_HOME>/tmp/shared /mnt/external/
    rsync -avz /<PHANTOM_HOME>/vault /mnt/external/
  6. Unmount the temporary mounting.
    umount /mnt/external
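The five rsync commands in step 5 can also be generated with a loop. This sketch only prints the commands for review rather than running them; the install path passed in is an assumption, so adjust it to your environment:

```shell
# Print, rather than run, the rsync command for each shared directory so the
# list can be reviewed before copying data to EFS.
phantom_efs_sync_cmds() {
  local home="$1" dir
  for dir in apps local_data/app_states scm tmp/shared vault; do
    echo "rsync -avz $home/$dir /mnt/external/"
  done
}
phantom_efs_sync_cmds /opt/phantom   # assumed install path; adjust to yours
```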

Stage three:

Do this stage on each of your AMI-based Splunk Phantom instances. Set the mounts for the shared directories to EFS, then update the file system table to make the directories mount from EFS when the instance starts.

  1. Log in to an AMI-based Splunk Phantom instance as the phantom user using SSH.
  2. Elevate to root.
    sudo su -
  3. Mount all the shared directories to EFS.
    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/apps /<PHANTOM_HOME>/apps/

    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/app_states /<PHANTOM_HOME>/local_data/app_states

    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/scm /<PHANTOM_HOME>/scm

    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/shared /<PHANTOM_HOME>/tmp/shared

    mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 <ip address or DNS name for EFS>:/vault /<PHANTOM_HOME>/vault
  4. Edit the file system table /etc/fstab to make the mounts permanent. Add these entries. You can get the EFS ID from your EFS dashboard.
    vi /etc/fstab
    <EFS ID>:/apps /<PHANTOM_HOME>/apps nfs4 defaults,_netdev 0 0
    <EFS ID>:/app_states /<PHANTOM_HOME>/local_data/app_states nfs4 defaults,_netdev 0 0
    <EFS ID>:/scm /<PHANTOM_HOME>/scm nfs4 defaults,_netdev 0 0
    <EFS ID>:/shared /<PHANTOM_HOME>/tmp/shared nfs4 defaults,_netdev 0 0
    <EFS ID>:/vault /<PHANTOM_HOME>/vault nfs4 defaults,_netdev 0 0
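The fstab entries in step 4 follow one pattern, so they can be generated and reviewed before editing /etc/fstab. The EFS ID and install path below are placeholders:

```shell
# Emit the /etc/fstab lines for the five shared EFS directories. Review the
# output, then append it to /etc/fstab on each instance.
phantom_fstab_entries() {
  local efs_id="$1" home="$2"
  printf '%s\n' \
    "$efs_id:/apps $home/apps nfs4 defaults,_netdev 0 0" \
    "$efs_id:/app_states $home/local_data/app_states nfs4 defaults,_netdev 0 0" \
    "$efs_id:/scm $home/scm nfs4 defaults,_netdev 0 0" \
    "$efs_id:/shared $home/tmp/shared nfs4 defaults,_netdev 0 0" \
    "$efs_id:/vault $home/vault nfs4 defaults,_netdev 0 0"
}
phantom_fstab_entries fs-12345678 /opt/phantom   # placeholder EFS ID and path
```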

Convert an AMI-based Splunk Phantom instance into the Splunk Enterprise instance

A Splunk Phantom cluster requires either a Splunk Enterprise instance or a distributed Splunk Enterprise deployment as its search endpoint. Convert one of your AMI-based Splunk Phantom instances into the required Splunk Enterprise endpoint.

If you already have a Splunk Enterprise deployment that you will use instead, follow the instructions for using an external Splunk Enterprise instance. See Set up Splunk Enterprise.

Convert Splunk Phantom instance into the Splunk Enterprise instance:

  1. Log in to an AMI-based Splunk Phantom instance as the phantom user using SSH.
  2. Elevate to root.
    sudo su -
  3. Run the make_server_node.pyc script with the splunk argument.
    phenv python <PHANTOM_HOME>/bin/make_server_node.pyc splunk

The Splunk Enterprise configuration is written to: <PHANTOM_HOME>/bin/splunk_config.json

Logs are written to: <PHANTOM_HOME>/var/log/phantom/make_server_node/make_server_node_<date and time>.log

Test each Splunk Phantom instance for readiness

Before proceeding, test each instance to make sure it is ready for conversion to a cluster node. Log in to each AMI-based Splunk Phantom instance that will become a cluster node, verify that the EFS file shares are mounted, and fix any errors.

Make sure that each instance has the EFS file shares mounted.

sudo df -T

You must see entries in the table with <EFS ID.dns_name>:/ as the filesystem for the apps, app_states, scm, shared, and vault directories.
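This check can be scripted. The sketch below scans mount-table output for the five shares and reports any that are missing; the directory names assume the mounts shown in the previous section:

```shell
# Report any of the five shared directories that do not appear as an nfs4
# mount in the given `df -T` output. Returns nonzero if any are missing.
phantom_check_mounts() {
  local df_output="$1" dir missing=0
  for dir in apps app_states scm shared vault; do
    if ! printf '%s\n' "$df_output" | grep -q "nfs4.*/$dir"; then
      echo "not mounted: $dir"
      missing=1
    fi
  done
  return "$missing"
}
phantom_check_mounts "$(df -T 2>/dev/null)" || echo "fix missing mounts before converting this node"
```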

Convert the first AMI-Based Splunk Phantom instance into a cluster node

Convert the first instance to a Splunk Phantom cluster node.

Converting a Splunk Phantom installation to a server or cluster node is a one-way operation. It cannot be reverted.

You will need this information readily available:

  • IP or hostname for the RDS Postgres 11.6 or later DB server
  • Password for the postgres user
  • Password for the pgbouncer user
  • IP or hostname of the ELB load balancer
  • Username for SSH
  • Path to the key file for SSH
  • IP or hostname of the Splunk Enterprise instance
  • REST API port for Splunk Enterprise: 5122
  • User name for Splunk Phantom Search: phantomsearch
  • Password for the phantomsearch account
  • User name for Splunk Phantom delete: phantomdelete
  • Password for the phantomdelete account
  • HTTP Event Collector Token
  • HTTP Event Collector port: 5121

The information for the Splunk Enterprise instance can be found in the file <PHANTOM_HOME>/bin/splunk_config.json on your Splunk Enterprise instance.

Make a note of the AWS instance ID of this instance. You will need it later to log in to your Splunk Phantom cluster.

Run make_cluster_node.pyc

  1. SSH to the AMI-based Splunk Phantom instance. Log in with the phantom user account.
  2. Run the make_cluster_node.pyc script with the --record argument.
    phenv python <PHANTOM_HOME>/bin/make_cluster_node.pyc --record

The response file is written to: <PHANTOM_HOME>/bin/response.json

The log is written to: <PHANTOM_HOME>/var/log/phantom/make_cluster_node/make_cluster_node_<date and time>.log

The response file can be used with the make_cluster_node.pyc script on other nodes to automatically provide the information the script needs.

Convert the remaining AMI-based Splunk Phantom instances into cluster nodes

Convert each of the remaining AMI-based Splunk Phantom instances into cluster nodes by running the make_cluster_node.pyc script.

Run make_cluster_node.pyc

  1. SSH to the AMI-based Splunk Phantom instance. Log in with the phantom user account.
  2. Run the make_cluster_node.pyc script with the --responses argument.
    phenv python <PHANTOM_HOME>/bin/make_cluster_node.pyc --responses <PHANTOM_HOME>/bin/response.json

You don't have to use a response file. If you do not supply one, the script prompts you for the information it needs. The response file contains secrets such as usernames and passwords in plain text. Store it in a secure location or delete it after the cluster configuration is complete.
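Because the response file holds plain-text secrets, it is worth tightening its permissions while it is still needed and removing it afterward. A sketch, with an assumed install path:

```shell
# Restrict the recorded response file to its owner while the remaining nodes
# are being converted, then remove it when the cluster is complete.
RESPONSES="${PHANTOM_HOME:-/opt/phantom}/bin/response.json"  # assumed path
if [ -f "$RESPONSES" ]; then
  chmod 600 "$RESPONSES"   # readable by the phantom user only
  # rm -f "$RESPONSES"     # run this once all nodes are converted
fi
```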

Log in to the Splunk Phantom web interface

Connect to the web interface of your newly installed Splunk Phantom cluster.

Use the AWS instance ID of the first Splunk Phantom instance where the make_cluster_node script was run for the cluster's initial password.

  1. Get the public IP address or DNS name for the elastic load balancer from the EC2 Management Console.
  2. Get the full AWS instance ID for the EC2 instance.
  3. Using a browser, go to the public IP address or DNS name for the elastic load balancer.
    1. User name: admin
    2. Password: <Full AWS instance ID>
  4. Change the admin user's password:
    1. From the User Name menu, select Account Settings.
    2. From the second level of the menu bar, select Change Password.
    3. Type the current password.
    4. Type a new password.
    5. Type a new password a second time to confirm.
    6. Click Change Password.
Last modified on 02 September, 2021

This documentation applies to the following versions of Splunk® Phantom: 4.10, 4.10.1, 4.10.2, 4.10.3, 4.10.4, 4.10.6, 4.10.7

