Splunk SOAR (On-premises) upgrade overview and prerequisites
Splunk Phantom and Splunk SOAR (On-premises) releases are numbered as <major>.<minor>.<patch>.<build>.
Examples:
- Splunk Phantom 4.10.7.63984 is major version 4, minor version 10, patch version 7, build number 63984.
- Splunk SOAR (On-premises) 5.3.5.97812 is major version 5, minor version 3, patch version 5, build number 97812.
- Splunk SOAR (On-premises) 6.0.0.114895 is major version 6, minor version 0, patch version 0, build number 114895.
- Splunk SOAR (On-premises) 6.0.1.123902 is major version 6, minor version 0, patch version 1, build number 123902.
- Splunk SOAR (On-premises) 6.0.2.127725 is major version 6, minor version 0, patch version 2, build number 127725.
- Splunk SOAR (On-premises) 6.1.0.112 is major version 6, minor version 1, patch version 0, build number 112.
- Splunk SOAR (On-premises) 6.1.1.211 is major version 6, minor version 1, patch version 1, build number 211.
- Splunk SOAR (On-premises) 6.2.0.355 is major version 6, minor version 2, patch version 0, build number 355.
- Splunk SOAR (On-premises) 6.2.1.305 is major version 6, minor version 2, patch version 1, build number 305.
- Splunk SOAR (On-premises) 6.2.2.134 is major version 6, minor version 2, patch version 2, build number 134.
- Splunk SOAR (On-premises) 6.3.0.179 is major version 6, minor version 3, patch version 0, build number 179.
- Splunk SOAR (On-premises) 6.3.1.171 is major version 6, minor version 3, patch version 1, build number 171.
- Splunk SOAR (On-premises) 6.4.0.90 is major version 6, minor version 4, patch version 0, build number 90.
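As a quick illustration, the four components of a release string can be split apart in the shell. The version used here is just the 6.4.0.90 example from the list above:

```shell
# Split a <major>.<minor>.<patch>.<build> release string into its parts.
# The version value is the 6.4.0.90 example from the list above.
ver="6.4.0.90"
IFS=. read -r major minor patch build <<EOF
$ver
EOF
echo "major=$major minor=$minor patch=$patch build=$build"
# prints: major=6 minor=4 patch=0 build=90
```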
Upgrade overview checklist
Follow these steps to prepare for and then upgrade Splunk SOAR (On-premises):
Step | Tasks | Description |
---|---|---|
1 | Identify your upgrade path. | Identify your currently installed Splunk Phantom or Splunk SOAR (On-premises) release, then plan the path to your destination release. You must follow the path from your currently installed release to the desired destination release. |
2 | Make a full backup of your deployment. | Make a full backup of your deployment before upgrading. See Backup or restore your instance in Administer. For single-instance deployments running as a virtual machine, you can create a snapshot of the virtual machine instead. |
3 | Perform the prerequisites. | See Prerequisites for upgrading Splunk SOAR (On-premises). |
4 | Prepare your system for upgrade. | See Prepare your Splunk SOAR (On-premises) deployment for upgrade. |
5 | Conditional: Convert a privileged deployment to an unprivileged deployment. | See Convert a privileged Splunk SOAR (On-premises) deployment to an unprivileged deployment. |
6 | Upgrade. | See Upgrade Splunk SOAR (On-premises). After all the preparation stages are complete, you can upgrade your instance or cluster. For clustered deployments, upgrade your cluster in a rolling fashion, one node at a time. |
7 | Conditional: Repair indicator hashes for non-Federal Information Processing Standards (FIPS) deployments. | If you are upgrading a non-FIPS instance, you must run repair_520_indicators.sh after running the installation script. The script is located in <$PHANTOM_HOME>/bin/. You can optionally pass the batch size as an argument: repair_520_indicators.sh <batch_size>. The default batch size is 1000. You can restart the script at any time. The script terminates after execution. |
8 | Conditional: Rerun the setup command for ibackup. | See Prepare for a backup in Administer. |
9 | Conditional: Reestablish warm standby. | See Warm standby feature overview. |
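For checklist step 7, a minimal sketch of running the repair script follows. The /opt/phantom install path is an assumption for a default unprivileged deployment, and the existence check keeps the snippet harmless on machines where the script is absent:

```shell
# Step 7 sketch (non-FIPS deployments only): run the indicator repair
# script after the installation script completes. The install path is an
# assumption; adjust PHANTOM_HOME for your deployment.
PHANTOM_HOME=${PHANTOM_HOME:-/opt/phantom}
REPAIR="$PHANTOM_HOME/bin/repair_520_indicators.sh"
if [ -x "$REPAIR" ]; then
  "$REPAIR" 1000   # optional <batch_size> argument; the default is 1000
else
  echo "repair_520_indicators.sh not found at $REPAIR; run this on the SOAR instance"
fi
```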
Important changes between releases
This table lists the versions of Splunk Phantom and Splunk SOAR (On-premises) in which important changes were introduced. Some of these changes may impact your upgrade plans. Review this table carefully before planning your upgrade.
Release | Important changes |
---|---|
4.8.24304 | |
4.9.39220 | |
4.10.x | |
5.0.1 | |
5.2.1 | |
5.3.0 | |
5.3.3 | |
5.3.4 | |
5.3.5 | |
5.3.6 | |
5.5.0 | |
6.0.0 | |
6.0.1 | |
6.0.2 | |
6.1.0 | |
6.1.1 | |
6.2.0 | |
6.2.1 | If you have an external PostgreSQL 11.x database, you must upgrade PostgreSQL to release 15.x before you can upgrade Splunk SOAR (On-premises) to a higher release. |
6.2.2 | |
6.3.0 | |
6.3.1 | |
6.4.0 | |
Prerequisites for upgrading Splunk SOAR (On-premises)
You need the following before beginning your upgrade:
- Logins
  - For unprivileged deployments, the login credentials for the user account that runs Splunk SOAR (On-premises). For new AMI versions of Splunk SOAR (On-premises), the user account is phantom. See What's new in 6.0.0 in Release Notes for important information about the change to the default administrator user account.
  - Your Splunk Phantom Community portal login.
- A minimum of 5GB of space available in the /tmp directory on the instance or cluster node.
- Enough free disk space in <$PHANTOM_HOME>/data/ and its subdirectories to allow for the upgrade of PostgreSQL. See Additional disk space requirements for upgrading PostgreSQL in this topic for more information.
- Make note of the directory where Splunk SOAR (On-premises) is installed:
  - On an unprivileged AMI or virtual machine image deployment: /opt/phantom, also called <$PHANTOM_HOME>.
  - On an unprivileged deployment: the home directory of the user account that will run Splunk SOAR (On-premises), also called <$PHANTOM_HOME>.
- Conditional: If your deployment uses the warm standby feature, turn off warm standby. See Warm standby feature overview.
- Conditional: Turn off scheduled backups. For example, if you scheduled backups with a cron job, deactivate the cron job to turn them off.
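The 5GB /tmp requirement above can be verified up front. This is a minimal sketch using standard POSIX df output, not a Splunk-provided tool:

```shell
# Pre-flight sketch: confirm at least 5GB free in /tmp, per the
# prerequisites above. df -P (portable format) keeps columns predictable.
need_kb=$((5 * 1024 * 1024))                     # 5GB expressed in KB
avail_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge "$need_kb" ]; then
  echo "/tmp free space OK (${avail_kb} KB available)"
else
  echo "/tmp needs more space: ${avail_kb} KB available, ${need_kb} KB required"
fi
```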
Additional disk space requirements for upgrading PostgreSQL
In order to upgrade to Splunk SOAR (On-premises) release 6.2.1 or higher, the disk partition that holds <$PHANTOM_HOME>/data/ and its subdirectories needs to be large enough to hold a copy of both the PostgreSQL 11.x database and the PostgreSQL 15.x database.
- If you are upgrading from Splunk SOAR (On-premises) release 6.2.0, your local PostgreSQL database is already version 15.x. No further action is required.
- If you have mounted the <$PHANTOM_HOME>/data/db/ partition elsewhere, you must make sure that mount is large enough to accommodate the upgrade.
- During the upgrade, your existing Splunk SOAR (On-premises) PostgreSQL 11.x database will be moved to <$PHANTOM_HOME>/data/db/db.old/. This copy is used as part of the migration to copy your existing data into the new database, and as a data integrity precaution.
- After the upgrade, your new PostgreSQL 15.x database will be located in the same location as the previous PostgreSQL 11.x database, <$PHANTOM_HOME>/data/db/.
Do the following steps before upgrading to make sure you have sufficient space for the upgrade:
- Check the size of your current PostgreSQL 11.x database directory.
du -sh <$PHANTOM_HOME>/data/db/
Example output:
[phantom@localhost db]$ du -sh /opt/phantom/data/db/
102G    /opt/phantom/data/db/
- Use the output from the disk usage command to calculate the minimum required disk space, which is 225% (2.25 times) the database size.
<du output> * 2.25 = <minimum required disk space to upgrade PostgreSQL>
Calculation:
102G * 2.25 = 229.5G
- Locate your <$PHANTOM_HOME>/data/db/ directory.
grep "<$PHANTOM_HOME>/data/db/" /proc/mounts
If this command returns nothing, your directory is mounted in the default location <$PHANTOM_HOME>/data/db/.
- If your current <$PHANTOM_HOME>/data/ partition does not have at least as much available space as calculated in step 2, you must increase the size of the <$PHANTOM_HOME>/data/ partition to have at least that much available space.
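The steps above can be combined into one small helper. The function name is hypothetical, not a Splunk tool, and /tmp is used below only so the sketch is runnable; point it at your real <$PHANTOM_HOME>/data/db/ directory in practice:

```shell
# Sketch: check that the partition holding a database directory has at
# least 2.25x the directory's current size free (steps 1, 2, and 4 above).
# check_pg_upgrade_space is a hypothetical helper, not a Splunk tool.
check_pg_upgrade_space() {
  dir=$1
  db_kb=$(du -sk "$dir" 2>/dev/null | awk '{print $1}')
  need_kb=$(awk -v kb="$db_kb" 'BEGIN { printf "%d", kb * 2.25 }')
  avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
  echo "size=${db_kb}KB required=${need_kb}KB available=${avail_kb}KB"
  [ "$avail_kb" -ge "$need_kb" ]
}

# /tmp is only a runnable stand-in; use <$PHANTOM_HOME>/data/db/ in practice.
check_pg_upgrade_space /tmp && echo "enough space to upgrade PostgreSQL" \
  || echo "grow the data partition before upgrading"
```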
Upgrade Splunk SOAR (On-premises)
Prepare your system for upgrade by completing the prerequisites listed in Prepare your Splunk SOAR (On-premises) deployment for upgrade.
This documentation applies to the following versions of Splunk® SOAR (On-premises): 6.4.0