Back up Splunk UBA using the backup script
Perform a full backup of Splunk UBA using the /opt/caspida/bin/utils/uba-backup.sh script. View the command-line options by running the script with the --help option. The following table lists and describes the options that can be used with the script.
| Option | Description |
| --- | --- |
| --archive | Create a single archive containing all of the backup data. The archive is created after the backup is completed and Splunk UBA is restarted. |
| --archive-only | Keep only the backup archive; do not keep the backup folder. |
| --archive-type %FORMAT% | Specify the type of archive you want to create. Install a package called pigz on the management node to use multi-threaded compression when creating the archive: yum -y install pigz |
| --dateformat %FORMAT% | Override the default date/time format for the backup folder name. If this option is not used, the folder name is based on the ISO 8601 format. |
| --folder %FOLDER% | Override the target folder location where the backup is stored. Use this option if you configured a secondary volume for storing backups, such as another 1TB disk on the management node. Don't use NFS because of its performance ramifications. |
| --log-time | Add additional logging for how long each section takes, including all function calls and tasks. Use this option to help troubleshoot issues if your backup takes more than two hours. |
| --no-checksum | Don't create a checksum of the archive. |
| --no-data | Don't back up any data; back up only the Splunk UBA configuration. |
| --no-prestart | Don't start Splunk UBA before the backup begins, because Splunk UBA is already running. Make sure Splunk UBA is up and running before using this option. |
| --no-start | Don't start Splunk UBA after the backup is completed. Use this option to perform additional post-backup actions that require Splunk UBA to be offline. |
| --no-time | Don't log the amount of time taken for each task. |
| --restart-on-fail | Restart Splunk UBA if the backup fails. If Splunk UBA encounters an error during the backup, the script attempts to restart Splunk UBA so the system does not remain offline. |
| --script %FILENAME% | Run the specified script after the backup is completed. Use this with the --no-start option to run the script while Splunk UBA is offline. |
| --skip-freespace | Skip the free space checks. |
| --skip-hdfs-fsck | Skip the HDFS file system consistency check. This is useful in large environments if you want to skip this check due to time constraints. |
| --skip-start-live-ds | Don't start the live data sources. |
| --use-distcp | Perform a parallel backup of Hadoop. If the HDFS export is taking several hours, use this option to perform a parallel backup, which may be faster. |
| --validate | Validate that a backup can be successfully performed, but do not actually perform the backup. |
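As an illustration of the --dateformat default, the backup folder name in the example run below (2020-10-08_07-44-59) corresponds to an ISO 8601-style strftime pattern. The pattern string in this sketch is an assumption inferred from that output, not taken from the script itself:

```python
from datetime import datetime

# Assumed pattern, inferred from the sample backup folder name
# 2020-10-08_07-44-59; the script's actual default string may differ.
DEFAULT_FMT = "%Y-%m-%d_%H-%M-%S"

def backup_folder_name(ts: datetime, fmt: str = DEFAULT_FMT) -> str:
    """Return a backup folder name in the assumed default format."""
    return ts.strftime(fmt)

print(backup_folder_name(datetime(2020, 10, 8, 7, 44, 59)))
# prints 2020-10-08_07-44-59
```

Passing a custom strftime string to --dateformat would change the folder name in the same way a different `fmt` argument changes the result here.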
Below is an example backup.
- Log in to the management node of your Splunk UBA deployment as caspida using SSH.
- Navigate to the /opt/caspida/bin/utils directory.
- Run the backup script. Below is the command and its output:
```
[caspida@uba bin]$ ./uba-backup.sh --no-prestart --archive --archive-type tgz
UBA Backup Script - Version 2.1.5
Backup started at: Thu Oct 8 07:44:59 UTC 2020
Backup running on: uba-backup.splunk.com
Logfile: /var/log/caspida/uba-backup-2020-10-08_07-44-59.log
Script Name: uba-backup.sh
Script SHA: ce57125beb8de883f8a22ed61bdb45f79a15062b74ccb4676ce4dbd1730b6c35 (sha256sum)
Parsing any CLI args
 - Disabling UBA pre-start before backup
 - Enabling archive creation
 - Archive type: tgz
Backup folder: /var/vcap/ubabackup/2020-10-08_07-44-59
UBA version: 5.0.3
Node Count: 1
Checking hypervisor and network configuration
 > Time taken: 4.182 seconds
Testing SSH connectivity to UBA nodes
 > Time taken: 0.323 seconds
Retrieving system information for each UBA node
 > Time taken: 5.289 seconds
Determining IP address of each UBA node
 > Time taken: 0.011 seconds
Not starting UBA (pre-backup), disabled via CLI
Checking available free space
 Space required: 18.26 GB
 Space free: 890.34 GB
 Space requirements for the backup have been met
 > Time taken: 3.816 seconds
Checking HDFS folder structure
 > Time taken: 6.807 seconds
Creating backup folder
 > Time taken: 0.133 seconds
Retrieving list of active datasources
 > Time taken: 0.219 seconds
Determining current counts/stats from PostgreSQL
 > Time taken: 2.787 seconds
Stopping UBA
 > Time taken: 208.426 seconds
Starting UBA (partial)
 > Time taken: 104.810 seconds
Checking that HDFS is not in safe-mode
 > Time taken: 2.256 seconds
Performing fsck of HDFS (this may take a while)
 > Time taken: 2.425 seconds
Beginning parallel tasks (1)
 Creating backup of deployment configuration
 Creating backup of local configurations
 Creating backup of UBA rules
 Creating backup of version information
Waiting for parallel tasks to finish
 > Time taken: 1.013 seconds
Beginning parallel tasks (2)
 Creating backup of Postgres
 Creating backup of Hadoop HDFS
 Logging Redis information
Waiting for parallel tasks to finish
 Stopping UBA
 Creating backup of Timeseries data
 Creating backup of Redis data
Waiting for parallel tasks to finish
 > Time taken: 288.708 seconds
Creating summary of backup
 > Time taken: 0.010 seconds
Beginning parallel tasks (3)
 Starting UBA
 Creating backup archive
Waiting for parallel tasks to finish
 Creating archive checksum file
 > Time taken: 268.030 seconds
Calculating backup space usage
 > Time taken: 2.801 seconds
Backup completed successfully
Time taken: 0 hour(s), 15 minute(s), 2 second(s)
```
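The run above finishes by creating an archive checksum file. If you later want to confirm that the archive was not corrupted in transit, the verification can be sketched as follows. This assumes the checksum file uses the common sha256sum layout of `<hash> <filename>`; confirm the actual layout on your system before relying on it:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file in chunks so large backup archives don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def archive_matches_checksum(archive_path: str, checksum_path: str) -> bool:
    """Compare the archive's digest against a sha256sum-style checksum file (assumed layout)."""
    with open(checksum_path) as f:
        expected = f.read().split()[0]
    return sha256_of(archive_path) == expected
```

The same check can be run from the shell with `sha256sum -c` against the checksum file.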
You can review the log file in /var/log/caspida to verify the backup or troubleshoot any issues.
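When a backup runs longer than expected, the per-section "Time taken" lines in that log show where the time went. Here is a small sketch for summarizing the slowest sections; it assumes the log lines follow the same layout as the sample output above:

```python
import re

# Matches lines such as " > Time taken: 208.426 seconds"
TIME_RE = re.compile(r"Time taken:\s*([\d.]+)\s*seconds")

def slowest_sections(log_text: str, top: int = 3):
    """Pair each 'Time taken' line with the task line before it and rank by duration."""
    durations = []
    task = ""
    for line in log_text.splitlines():
        match = TIME_RE.search(line)
        if match:
            durations.append((float(match.group(1)), task))
        elif line.strip():
            task = line.strip()  # remember the most recent task heading
    return sorted(durations, reverse=True)[:top]
```

On the sample run, this would surface "Stopping UBA" and the parallel backup tasks as the longest sections, which is where options such as --log-time and --use-distcp become relevant.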
This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.0.4, 5.0.5