Back up and restore Splunk UBA using the backup and restore scripts
You can back up and restore any deployment of Splunk UBA using the scripts located in /opt/caspida/bin/utils. Run the scripts from the management node of your Splunk UBA deployment.
- Use uba-backup.sh to back up Splunk UBA. This script stops Splunk UBA, performs the backup, then restarts Splunk UBA.
- Use uba-restore.sh to restore Splunk UBA from a backup. This script stops Splunk UBA, restores the system from the backup, then starts Splunk UBA.
Back up Splunk UBA using the backup script
Perform the following tasks to back up Splunk UBA using the /opt/caspida/bin/utils/uba-backup.sh script. You can use the --help option to view the available options for the script.
[caspida@ubanode1 utils]$ ./uba-backup.sh --help
UBA Backup Script - Version 1.1
Backup started at: Tue Jul 23 21:09:15 UTC 2019
Backup running on: ubanode1.example.domain
Logfile: /var/log/caspida/uba-backup-2019-07-23_21-09-15.log
Parsing any CLI args
Usage: ./uba-backup.sh [options]
  --archive                 (create an archive of the backup folder)
  --archive-type %FORMAT%   (compression for the archive, e.g. tgz tbz2 tar)
  --dateformat %FORMAT%     (override the default date/time format for the folder-name)
  --folder %FOLDER%         (override the target folder for the backup)
  --log-time                (add additional logging for how long each section takes)
  --no-data                 (dont backup the data, only the configuration)
  --no-prestart             (dont start UBA before the backup begins)
  --no-start                (dont start UBA after the backup finishes)
  --restart-on-fail         (restart UBA if the backup fails, for automation tasks)
  --script %FILENAME%       (a script to execute upon successful backup)
  --use-distcp              (perform a parallel backup of Hadoop, faster in some environments)
Note: lbzip2 is found, compression for tbz2 will be multi-threaded
Note: pigz not found, compression for tgz will be single-threaded
Use --folder to specify a destination location for the backup. If no folder is specified, the script creates the backup in /var/vcap/ubabackup by default.
Use the --log-time option to enhance the logging data with the amount of time required to perform certain tasks during the backup.
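For example, the following command writes the backup to a custom destination and adds per-section timing to the log. The /backups/uba path is a hypothetical example only; substitute a folder with enough free space in your environment.

./uba-backup.sh --folder /backups/uba --log-time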
Use --archive to create an archive of your backed up data and --archive-type to specify the archive type. The script checks your system to see if lbzip2 is available for multi-threading .tbz2 archives and if pigz is available for multi-threading .tgz archives. This information is provided at the end of the --help output. The default format of the backup file that is created is an uncompressed .tar file.
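For example, the following command creates a compressed .tbz2 archive of the backup. If lbzip2 is installed, as reported at the end of the --help output, the compression is multi-threaded.

./uba-backup.sh --archive --archive-type tbz2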
Make sure Splunk UBA is running if you use the --no-prestart option.
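The --restart-on-fail and --script options are intended for automation tasks. As a minimal sketch, the following crontab entry for the caspida user runs a weekly backup; the schedule and the notify-backup.sh script are hypothetical examples, not part of Splunk UBA:

# Run a backup at 01:00 every Sunday. If the backup fails, restart UBA;
# if it succeeds, run a (hypothetical) notification script.
0 1 * * 0 /opt/caspida/bin/utils/uba-backup.sh --restart-on-fail --script /home/caspida/notify-backup.sh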
Below is an example backup:
- Log in to the management node of your Splunk UBA deployment as caspida using SSH.
- Navigate to the /opt/caspida/bin/utils folder:
cd /opt/caspida/bin/utils
- Run the backup script. In this example, we are creating a .tar backup file in the default /var/vcap/ubabackup directory:
./uba-backup.sh --no-prestart
Below is a sample output of the command:
[caspida@ubanode1 utils]$ ./uba-backup.sh --no-prestart
UBA Backup Script - Version 1.1
Backup started at: Tue Jul 23 21:13:58 UTC 2019
Backup running on: ubanode1.example.domain
Logfile: /var/log/caspida/uba-backup-2019-07-23_21-13-58.log
Parsing any CLI args
- Disabling UBA pre-start before backup
Node Count: 1
Testing SSH connectivity to UBA node 1 (ubanode1)
Attempting to resolve the IP of UBA node ubanode1
UBA node ubanode1 resolves to 192.168.19.88
Not starting UBA (pre-backup), disabled via CLI
Backup folder: /var/vcap/ubabackup/2019-07-23_21-13-59
Creating backup folder
Changing ownership of the backup folder
WARNING: No datasources were found as active in UBA
Stopping UBA (full)
Starting UBA (partial)
Creating backup of deployment configuration
Creating backup of local configurations
Creating backup of UBA rules
Creating backup of version information
Creating backup of PostgreSQL caspidadb database on UBA node 1 (ubanode1)
Creating backup of PostgreSQL metastore database on UBA node 1 (ubanode1)
Creating backup of Hadoop HDFS (this may take a while)
- Checking status of PID 5160 (2019-07-23_21-18-40)
- Backup job has finished (total size: 393M)
Stopping UBA (full)
Creating backup of timeseries data
Creating backup of Redis database (parallel mode)
- Performing backup of UBA node 1 (ubanode1)
- Waiting for pid 22067 to finish
- Process finished successfully
Creating summary of backup
Starting UBA (full)
Backup completed successfully
Time taken: 0 hour(s), 10 minute(s), 0 second(s)
You can review the log file in /var/log/caspida/uba-backup-<timestamp>.log.
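For example, to confirm the backup completed and check its size on disk, you might run commands like the following, using the log file and backup folder from the sample run above:

grep "Backup completed" /var/log/caspida/uba-backup-2019-07-23_21-13-58.log
du -sh /var/vcap/ubabackup/2019-07-23_21-13-59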
Restore Splunk UBA using the restore script
After you have created a backup, you can restore Splunk UBA from it using the /opt/caspida/bin/utils/uba-restore.sh script. You can use the --help option to view the available options for the script.
[caspida@ubanode1 utils]$ ./uba-restore.sh --help
UBA Backup Script - Version 1.1
Backup started at: Tue Jul 23 21:28:01 UTC 2019
Backup running on: ubanode1.example.domain
Logfile: /var/log/caspida/uba-restore-2019-07-23_21-28-01.log
Parsing any CLI args
Usage: ./uba-restore.sh --folder xxx [--dateformat xxx] [--log-time]
  --dateformat %FORMAT%   (override the default date/time format for logging messages)
  --folder %FOLDER%       (override the source folder for the restore)
  --log-time              (add additional logging for how long each section takes)
Specify the location of your backup using the --folder option when running the restore script.
Below is an example restore:
- Log in to the management node of your Splunk UBA deployment as caspida using SSH.
- Navigate to the /opt/caspida/bin/utils folder:
cd /opt/caspida/bin/utils
- Run the restore script. In this example, we are restoring from a backup in the /var/vcap/ubabackup directory:
./uba-restore.sh --folder /var/vcap/ubabackup/2019-07-23_21-13-59/
Below is a sample output of the command, with some parts truncated for brevity:
[caspida@ubanode1 utils]$ ./uba-restore.sh --folder /var/vcap/ubabackup/2019-07-23_21-13-59/
UBA Backup Script - Version 1.1
Backup started at: Tue Jul 23 21:32:00 UTC 2019
Backup running on: ubanode1.example.domain
Logfile: /var/log/caspida/uba-restore-2019-07-23_21-32-00.log
Parsing any CLI args
- Set restore folder to /var/vcap/ubabackup/2019-07-23_21-13-59/
Node Count: 1
Backup Node Count: 1
Execution Mode: Restore
Testing SSH connectivity to UBA node 1 (ubanode1)
Attempting to resolve the IP of UBA node ubanode1
UBA node ubanode1 resolves to 192.168.19.88
Attempting to retrieve the IP of each node (old)
Stopping UBA (full)
Starting PostgreSQL
Restoring PostgreSQL caspidadb database on UBA node 1 (ubanode1)
Restoring PostgreSQL metastore database on UBA node 1 (ubanode1)
Stopping PostgreSQL
Restoring timeseries data
Backing up existing uba-system-env.sh/uba-tuning.properties
Restoring local configurations
Restoring UBA rules
Restoring uba-system-env.sh/uba-tuning.properties
UBA site-configuration files match, nothing to change
Starting UBA (partial)
Removing existing Hadoop HDFS content
Restoring Hadoop HDFS (this may take a while)
- Checking status of PID 6056 (2019-07-23_21-36-59)
- Restore is still running, please wait
- Folder size: 200.5 K (target: 389.4 M)
- Checking status of PID 6056 (2019-07-23_21-37-21)
- Restore is still running, please wait
- Folder size: 343.1 K (target: 389.4 M)
...
...
- Checking status of PID 6056 (2019-07-23_21-41-41)
- Backup job has finished
Changing ownership of Hadoop HDFS files
Restoring Redis database (parallel mode)
- Performing restore of data from UBA node 1 to UBA node 1 (ubanode1)
- Skipping Redis configuration (not a migration)
- Performing rsync of database from UBA node 1 to UBA node 1
- Waiting for pid 13281 to finish
- Process finished successfully
Configuring containerization
Starting UBA (full)
Testing Impala
Restore completed successfully
Time taken: 0 hour(s), 13 minute(s), 26 second(s)
You can review the log file in /var/log/caspida/uba-restore-<timestamp>.log.
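For example, to confirm the restore completed successfully, you can search the log from the sample run above:

grep "Restore completed successfully" /var/log/caspida/uba-restore-2019-07-23_21-32-00.log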