Splunk® User Behavior Analytics

Administer Splunk User Behavior Analytics

Restore Splunk UBA from incremental backups

To restore Splunk UBA from online incremental backup files, at least one base backup directory containing a full backup must exist.

This example shows how to restore from a base directory 1000123 with all of the incremental directories 0000124, 0000125, and 0000126.
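The restore order can be sketched as follows. This is a hypothetical illustration using temporary directories: the directory names come from this example, and the convention that the base backup name starts with "1" while incremental names start with "0" is an assumption drawn from those names.

```shell
# Hypothetical sketch: derive the restore order from backup directory names.
# Assumption (from this example): the base backup directory name starts
# with "1" and incremental directory names start with "0".
BACKUP_HOME=$(mktemp -d)            # stand-in for your real backup directory
mkdir "$BACKUP_HOME"/1000123 "$BACKUP_HOME"/0000124 \
      "$BACKUP_HOME"/0000125 "$BACKUP_HOME"/0000126
base=$(ls -d "$BACKUP_HOME"/1*)     # the full (base) backup
echo "base:        $(basename "$base")"
for dir in $(ls -d "$BACKUP_HOME"/0* | sort); do
  echo "incremental: $(basename "$dir")"   # restore these in ascending order
done
```

The base directory is restored first, then each incremental directory in ascending numeric order.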

  1. Prepare the server for the restore operation. If there is any existing data, clean it up first.
  2. Stop all services:
    /opt/caspida/bin/Caspida stop-all
  3. Restore Postgres.
    1. As a root user on the Postgres node (node 2 in 20-node deployments, node 1 in all other deployments), clean any existing data. On RHEL or OEL systems, use the following command:
      sudo rm -rf /var/lib/pgsql/15/data/*

      On Ubuntu systems, use the following command:

      sudo rm -rf /var/lib/postgresql/15/main/*
    2. Copy all content under <base directory>/postgres/base to the Postgres node. For example, if you are copying from a different server on RHEL or OEL systems, use the following command:
      sudo scp -r 

      On Ubuntu systems, use the following command:

      sudo scp -r 
    3. As a root user, create the recovery.signal file in the Postgres data directory. On RHEL or OEL systems, use the following command:
      sudo touch /var/lib/pgsql/15/data/recovery.signal

      On Ubuntu systems, use the following command:

      sudo touch /var/lib/postgresql/15/main/recovery.signal
    4. As a root user, remove unnecessary WAL files. On RHEL or OEL systems, use the following command:
      sudo rm -rf /var/lib/pgsql/15/data/pg_wal/*

      On Ubuntu systems, use the following command:

      sudo rm -rf /var/lib/postgresql/15/main/pg_wal/*

      Make sure the system has access to the Postgres WAL archive directory. Modify the /var/lib/pgsql/15/data/postgresql.conf (on RHEL or OEL systems) or /etc/postgresql/15/main/postgresql.conf (on Ubuntu systems) file. Add the following properties:

      restore_command = 'cp <WAL directory>/%f "%p"'
      recovery_target_time = '<recovery timestamp>'
      recovery_target_action = 'promote'

      Where <WAL directory> is the directory with all Postgres WAL files, and <recovery timestamp> is the timestamp in backup file <BACKUP_HOME>/0000126/postgres/recovery_target_time.
      For example, the postgresql.conf file looks like this:

      restore_command = 'cp /backup/wal_archive/%f "%p"'
      recovery_target_time = '2019-09-16 12:36:03'
      recovery_target_action = 'promote'
    5. Change ownership of the backup files. On RHEL or OEL systems, use the following command:
      sudo chown -R postgres:postgres /var/lib/pgsql/15/data

      On Ubuntu systems, use the following command:

      sudo chown -R postgres:postgres /var/lib/postgresql/15/main
    6. As the caspida user, restart the Postgres service by running the following commands on the management node:
      /opt/caspida/bin/Caspida stop-postgres
      /opt/caspida/bin/Caspida start-postgres
      Monitor the Postgres logs under /var/log/postgresql, which show the recovery progress.
    7. As the caspida user, wait for the recovery to complete. When the recovery completes, query Postgres to verify that the data was recovered. For example, run the following command from the Postgres CLI:
      psql -d caspidadb -c 'SELECT * FROM dbinfo'
  4. Restore Redis. Redis backups are full backups, even for incremental Splunk UBA backups. You can restore Redis from any backup directory, such as the most recent incremental backup directory. In our example, we restore Redis from the 0000126 incremental backup directory. The Redis backup file name ends with the node number. Be sure to restore each backup file on the corresponding node. For example, in a 5-node cluster, the Redis files must be restored on nodes 4 and 5. Assuming the backup files are on node 1, run the following command on node 4 to restore Redis:
    sudo scp caspida@node1:<BACKUP_HOME>/0000126/redis/redis-server.rdb.4 /var/vcap/store/redis/redis-server.rdb

    Similarly, run the following command on node 5:

    sudo scp caspida@node1:<BACKUP_HOME>/0000126/redis/redis-server.rdb.5 /var/vcap/store/redis/redis-server.rdb
    View your /opt/caspida/conf/deployment/caspida-deployment.conf file to see which nodes Redis runs on in your deployment.
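The suffix-to-node mapping can be sketched as follows. This is a hypothetical illustration using a temporary directory in place of <BACKUP_HOME>/0000126/redis; the convention that the file suffix names the target node is taken from the file names above.

```shell
# Hypothetical sketch: map each Redis backup file to its target node by
# the numeric suffix on the file name (redis-server.rdb.<node>).
backup=$(mktemp -d)                 # stand-in for <BACKUP_HOME>/0000126/redis
touch "$backup"/redis-server.rdb.4 "$backup"/redis-server.rdb.5
for f in "$backup"/redis-server.rdb.*; do
  node="${f##*.}"                   # extract the node number suffix
  echo "node$node: restore $(basename "$f") to /var/vcap/store/redis/redis-server.rdb"
done
```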
  5. Restore InfluxDB. Similar to Redis, InfluxDB backups are full backups. You can restore InfluxDB from the most recent backup directory. In this example, InfluxDB is restored from the 0000126 incremental backup directory. On the management node, which hosts InfluxDB, start InfluxDB, clean it up, and restore from backup files:
    sudo service influxdb start
    influx bucket delete --configs-path /etc/influxdb/configs --name caspida/default 
    influx bucket delete --configs-path /etc/influxdb/configs --name caspida/longTermRP
    influx bucket delete --configs-path /etc/influxdb/configs --name ubaMonitor/ubaMonitorRP
    influx restore --configs-path /etc/influxdb/configs <BACKUP_HOME>/0000126/influx
    /opt/caspida/bin/CaspidaCleanup influx-auth
  6. Restore HDFS. To restore HDFS, restore the base backup first, then the incremental data in sequential order. In our example, restore from 1000123 first, then 0000124, 0000125, and 0000126.
    1. Start the necessary services. On the management node, run the following command:
      /opt/caspida/bin/Caspida start-all --no-caspida
    2. Restore HDFS from the base backup directory and also restore the incremental backup directories:
      nohup bash -c 'export BACKUPHOME=/backup; hadoop fs -copyFromLocal -f $(ls ${BACKUPHOME}/caspida/1*/hdfs/caspida -d) /user && for dir in $(ls ${BACKUPHOME}/caspida/0*/hdfs/caspida -d); do hadoop fs -copyFromLocal -f ${dir} /user || exit 1; done; echo Done' &

      Replace /backup with the value of BACKUP_HOME if you configured a different directory for your backups. Restoring HDFS can take a long time. Check the process ID to see if the restore has completed. For example, if the PID is 111222, check by using the following command:

      ps 111222
      You can also check the nohup.out file and look for "Done" at the end of the file.
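A small wait-and-verify loop can be sketched like this. It is a hypothetical illustration: the backgrounded `sleep 2` stands in for the long-running restore command, while the `nohup.out` file and the "Done" marker match the restore command above.

```shell
# Hypothetical sketch: wait for a backgrounded restore to finish, then
# confirm the "Done" marker in nohup.out. "sleep 2" simulates the restore.
nohup bash -c 'sleep 2; echo Done' > nohup.out 2>&1 &
restore_pid=$!
while ps -p "$restore_pid" > /dev/null; do
  sleep 1                           # still running; poll again
done
tail -1 nohup.out                   # "Done" means the copy finished
```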
    3. Change owner in HDFS:
      sudo -u hdfs hdfs dfs -chown -R impala:caspida /user/caspida/analytics
      sudo -u hdfs hdfs dfs -chown -R mapred:hadoop /user/history
      sudo -u hdfs hdfs dfs -chown -R impala:impala /user/hive
      sudo -u hdfs hdfs dfs -chown -R yarn:yarn /user/yarn
    4. If the server you are restoring to is different from the one where the backup was taken, run the following commands to update the metadata:
      hive --service metatool -updateLocation hdfs://<RESTORE_HOST>:8020 hdfs://<BACKUP_HOST>:8020
      impala-shell -q "INVALIDATE METADATA"
      The restore host is the node1 entry in the deployment file.
  7. Restore your rules and customized configurations from the latest backup directory:
    1. Restore the configurations:
      cp -pr <BACKUP_HOME>/0000126/conf/* /etc/caspida/local/conf/
    2. Restore the rules:
      rm -Rf /opt/caspida/conf/rules/*
      cp -prf <BACKUP_HOME>/0000126/rule/* /opt/caspida/conf/rules/
  8. Start the server:
    /opt/caspida/bin/Caspida sync-cluster /etc/caspida/local/conf
    /opt/caspida/bin/CaspidaCleanup container-grouping
    /opt/caspida/bin/Caspida start
    Check the Splunk UBA web UI to make sure the server is operational.
  9. If the backup and restore servers are different, perform the following tasks:
    1. Update the data source metadata:
      curl -X PUT -Ssk -v -H "Authorization: Bearer $(grep '^\s*jobmanager.restServer.auth.user.token=' /opt/caspida/conf/uba-default.properties | cut -d'=' -f2)" https://localhost:9002/datasources/moveDS?name=<DS_NAME>
      Replace <DS_NAME> with the data source name displayed in Splunk UBA.
    2. Trigger a one-time sync with Splunk ES. If your Splunk ES host did not change, run the following command:
      curl -X POST 'https://localhost:9002/jobs/trigger?name=EntityScoreUpdateExecutor' -H "Authorization: Bearer $(grep '^\s*jobmanager.restServer.auth.user.token=' /opt/caspida/conf/uba-default.properties | cut -d'=' -f2)" -H 'Content-Type: application/json' -d '{"schedule": false}' -k
      If you are pointing to a different Splunk ES host, edit the host in Splunk UBA to automatically trigger a one-time sync.
Last modified on 08 April, 2024

This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.4.0
