
Restore Splunk UBA from incremental backups

To restore Splunk UBA from online incremental backup files, at least one base backup directory containing a full backup must exist.

This example shows how to restore from the base directory 1000123 and the incremental directories 0000124, 0000125, and 0000126.

  1. Prepare the server for the restore operation. If there is any existing data, run:
    /opt/caspida/bin/CaspidaCleanup
  2. Stop all services:
    /opt/caspida/bin/Caspida stop-all
  3. Restore Postgres.
    1. On the Postgres node (node 2 in 20-node deployments, node 1 in all other deployments), clean any existing data. On RHEL or OEL systems, use the following command:
      sudo rm -rf /var/lib/pgsql/10/data/*

      On Ubuntu systems, use the following command:

      sudo rm -rf /var/lib/postgresql/10/main/*
    2. Copy all content under <base directory>/postgres/base to the Postgres node. For example, if you are copying from a different server on RHEL or OEL systems, use the following command:
      sudo scp -r caspida@ubap1:<BACKUP_HOME>/1000123/postgres/base/* /var/lib/pgsql/10/data

      On Ubuntu systems, use the following command:

      sudo scp -r caspida@ubap1:<BACKUP_HOME>/1000123/postgres/base/* /var/lib/postgresql/10/main
    3. As a root user, remove unnecessary WAL files. On RHEL or OEL systems, use the following command:
      sudo rm -rf /var/lib/pgsql/10/data/pg_wal/*

      On Ubuntu systems, use the following command:

      sudo rm -rf /var/lib/postgresql/10/main/pg_wal/*

      Make sure the system has access to the Postgres WAL archive directory. Modify the /var/lib/pgsql/10/data/recovery.conf (on RHEL or OEL systems) or /var/lib/postgresql/10/main/recovery.conf (on Ubuntu systems) file. Remove all contents in the file, and add the following properties:

      restore_command = 'cp <WAL directory>/%f "%p"'
      recovery_target_time = '<recovery timestamp>'
      recovery_target_action = 'promote'
      

      Where <WAL directory> is the directory containing all of the Postgres WAL files, and <recovery timestamp> is the timestamp in the backup file <BACKUP_HOME>/0000126/postgres/recovery_target_time.
      For example, the recovery.conf file looks like this:

      restore_command = 'cp /backup/wal_archive/%f "%p"'
      recovery_target_time = '2019-09-16 12:36:03'
      recovery_target_action = 'promote'
      
    4. Change ownership of the backup files. On RHEL or OEL systems, use the following command:
      sudo chown -R postgres:postgres /var/lib/pgsql/10/data

      On Ubuntu systems, use the following command:

      sudo chown -R postgres:postgres /var/lib/postgresql/10/main
    5. Start the Postgres services. Run the following command on the management node:
      /opt/caspida/bin/Caspida start-postgres
      Monitor the Postgres logs under /var/log/postgresql, which show the progress of the recovery.
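      For example, to follow the recovery progress, you can tail the logs in that directory. This is a minimal sketch; the exact log file names depend on your system:

      tail -f /var/log/postgresql/*.log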
    6. As a root user, verify that Postgres is restored. Check the /var/lib/pgsql/10/data (on RHEL or OEL systems) or /var/lib/postgresql/10/main (on Ubuntu systems) directory and verify that the recovery.conf file has been renamed to recovery.done.
    7. Once the recovery completes, query Postgres as the caspida user to verify that the data is recovered. For example, run the following command from the Postgres CLI:
      psql -d caspidadb -c 'SELECT * FROM dbinfo'
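
      As an optional additional check, the standard Postgres function pg_is_in_recovery() returns f once the server has been promoted out of recovery:

      psql -d caspidadb -c 'SELECT pg_is_in_recovery()'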
  4. Restore Redis. Redis backups are full backups, even for incremental Splunk UBA backups, so you can restore Redis from any backup directory, such as the most recent incremental backup directory. In this example, Redis is restored from the 0000126 incremental backup directory. The Redis backup file name ends with the node number. Be sure to restore each backup file on the corresponding node. For example, in a 5-node cluster, the Redis files must be restored on nodes 4 and 5. Assuming the backup files are on node 1, run the following command on node 4 to restore Redis:
    sudo scp caspida@node1:<BACKUP_HOME>/0000126/redis/redis-server.rdb.4 /var/vcap/store/redis/redis-server.rdb
    

    Similarly, run the following command on node 5:

    sudo scp caspida@node1:<BACKUP_HOME>/0000126/redis/redis-server.rdb.5 /var/vcap/store/redis/redis-server.rdb
    
    View your /etc/caspida/local/conf/caspida-deployment.conf file to see which nodes Redis runs on in your deployment.
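
    For example, a quick way to check which nodes host Redis, assuming the Redis roles are listed by name in that file, is:

    grep -i redis /etc/caspida/local/conf/caspida-deployment.conf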
  5. Restore InfluxDB. Similar to Redis, InfluxDB backups are full backups, so you can restore InfluxDB from the most recent backup directory. In this example, InfluxDB is restored from the 0000126 incremental backup directory. On the management node, which hosts InfluxDB, start InfluxDB, drop the existing databases, and restore from the backup files:
    sudo service influxdb start
    influx -execute "DROP DATABASE caspida"
    influx -execute "DROP DATABASE ubaMonitor"
    influxd restore -portable <BACKUP_HOME>/0000126/influx
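
    Optionally, verify that the databases were restored. This is a minimal check using standard InfluxQL:

    influx -execute "SHOW DATABASES"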
    
  6. Restore HDFS. To restore HDFS, first restore the base backup, and then restore the incremental backups in sequential order. In this example, restore from 1000123 first, then from 0000124, 0000125, and 0000126.
    1. Start the necessary services. On the management node, run the following command:
      /opt/caspida/bin/Caspida start-all --no-caspida
    2. Restore HDFS from the base backup directory and also restore the incremental backup directories:
      nohup bash -c 'export BACKUPHOME=/backup; hadoop fs -copyFromLocal -f $(ls ${BACKUPHOME}/caspida/1*/hdfs/caspida -d) /user && for dir in $(ls ${BACKUPHOME}/caspida/0*/hdfs/caspida -d); do hadoop fs -copyFromLocal -f ${dir} /user || exit 1; done; echo Done' &
      

      Replace /backup with the value of BACKUP_HOME if you configured a different directory for your backups. Restoring HDFS can take a long time. Check the process ID to see if the restore is complete. For example, if the PID is 111222, check it by using the following command:

      ps 111222
      You can also check the nohup.out file and look for "Done" at the end of the file.
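
      If you prefer to wait rather than re-checking manually, a minimal sketch using the example PID 111222 from above is:

      while ps -p 111222 > /dev/null; do sleep 60; done
      tail -n 1 nohup.out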
    3. Change owner in HDFS:
      sudo -u hdfs hdfs dfs -chown -R impala:caspida /user/caspida/analytics
      sudo -u hdfs hdfs dfs -chown -R mapred:hadoop /user/history
      sudo -u hdfs hdfs dfs -chown -R impala:impala /user/hive
      sudo -u hdfs hdfs dfs -chown -R yarn:yarn /user/yarn
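
      Optionally, verify the new ownership with a standard HDFS listing:

      sudo -u hdfs hdfs dfs -ls /user /user/caspida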
      
    4. If the server you are restoring to is different from the one where the backup was taken, run the following commands to update the metadata:
      hive --service metatool -updateLocation hdfs://<RESTORE_HOST>:8020 hdfs://<BACKUP_HOST>:8020
      impala-shell -q "INVALIDATE METADATA"
      
      Note that the host is the node1 entry in the deployment file.
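
      For example, with hypothetical host names, where uba-new is the restore host and uba-old is the host where the backup was taken, the commands look like this:

      # uba-new and uba-old are hypothetical host names; substitute your own
      hive --service metatool -updateLocation hdfs://uba-new:8020 hdfs://uba-old:8020
      impala-shell -q "INVALIDATE METADATA"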
  7. Restore your rules and customized configurations from the latest backup directory:
    1. Restore the configurations:
      cp -pr <BACKUP_HOME>/0000126/conf/* /etc/caspida/local/conf/
    2. Restore the rules:
      rm -Rf /opt/caspida/conf/rules/*
      cp -prf <BACKUP_HOME>/0000126/rule/* /opt/caspida/conf/rules/
      
  8. Start the server:
    /opt/caspida/bin/Caspida sync-cluster /etc/caspida/local/conf
    /opt/caspida/bin/CaspidaCleanup container-grouping
    /opt/caspida/bin/Caspida start
    
    Check the Splunk UBA web UI to make sure the server is operational.
  9. If the backup and restore servers are different, perform the following tasks:
    1. Update the data source metadata:
      curl -X PUT -Ssk -v -H "Authorization: Bearer $(grep '^\s*jobmanager.restServer.auth.user.token=' /opt/caspida/conf/uba-default.properties | cut -d'=' -f2)" https://localhost:9002/datasources/moveDS?name=<DS_NAME>
      
      Replace <DS_NAME> with the data source name displayed in Splunk UBA.
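
      For example, if the data source is named Splunk-ES-Data, which is a hypothetical name used only for illustration, the call looks like this:

      # Splunk-ES-Data is a hypothetical data source name; use the name displayed in Splunk UBA
      curl -X PUT -Ssk -v -H "Authorization: Bearer $(grep '^\s*jobmanager.restServer.auth.user.token=' /opt/caspida/conf/uba-default.properties | cut -d'=' -f2)" "https://localhost:9002/datasources/moveDS?name=Splunk-ES-Data"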
    2. Trigger a one-time sync with Splunk ES. If your Splunk ES host did not change, run the following command:
      curl -X POST 'https://localhost:9002/jobs/trigger?name=EntityScoreUpdateExecutor' -H "Authorization: Bearer $(grep '^\s*jobmanager.restServer.auth.user.token=' /opt/caspida/conf/uba-default.properties | cut -d'=' -f2)" -H 'Content-Type: application/json' -d '{"schedule": false}' -k
      
      If you are pointing to a different Splunk ES host, edit the host in Splunk UBA to automatically trigger a one-time sync.