Splunk® User Behavior Analytics

Administer Splunk User Behavior Analytics

This documentation does not apply to the most recent version of Splunk® User Behavior Analytics. For documentation on the most recent version, go to the latest release.

Perform periodic cleanup of the backup files

Periodically clean up the backup files on your system so that you don't run out of disk space. Perform this cleanup at least once a month.

Clean up older backup files in the delete directory

Completed full backups are saved in the caspida directory. When a new full backup completes, all existing backups in the caspida directory are moved to the delete directory. You can safely remove all content in the delete directory to minimize the number of files retained on the system while still preserving the ability to recover to the latest checkpoint.

In the following example, it is safe to remove all backup directories 0000021 through 0000038 in /backup/delete/, while keeping 1000039 through 0000045 in /backup/caspida/. The 1000039 directory contains a full backup, while the directories starting with zero contain incremental backups.

caspida@node1:~$ ls -t /backup/caspida/ /backup/delete/
/backup/caspida/:
0000045  0000044  0000043  0000042  0000041  0000040  1000039
 
/backup/delete/:
0000038  0000036  1000034  0000032  0000030  0000028  0000026  0000024  0000022  1000020
0000037  0000035  0000033  0000031  0000029  0000027  0000025  0000023  0000021
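The cleanup itself is a single remove of everything under the delete directory. The following is a minimal runnable sketch that uses a temporary mock of the layout above instead of the real /backup/ path, so you can see the effect safely before running it on a live system:

```shell
# Mock of the backup layout; on a real system the root is /backup/.
BACKUP_ROOT="$(mktemp -d)"
mkdir -p "$BACKUP_ROOT/delete/1000020" "$BACKUP_ROOT/delete/0000021" \
         "$BACKUP_ROOT/caspida/1000039" "$BACKUP_ROOT/caspida/0000045"

# Remove everything under delete/ while leaving caspida/ untouched.
rm -rf "$BACKUP_ROOT"/delete/*

ls "$BACKUP_ROOT/delete"    # now empty
ls "$BACKUP_ROOT/caspida"   # full and incremental backups still intact
```

On a live deployment the equivalent command, run as the caspida user, would be `rm -rf /backup/delete/*`.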

Perform additional cleanup of the WAL files

The /backup/wal_archive directory contains the Postgres write-ahead logging (WAL) files used to recover Splunk UBA to a specific point in time from an incremental backup. Periodically clean up this directory as well to avoid filling up the space on your system.

The following example safely removes all the WAL archive segments that are older than the most recent WAL file segment found in the Splunk UBA full backup 1000039. Perform the following steps as the caspida user on node 2 if you have a 20-node deployment, or on the management node in all other deployments:

  1. On the Splunk UBA management node, view the archive segments:
    caspida@node1:~$ ls /backup/caspida/1000039/postgres/base/pg_wal
    0000000100000000000000AC  archive_status
    
  2. On the Postgres node in your Splunk UBA deployment, verify that there is a backup history file in /backup/wal_archive. The Postgres node is on node 2 in 20-node deployments, and node 1 in all other deployments:
    caspida@node1:~$ cd /backup/wal_archive
    caspida@node1:/backup/wal_archive$ ls 0000000100000000000000AC*.backup
    0000000100000000000000AC.000559A8.backup
    
  3. On the Postgres node in your Splunk UBA deployment, use the pg_archivecleanup command to remove all unneeded WAL archive segments older than this point. The Postgres node is on node 2 in 20-node deployments, and node 1 in all other deployments.
    If you are using Ubuntu:
    /usr/bin/pg_archivecleanup -d /backup/wal_archive/ 0000000100000000000000AC.000559A8.backup

    If you are using RHEL or CentOS:

    /usr/pgsql-10/bin/pg_archivecleanup -d /backup/wal_archive/ 0000000100000000000000AC.000559A8.backup
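To see what the cleanup logic does, note that WAL segment file names sort lexicographically, so "older than segment X" means "name sorts before X". The following is a runnable sketch of that rule using a temporary mock archive rather than the real /backup/wal_archive/ directory; it is an illustration of the behavior, not a replacement for pg_archivecleanup:

```shell
# Mock WAL archive; on a real system this is /backup/wal_archive/.
WAL_DIR="$(mktemp -d)"
touch "$WAL_DIR/0000000100000000000000AA" \
      "$WAL_DIR/0000000100000000000000AB" \
      "$WAL_DIR/0000000100000000000000AC" \
      "$WAL_DIR/0000000100000000000000AC.000559A8.backup"

# Oldest segment that must be kept, taken from the .backup history file name.
KEEP="0000000100000000000000AC"

# WAL segment names are 24 hex characters; the glob skips the .backup file.
for f in "$WAL_DIR"/????????????????????????; do
    seg="$(basename "$f")"
    if [ "$seg" \< "$KEEP" ]; then
        rm -f "$f"    # delete segments that sort before (are older than) KEEP
    fi
done

ls "$WAL_DIR"   # only AC and the .backup history file remain
```

If you want to preview a real run first, `pg_archivecleanup` accepts a `-n` flag that prints the files it would remove without deleting them.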

Clean up your Postgres logs

Postgres logs can accumulate over time and take up large amounts of space on your system. Run the command for your operating system to delete all Postgres logs older than 14 days.

If you are using Ubuntu, run the following command:

find /var/lib/postgresql/10/main/pg_log -type f -mtime +14 -delete

If you are using RHEL or CentOS, run the following command:

find /var/vcap/store/pgsql/10/data/pg_log -type f -mtime +14 -delete
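If you want to verify what will be deleted before committing, run the same find expression without -delete to list the matching files first. The following runnable sketch demonstrates this with a temporary mock log directory standing in for the platform-specific pg_log path:

```shell
# Mock pg_log directory; substitute the real path for your platform.
PG_LOG="$(mktemp -d)"
touch -d '30 days ago' "$PG_LOG/postgresql-old.log"   # older than 14 days
touch "$PG_LOG/postgresql-today.log"                  # recent, must survive

# Dry run: list files older than 14 days without deleting them.
find "$PG_LOG" -type f -mtime +14

# Actual cleanup, matching the commands above.
find "$PG_LOG" -type f -mtime +14 -delete
```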
Last modified on 12 July, 2022

This documentation applies to the following versions of Splunk® User Behavior Analytics: 5.0.4, 5.0.4.1, 5.0.5, 5.0.5.1
