Perform maintenance on your Splunk UBA clusters using warm standby
When maintenance is required, you can perform it without disrupting replication and with minimal impact to your Splunk UBA environment and users.
Follow these steps on the primary and/or standby systems:
- Check the replication table and logs to verify that replication is active and the cycle IDs match:
- Postgres node:
psql -d caspidadb -c 'select * from replication'
In 20-node clusters, the Postgres services run on node 2 instead of node 1.
- Splunk UBA management node:
tail -f /var/log/caspida/replication/replication.log
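If you prefer to run both checks in one pass from the Splunk UBA management node, a minimal sketch such as the following can help. It assumes passwordless SSH from the management node to the Postgres node, and the hostname uba-postgres-node is a placeholder; adjust both to match your deployment.
# Sketch only: run the replication checks from the management node.
# "uba-postgres-node" is a placeholder hostname, not a real default.
ssh caspida@uba-postgres-node "psql -d caspidadb -c 'select * from replication'"
# Show the most recent log entries instead of following the file interactively.
tail -n 50 /var/log/caspida/replication/replication.log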
- Stop all Splunk UBA services on the management node of the primary or standby system where you are performing maintenance:
/opt/caspida/bin/Caspida stop-all
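Before starting maintenance, you may want to confirm that the stop completed. The following is only a rough sketch: it assumes Caspida processes show /opt/caspida in their command line, and it does not account for dependent services (Hadoop, Kafka, and so on) that run from other paths.
/opt/caspida/bin/Caspida stop-all
# Assumption: Caspida processes are launched from /opt/caspida; dependent services may live elsewhere.
if pgrep -f "/opt/caspida" > /dev/null; then
  echo "Processes under /opt/caspida are still running; investigate before starting maintenance."
else
  echo "No processes under /opt/caspida found; proceed with maintenance."
fi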
- Perform maintenance on the affected node.
Complete this task as soon as possible, ideally in less than four hours.
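Because the guideline above is to finish in under four hours, it can help to record when the window started. This is purely an illustrative sketch using standard shell tools.
# Record the start of the maintenance window, then check elapsed time later.
MAINT_START=$(date +%s)
# ... perform maintenance ...
ELAPSED_MIN=$(( ( $(date +%s) - MAINT_START ) / 60 ))
echo "Elapsed maintenance time: ${ELAPSED_MIN} minutes (guideline: under 240)."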
- Start all Splunk UBA services on the management node of the system where maintenance was performed:
- For primary systems:
/opt/caspida/bin/Caspida start-all
- For standby systems:
/opt/caspida/bin/Caspida start-all --no-caspida
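Both start commands can be wrapped in a single snippet, as in the sketch below. The ROLE variable is a placeholder that you set by hand; nothing here detects the system role automatically, and only the two commands documented above are used.
# Placeholder: set ROLE to "primary" or "standby" manually before running.
ROLE=standby
if [ "$ROLE" = "primary" ]; then
  /opt/caspida/bin/Caspida start-all
else
  # Standby systems use the --no-caspida flag, as documented in the step above.
  /opt/caspida/bin/Caspida start-all --no-caspida
fi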
- Check the replication table and logs again to verify that replication is still active and the cycle IDs match:
- Postgres node:
psql -d caspidadb -c 'select * from replication'
In 20-node clusters, the Postgres services run on node 2 instead of node 1.
- Splunk UBA management node:
tail -f /var/log/caspida/replication/replication.log
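One way to confirm that the cycle IDs match on both sides after the restart is to capture the replication table from each system's Postgres node and compare the output. The hostnames below are placeholders, passwordless SSH is assumed, and timestamp columns will naturally differ, so focus on the cycle ID values rather than an exact match.
# Placeholders: replace with the Postgres nodes of your primary and standby clusters.
PRIMARY_PG=primary-postgres-node
STANDBY_PG=standby-postgres-node
ssh caspida@"$PRIMARY_PG" "psql -d caspidadb -c 'select * from replication'" > /tmp/replication_primary.txt
ssh caspida@"$STANDBY_PG" "psql -d caspidadb -c 'select * from replication'" > /tmp/replication_standby.txt
# Differences, including any cycle ID mismatch, appear in the diff output.
diff /tmp/replication_primary.txt /tmp/replication_standby.txt && echo "Replication output matches on both sides."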