Resync the KV store
When a KV store member fails to apply all of the write operations to its data, the member might become stale. To resolve this issue, you must resynchronize the member.
Before downgrading Splunk Enterprise to version 7.1 or earlier, you must use the REST API to resynchronize the KV store.
Identify a stale KV store member
You can check the status of the KV store using the command line.
- Log into the shell of any KV store member.
- Navigate to the bin subdirectory in the Splunk Enterprise installation directory.
- Type ./splunk show kvstore-status. The command line returns a summary of the KV store member you are logged into, as well as information about every other member in the KV store cluster.
- Look at the replicationStatus field and identify any members that have neither "KV store captain" nor "Non-captain KV store member" as values.
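As a sketch, the stale-member check in the last step can be scripted. The status lines below are hypothetical and simplified; real ./splunk show kvstore-status output reports replicationStatus per member in a longer listing.

```shell
#!/bin/sh
# Hypothetical, simplified status lines: "<member> : <replicationStatus>".
# Real "./splunk show kvstore-status" output is more verbose.
status_output='member1 : KV store captain
member2 : Non-captain KV store member
member3 : Startup'

# A member is stale when its replicationStatus is neither healthy value.
stale=$(printf '%s\n' "$status_output" | awk -F' : ' \
  '$2 != "KV store captain" && $2 != "Non-captain KV store member" { print $1 }')
printf 'stale members: %s\n' "$stale"
```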
Resync stale KV store members
If more than half of the members are stale, you can either recreate the cluster or resync it from one of the members. See Back up KV store for details about restoring from backup.
To resync the cluster from one of the members, use the following procedure. This procedure recreates the KV store cluster: all members of the existing cluster resynchronize their data from the current member (or from the member specified with -source sourceId). You can invoke the resync command only from the node that is operating as search head cluster captain.
- Determine which node is currently the search head cluster captain. Use the CLI command splunk show shcluster-status.
- Log into the shell on the search head cluster captain node.
- Run the command splunk resync kvstore [-source sourceId]. The -source parameter is optional; use it if you want a member other than the search head cluster captain as the source. sourceId refers to the GUID of the search head member that you want to use.
- Enter your admin login credentials.
- Wait for a confirmation message on the command line.
- Use the splunk show kvstore-status command to verify that the cluster is resynced.
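Put together, the captain-side resync might look like the following sketch. These commands require a running Splunk deployment, and the GUID shown is a placeholder, not a real member ID.

```shell
# 1. From any member, find the current search head cluster captain.
splunk show shcluster-status

# 2. On the captain, trigger the cluster-wide resync. Omit -source to
#    resync from the captain itself; the GUID below is a placeholder.
splunk resync kvstore -source 6A3D2F80-1234-5678-9ABC-DEF012345678

# 3. After the confirmation message, verify the cluster state.
splunk show kvstore-status
```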
If fewer than half of the members are stale, resync each member individually.
- Stop the search head that has the stale KV store member.
- Run the command splunk clean kvstore --local.
- Restart the search head. This triggers the initial synchronization from other KV store members.
- Run the command splunk show kvstore-status to verify synchronization.
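The per-member procedure, run on each search head whose KV store member is stale, might look like this sketch (these commands require a running Splunk installation):

```shell
# On the search head with the stale KV store member:
splunk stop

# Remove the local KV store data so the member resyncs from scratch.
splunk clean kvstore --local

# Restarting triggers initial synchronization from the other members.
splunk start

# Verify that the member has caught up.
splunk show kvstore-status
```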
Prevent stale members by increasing operations log size
If you find yourself resyncing the KV store often because members are transitioning to stale mode frequently (daily or maybe even hourly), apps or users are writing a lot of data to the KV store and the operations log is too small. Increasing the size of the operations log (or oplog) might help.
After initial synchronization, noncaptain KV store members no longer read the full collection from the captain. Instead, new entries in the KV store collection are inserted into the operations log, and the members replicate the newly inserted data from there. When the operations log reaches its allocated size (1 GB by default), it wraps around and overwrites the oldest entries. Consider a lookup that is close to the size of the allocation. The KV store rolls the data (overwriting from the beginning of the oplog) only after the majority of the members have read it, for example, three out of five members in a KV store cluster. But once that happens, it rolls, so a minority member (one of the two remaining members in this example) can no longer read the beginning of the oplog. That minority member becomes stale and needs to be resynced, which means reading the entire collection (likely much larger than the operations log).
To decide whether to increase the operations log size, visit the Monitoring Console KV store: Instance dashboard or use the command line as follows:
- Determine which search head cluster member is currently the KV store captain by running splunk show kvstore-status from any cluster member.
- On the KV store captain, run splunk show kvstore-status.
- Compare the oplog start and end timestamps. The start is the oldest change, and the end is the newest one. If the difference is on the order of a minute, you should probably increase the operations log size.
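As a hedged sketch of the comparison in the last step, assume the oplog start and end timestamps have already been converted to Unix epoch seconds (the real kvstore-status output prints human-readable dates, and the values below are hypothetical):

```shell
#!/bin/sh
# Hypothetical epoch timestamps for the oldest and newest oplog entries.
oplog_start=1700000000
oplog_end=1700000050

# The retention window is the span between them, in seconds.
window=$((oplog_end - oplog_start))
echo "oplog window: ${window}s"

# A window on the order of a minute is a sign the oplog is too small.
if [ "$window" -le 120 ]; then
  echo "consider increasing oplogSize"
fi
```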
While keeping your operations log too small has obvious negative effects (like members becoming stale), setting an oplog size much larger than your needs might not be ideal either. The KV store takes the full log size that you allocate right away, regardless of how much data is actually being written to the log. Reading the oplog can take a fair bit of RAM, too, although it is loosely bound. Work with Splunk Support to determine an appropriate operations log size for your KV store use. The operations log is 1 GB by default.
To increase the log size:
- Determine which search head cluster member is currently the KV store captain by running splunk show kvstore-status from any cluster member.
- On the KV store captain, edit the server.conf file, located in $SPLUNK_HOME/etc/system/local/. Increase the oplogSize setting in the [kvstore] stanza. The default value is 1000 (in units of MB).
- Restart the KV store captain.
- For each of the other cluster members:
  - Stop the member.
  - Run splunk clean kvstore --local.
  - Edit the server.conf file, located in $SPLUNK_HOME/etc/system/local/. Increase the oplogSize setting in the [kvstore] stanza. The default value is 1000 (in units of MB).
  - Restart the member.
- Run splunk show kvstore-status to verify synchronization.
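For illustration, the server.conf edit described in the steps above might look like the following. The 3000 MB value is an example only; work with Splunk Support to choose a size appropriate for your deployment.

```ini
# $SPLUNK_HOME/etc/system/local/server.conf
[kvstore]
# Default is 1000 (MB); the value below is an example, not a recommendation.
oplogSize = 3000
```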
This documentation applies to the following versions of Splunk® Enterprise: 7.2.0, 7.2.1, 7.2.2, 7.2.3, 7.2.4, 7.2.5, 7.2.6, 7.2.7, 7.2.8, 7.2.9, 7.2.10, 7.3.0, 7.3.1, 7.3.2, 7.3.3, 7.3.4, 7.3.5, 7.3.6, 7.3.7, 7.3.8, 7.3.9, 8.0.0, 8.0.1, 8.0.2, 8.0.3, 8.0.4, 8.0.5, 8.0.6, 8.0.7, 8.0.8, 8.0.9, 8.0.10, 8.1.0, 8.1.1, 8.1.2, 8.1.3, 8.1.4, 8.1.5, 8.1.6, 8.1.7, 8.1.8, 8.1.9, 8.1.10, 8.1.11, 8.1.12, 8.1.13, 8.1.14, 8.2.0, 8.2.1, 8.2.2, 8.2.3, 8.2.4, 8.2.5, 8.2.6, 8.2.7, 8.2.8, 8.2.9, 8.2.10, 8.2.11, 8.2.12, 9.0.0, 9.0.1, 9.0.2, 9.0.3, 9.0.4, 9.0.5, 9.0.6, 9.0.7, 9.0.8, 9.0.9, 9.0.10, 9.1.0, 9.1.1, 9.1.2, 9.1.3, 9.1.4, 9.1.5, 9.1.6, 9.1.7, 9.2.0, 9.2.1, 9.2.2, 9.2.3, 9.2.4, 9.3.0, 9.3.1, 9.3.2, 9.4.0