Splunk® Enterprise

Managing Indexers and Clusters of Indexers

Set a retirement and archiving policy

Note: Most of this topic is not relevant to SmartStore indexes. See Configure data retention for SmartStore indexes.

Configure data retirement and archiving policy by controlling the size of indexes or the age of data in indexes.

The indexer stores indexed data in directories called buckets. Buckets go through four stages of retirement. When indexed data reaches the final, frozen state, the indexer removes it from the index. You can configure the indexer to archive the data when it freezes, instead of deleting it entirely. See "Archive indexed data" for details.

Bucket stage | Description | Searchable?
Hot | Contains newly indexed data. Open for writing. One or more hot buckets for each index. | Yes
Warm | Data rolled from hot. There are many warm buckets. | Yes
Cold | Data rolled from warm. There are many cold buckets. | Yes
Frozen | Data rolled from cold. The indexer deletes frozen data by default, but you can also archive it. Archived data can later be thawed. | No

You configure the sizes, locations, and ages of indexes and their buckets by editing indexes.conf, as described in "Configure index storage".
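
For example, a minimal indexes.conf stanza for a hypothetical index named my_index might combine the storage paths with the two retirement attributes described below. The values shown here are illustrative defaults; adjust them for your deployment:

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Retirement policy: freeze data at 500,000 MB of total index size or
# approximately 6 years of age, whichever limit is reached first.
maxTotalDataSizeMB = 500000
frozenTimePeriodInSecs = 188697600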

Caution: When you change your data retirement and archiving policy settings, the indexer can delete old data without prompting you.

Set attributes for cold to frozen rolling behavior

The maxTotalDataSizeMB and frozenTimePeriodInSecs attributes in indexes.conf help determine when buckets roll from cold to frozen. These attributes are described in detail below.

Freeze data when an index grows too large

You can use the size of an index to determine when data gets frozen and removed from the index. If an index grows larger than its maximum specified size, the oldest data is rolled to the frozen state.

The default maximum size for an index is 500,000 MB. To change the maximum size, edit the maxTotalDataSizeMB attribute in indexes.conf. For example, to specify the maximum size as 250,000 MB:

[main]
maxTotalDataSizeMB = 250000

Specify the size in megabytes.

Restart the indexer for the new setting to take effect. Depending on how much data there is to process, it can take some time for the indexer to begin to move buckets out of the index to conform to the new policy. You might see high CPU usage during this time.

This setting works with frozenTimePeriodInSecs to determine when data gets frozen: data rolls to frozen when either limit is reached.

If the index reaches maxTotalDataSizeMB before frozenTimePeriodInSecs elapses, data rolls to frozen before the configured time period has passed. If the archiving policy has not been properly configured, this can result in unintended data loss.
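
For example, to cap the main index at 250,000 MB and also freeze events older than 180 days, whichever limit is reached first, you might combine the two attributes in a single stanza. The values here are illustrative, not recommendations:

[main]
maxTotalDataSizeMB = 250000
frozenTimePeriodInSecs = 15552000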

Freeze data when it grows too old

You can use the age of data to determine when a bucket gets rolled to frozen. When the most recent data in a particular bucket reaches the configured age, the entire bucket is rolled.

To specify the age at which data freezes, edit the frozenTimePeriodInSecs attribute in indexes.conf. This attribute specifies the number of seconds that must elapse before data gets frozen. The default value is 188697600 seconds, or approximately 6 years. This example configures the indexer to cull events from its index once they are more than 180 days old (180 days × 86,400 seconds per day = 15,552,000 seconds):

[main]
frozenTimePeriodInSecs = 15552000

Specify the time in seconds.

Depending on how much data there is to process, it can take some time for the indexer to begin to move buckets out of the index to conform to the new policy. You might see high CPU usage during this time.

Archive data

If you want to archive frozen data instead of deleting it entirely, you must tell the indexer to do so, as described in "Archive indexed data". You can create your own archiving script or you can just let the indexer handle the archiving for you. You can later restore ("thaw") the archived data, as described in "Restore archived data".
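
For example, one way to have the indexer handle archiving itself is the coldToFrozenDir attribute, which archives each bucket's data to the specified directory when the bucket freezes instead of deleting it. The destination path below is purely illustrative; see "Archive indexed data" for the full set of options, including coldToFrozenScript for custom archiving scripts:

[main]
# Hypothetical archive location; choose a path with enough free space.
coldToFrozenDir = /opt/splunk/frozen-archive/main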

Other ways that buckets age

There are a number of other conditions that can cause buckets to roll from one stage to another, some of which can also trigger deletion or archiving. These are all configurable, as described in "Configure index storage". For a full understanding of all your options for controlling retirement policy, read that topic and look at the indexes.conf spec file.

For example, the indexer rolls buckets when they reach their maximum size. You can make buckets roll faster by setting a smaller maxDataSize in indexes.conf. Bear in mind, however, that searching many small buckets takes longer than searching fewer large ones, so you might need to experiment to find the right bucket size for your deployment.
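
For example, a hypothetical stanza that caps hot buckets at 250 MB so that they roll to warm sooner. The default setting of auto corresponds to roughly 750 MB:

[main]
# Maximum size of a hot bucket, in MB. Smaller buckets roll more often,
# but many small buckets can slow down searches.
maxDataSize = 250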

Troubleshoot the archive policy

I ran out of disk space so I changed the archive policy, but it's still not working

If you changed your archive policy to be more restrictive because you ran out of disk space, you might notice that events still are not being archived according to the new policy. This is most likely because you must first free up some disk space so the archiving process has room to run. Stop the indexer, clear out ~5GB of disk space, and then start the indexer again. After a while (exactly how long depends on how much data there is to process), you should see INFO entries about BucketMover in splunkd.log, indicating that buckets are being archived.

Last modified on 22 August, 2023


