Archive cold buckets to frozen in Hadoop

Data is aged locally on every indexer. The way you configure your index determines the data size or age at which the data moves to the next state (hot, warm, cold, frozen) and is ultimately deleted.
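
For example, local retention is set per index in indexes.conf. The following stanza is only a sketch: the index name and values are placeholders, and your retention policy will differ.

[<index name>]
# Roll buckets to frozen (archive or delete) once their data is older than 30 days.
frozenTimePeriodInSecs = 2592000
# Also roll the oldest buckets to frozen if the index grows beyond this total size, in MB.
maxTotalDataSizeMB = 500000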

Once you configure an index to archive data, the archiving of indexes runs on a schedule that is determined globally on the Splunk search head.

Because these two processes run independently, the indexer's local bucket aging can get out of sync with the archiving process. As a result, an indexer can delete a bucket before it has been archived.

To prevent buckets from being deleted before they are archived, you can use the coldToFrozen.sh script from the splunk_archiver app on each local indexer. This script shifts the responsibility for deleting buckets from the indexer to Hadoop Data Roll, so use it only for indexes that are being archived.

Consider the coldToFrozen.sh script a fallback, not your primary hook for archiving. It buys you more time to archive a given bucket when your system is receiving data faster than normal or when the archiving storage layer is down. To reduce the risk further, set vix.output.buckets.older.than = <seconds> as low as possible for each archive index, so that buckets are archived as quickly as possible.
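
As an illustration, an archive index stanza in indexes.conf might look like the following. This is only a sketch: the index, provider, and path values are placeholders, and your archive configuration will include its own settings.

[<archive index name>]
vix.provider = <provider name>
vix.output.buckets.from.indexes = <local index name>
# Archive buckets as soon as they are at least one hour old.
vix.output.buckets.older.than = 3600
vix.output.buckets.path = /user/splunk/archive/<archive index name>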

Configure the cold bucket to roll to frozen

Note the following if you are using the coldToFrozen.sh script:

  • The script must be configured in the indexes.conf stanza of each index that is being archived.
  • All search peers of the search head must have the script installed. You can install it on each peer manually, or use the deployer for search head clusters.
  • Remove the script from any index for which you disable archiving. Otherwise, the script continues to run, and because there is no longer an archive to receive the data, the buckets are never deleted and eventually fill your disk space.
  • Do not add this script to any indexers that are not configured to archive data.

For each Splunk index that you archive, use the provided coldToFrozen.sh script, located in $SPLUNK_HOME/etc/apps/splunk_archiver/bin/, to archive your cold data to frozen. This path may vary depending on your configuration. For example:

[<index name>]
coldToFrozenScript = "$SPLUNK_HOME/etc/apps/splunk_archiver/bin/coldToFrozen.sh"
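
When a cold bucket is ready to roll to frozen, splunkd runs the configured script and passes it the path of the bucket directory. The invocation below is illustrative only; the index name and bucket directory name are placeholders.

$SPLUNK_HOME/etc/apps/splunk_archiver/bin/coldToFrozen.sh "$SPLUNK_DB/<index name>/colddb/db_1389230491_1389230488_5"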

