Archive cold buckets to frozen in Hadoop
Data ages locally on every indexer. How you configure your index determines the size or age at which data moves to the next state (hot, warm, cold, frozen) and is ultimately deleted.
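The size and age thresholds that drive this lifecycle are set per index in indexes.conf. A minimal sketch, assuming a hypothetical index named myindex and example values:

```ini
[myindex]
# Roll buckets to frozen (archive or delete) once events are older than ~90 days.
frozenTimePeriodInSecs = 7776000
# Also roll to frozen once the index exceeds this total size, in MB.
maxTotalDataSizeMB = 500000
```

Whichever threshold is reached first triggers the roll to frozen.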
Once you configure an index to archive data, archiving runs on a schedule that is set globally on the Splunk search head. Because the indexer's local aging process and the archiving process run independently, they can get out of step, and an indexer can delete a bucket before it has been archived.
To keep buckets from being deleted before they are archived, use the coldToFrozen.sh script on the local indexer. This script shifts the responsibility for deleting buckets from the indexer to Hadoop Data Roll, so use it only for indexes that are being archived. Treat the coldToFrozen.sh script as a fallback, not as your primary archiving hook. It buys you more time to archive a given bucket when your system is receiving data faster than normal, or when the archive storage layer is down. To reduce the risk further, set vix.output.buckets.older.than (a value in seconds) as low as possible for each archived index, so that buckets are archived as quickly as possible.
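A minimal indexes.conf sketch of this setting, assuming a hypothetical archived index named myindex and an example threshold of one hour:

```ini
[myindex]
# Make buckets eligible for archiving once they are at least an hour old (3600 seconds).
vix.output.buckets.older.than = 3600
```

A lower value narrows the window in which a bucket exists only on the indexer's local disk.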
Configure the cold bucket to roll to frozen
Note the following if you are using the coldToFrozen.sh script:
- The script must be configured in each index stanza that defines an index being archived.
- Every search peer of the search head must have the script installed. You can install it on each peer manually, or use the deployer for search head clusters.
- Remove the script from any index for which you disable archiving. Otherwise, the script continues to run, and because there is no archive to receive the data, buckets are never deleted and will eventually fill your disk.
- Do not add this script to any indexers that are not configured to archive data.
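To confirm that the script is in place on a given indexer before referencing it from an index stanza, a quick check such as the following can help. This is a sketch, not part of the product; the default SPLUNK_HOME of /opt/splunk is an assumption, so adjust it for your install:

```shell
# Default install location is an assumption; override SPLUNK_HOME as needed.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
SCRIPT="$SPLUNK_HOME/etc/apps/splunk_archiver/bin/coldToFrozen.sh"

# Report whether the archiver script is present and executable on this host.
if [ -x "$SCRIPT" ]; then
    echo "found: $SCRIPT"
else
    echo "missing or not executable: $SCRIPT"
fi
```

Run the same check on every search peer that archives data, and on none of the indexers that do not.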
For each Splunk index that you archive, roll your cold data to frozen with the provided coldToFrozen.sh script, located in $SPLUNK_HOME/etc/apps/splunk_archiver/bin/. This path may vary depending on your installation. For example:
[<index name>]
coldToFrozenScript = "$SPLUNK_HOME/etc/apps/splunk_archiver/bin/coldToFrozen.sh"
This documentation applies to the following versions of Splunk® Enterprise: 6.5.0, 6.5.1, 6.5.2, 6.5.3, 6.5.4, 6.5.5, 6.5.6, 6.5.7, 6.5.8, 6.5.9, 6.5.10, 6.6.0, 6.6.1, 6.6.2, 6.6.3, 6.6.4, 6.6.5, 6.6.6, 6.6.7, 6.6.8, 6.6.9, 6.6.10, 6.6.11, 6.6.12, 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.0.5, 7.0.6, 7.0.7, 7.0.8, 7.0.9, 7.0.10, 7.0.11, 7.0.13, 7.1.0, 7.1.1, 7.1.2, 7.1.3, 7.1.4, 7.1.5, 7.1.6, 7.1.7, 7.1.8, 7.1.9, 7.1.10, 7.2.0, 7.2.1, 7.2.2, 7.2.3, 7.2.4, 7.2.5, 7.2.6, 7.2.7, 7.2.8, 7.2.9, 7.2.10, 7.3.0, 7.3.1, 7.3.2, 7.3.3, 7.3.4, 7.3.5, 7.3.6, 7.3.7, 7.3.8, 7.3.9, 8.0.0, 8.0.1, 8.0.2, 8.0.3, 8.0.4, 8.0.5, 8.0.6, 8.0.7, 8.0.8, 8.0.9, 8.0.10, 8.1.0, 8.1.1, 8.1.2, 8.1.3, 8.1.4, 8.1.5, 8.1.6, 8.1.7, 8.1.8, 8.2.0, 8.2.1, 8.2.2, 8.2.3, 8.2.4