Splunk® Enterprise

Managing Indexers and Clusters of Indexers




How Hadoop Data Roll works

Hadoop Data Roll does not work with buckets with journalCompression set to zstd.

After you configure an index for archiving, a number of processes work together to move aged data into the archive:

1. A saved search, | archivebuckets, automatically runs once an hour on the search head. archivebuckets is a custom command packaged with the archiver and implemented as the Python script archivebuckets.py.

2. archivebuckets queries the local REST endpoints to discover which indexes should be archived, and where to archive them. (A simplified sketch of this discovery step appears after this list.)

3. archivebuckets copies the Hadoop Data Roll jars into its own app directory, then launches distributed searches for each provider for the indexes to be archived.

The search used in this step is | copybuckets, a custom command implemented by the Python script copybuckets.py.

4. The information for the index and its provider is fed to the search.

5. For each indexer, Splunk Enterprise copies the knowledge bundle needed to run the search.

6. On the indexer, copybuckets launches a Java process, with the same entry point (the SplunkMR class) used for Splunk Analytics for Hadoop searches.

7. Splunk Enterprise passes the information about providers and indexes to the Java process using stdin.

8. To send events back to Splunk Enterprise, the Java process writes them to its stdout, and the custom search command (copybuckets.py) passes them on to the search process through its own stdout.

9. The Java process logs these actions to the splunk_archiver.log file.

10. The Java process checks all buckets in the designated indexes. If a bucket is ready to be archived, the process uses the provider information to access the archive and determines whether the bucket already exists there.

11. If the bucket has not yet been archived, it is copied to a temporary directory in the archive. Once the bucket is completely copied and a receipt file is added, the bucket is moved to the correct folder in the archive.

12. If the bucket was previously archived, any new data that has reached its archive date is copied into that bucket.

13. Archived buckets are ready to be searched in Splunk Web.
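
The discovery in step 2 can be pictured with a short sketch. The following Python fragment is a simplified, hypothetical illustration of how a script such as archivebuckets.py might query the local splunkd REST API to find indexes configured for archiving. The /services/data/indexes endpoint is the standard index listing endpoint, but the attribute name archiver.enableDataArchive, the session-key handling, and the output format are assumptions made for illustration, not the actual implementation.

# Simplified sketch of the index-discovery step. Not the actual
# archivebuckets.py code: the attribute "archiver.enableDataArchive" and the
# way the session key is obtained are assumptions for illustration only.
import json
import requests

SPLUNKD_URI = "https://localhost:8089"      # local management port
SESSION_KEY = "<session key supplied to the custom command>"

def find_archivable_indexes():
    """Return the names of indexes that are flagged for archiving."""
    response = requests.get(
        SPLUNKD_URI + "/services/data/indexes",
        params={"output_mode": "json", "count": 0},
        headers={"Authorization": "Splunk " + SESSION_KEY},
        verify=False,                        # splunkd often uses a self-signed cert
    )
    response.raise_for_status()
    archivable = []
    for entry in response.json()["entry"]:
        settings = entry["content"]
        # Hypothetical attribute marking an index as an archive target.
        if settings.get("archiver.enableDataArchive") in ("1", "true", True):
            archivable.append(entry["name"])
    return archivable

if __name__ == "__main__":
    print(json.dumps(find_archivable_indexes(), indent=2))

In the real archiver, this discovery also returns the provider configuration for each index, which is what gets passed along to the indexer-side searches in steps 3 and 4.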

About the Hadoop Data Roll processes

Two search commands are defined, each corresponding to a Python script:

archivebuckets -> archivebuckets.py
copybuckets -> copybuckets.py

The implementation of the Hadoop Data Roll process uses the following processes:

Process action | Process name | Notes
Search process on the search head (search scheduler) | archivebuckets | The search activity, including scheduling, that occurs on the search head.
Python process on the search head | archivebuckets.py |
Search process on an indexer | copybuckets <JSON describing indexes> | All of the search activity that occurs on the indexer.
Python process on an indexer | copybuckets.py |
Java Virtual Machine process on an indexer | Hunk Java code | Ties the other processes together, and does the following:

1. Writes files to HDFS

2. Logs information to $SPLUNK_HOME/var/log/splunk/splunk_archiver.log

3. Writes events to stdout, which is piped back to the Splunk search process | copybuckets <JSON describing indexes>

4. Information written to the Splunk search process becomes events returned by the search. You can see these events in Splunk Enterprise with the search command | archivebuckets forcerun=1.

How the processes work together

The Hadoop Data Roll search framework strings these processes together and pipes them as follows:

1. stdout of the Python process on an indexer (copybuckets.py) is piped to stdin of the search process on the indexer (copybuckets <JSON describing indexes>).

2. stdout of the search process on the indexer (| copybuckets <JSON describing indexes>) is piped to stdin of the Python process on the search head (archivebuckets.py).

3. stdout of archivebuckets.py is piped into the search scheduler for the search process on the search head (archivebuckets).

4. The Python process on the indexer (copybuckets.py) pipes the stdout of the Hadoop Data Roll Java code (the Java Virtual Machine process on the indexer) to its own stdout, which goes to the search process on the indexer (copybuckets <JSON describing indexes>).

At the end of this chain, anything that the Hadoop Data Roll Java code (the JVM process on an indexer) writes to stdout becomes events returned by the scheduled search | archivebuckets.
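
The indexer-side piping in this chain can be illustrated with a generic sketch. The following Python fragment is not the shipped copybuckets.py; it only shows the overall pattern: a custom search command reads the index and provider description as JSON from stdin, launches the archiving JVM as a child process, and streams every line the JVM writes on stdout back to the search process through its own stdout. The jar path and package name are placeholders; the doc only tells us the entry point class is SplunkMR.

# Generic sketch of the indexer-side piping described above. The jar path,
# package name, and JSON layout are placeholders, not the real implementation.
import subprocess
import sys

def main():
    # 1. The search process feeds the provider/index description in on stdin.
    provider_and_index_json = sys.stdin.read()

    # 2. Launch the archiving JVM; it reads the same JSON on its stdin.
    #    The classpath and package below are placeholders for illustration.
    java = subprocess.Popen(
        ["java", "-cp", "/path/to/archiver/jars/*", "com.example.SplunkMR"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    java.stdin.write(provider_and_index_json)
    java.stdin.close()

    # 3. Everything the JVM prints becomes output of this command, which the
    #    search process turns into events returned by | archivebuckets.
    for line in java.stdout:
        sys.stdout.write(line)
        sys.stdout.flush()

    sys.exit(java.wait())

if __name__ == "__main__":
    main()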

Finalizing or aborting the archiving process

When Hadoop Data Roll pauses or finalizes a search, this information must be passed to downstream processes.

For example, if the search process on an indexer shuts down, the search could kill the child process, which then prevents the indexer Python process from shutting down gracefully. If the Python process is using a shared resource, such as a database connection or an output stream to HDFS, this could cause failure and possible loss of data.

To resolve this, the search process lets the child process decide what to do if the search process suspends or shuts down. If the search process on an indexer is paused, it stops reading from its pipe to the Python process on the indexer. When this happens, the Python process can no longer write to the pipe once the buffer fills up.

The Python process is able to determine that the search process on an indexer still exists, but is paused.

If the search process on an indexer is stopped or finalized, it shuts down and the pipe to the Python process is broken. This is how the Splunk custom search commands know that the upstream search has stopped running. This occurs whether the search is shut down cleanly due to user action, or shut down abruptly due to an upstream crash.

If the archiving Java process on the indexer finds a broken pipe to the indexer search process, it logs that information but continues archiving until the buffer is full. If this is not desired, kill the Java process.
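
From the child process's point of view, the behavior described in this section can be sketched as follows. This is a generic illustration rather than the shipped scripts: a write that blocks on a full pipe corresponds to a paused upstream search, while a BrokenPipeError on write means the upstream search process has gone away, so the child can release shared resources (such as an HDFS output stream) before exiting.

# Generic illustration of how a downstream process, such as a custom search
# command, can distinguish a paused parent from a stopped one. This sketches
# the mechanism described above; it is not the actual archiver code.
import sys

def emit(line, cleanup):
    """Write one line of output back to the upstream search process."""
    try:
        # If the upstream search is paused, it stops reading its end of the
        # pipe; once the pipe buffer fills up, this write simply blocks until
        # the search resumes.
        sys.stdout.write(line + "\n")
        sys.stdout.flush()
    except BrokenPipeError:
        # The upstream search process has finalized or crashed and its end of
        # the pipe is gone. Release shared resources (for example, an HDFS
        # output stream or a database connection) before exiting.
        cleanup()
        sys.exit(0)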

Last modified on 25 August, 2021