Splunk® Enterprise

Managing Indexers and Clusters of Indexers

Configure Splunk index archiving to Hadoop using the configuration files

Before you begin, note the following:

  • You must configure a Hadoop provider.
  • Splunk Enterprise must be installed under the same user on all indexers and Splunk Enterprise instances. This is the user that connects to HDFS for archiving, so the user and its permissions must be consistent across instances.
  • The data in the referring index must be in warm, cold, or frozen buckets only.
  • The Hadoop client libraries must be in the same location on each indexer. Likewise, the Java Runtime Environment must be installed in the same location on each indexer. See System and software requirements for updated information about the required versions.
  • The Splunk user associated with the Splunk indexer must have permission to write to the HDFS node.
  • Splunk cannot currently archive buckets with raw data larger than 5 GB to S3. You can configure your Splunk Enterprise bucket sizes in indexes.conf, as shown in the sketch after this list. See Archiving Splunk indexes to S3 in this manual for known issues when archiving to S3.
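
As a minimal sketch of keeping bucket sizes below the 5 GB limit, you can set maxDataSize in the stanza for the index you plan to archive. The index name below is hypothetical, and the value shown is only illustrative; choose a size appropriate for your deployment.

[my_s3_bound_index]
# maxDataSize accepts "auto", "auto_high_volume", or a size in MB.
# "auto" keeps hot buckets to roughly 750 MB, well below the 5 GB S3 limit.
maxDataSize = auto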

Set bundle deletion parameters

Use the following attribute to specify how many bundles may accrue before Splunk Enterprise deletes them:

vix.splunk.setup.bundle.reap.limit = 5

The default value is 5, which means that when there are more than five bundles, Splunk Enterprise will delete the oldest one.
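
As a sketch, this attribute is set on the virtual index provider stanza in indexes.conf. The provider name below is hypothetical:

[provider:my_hadoop_provider]
# Keep at most 5 search bundles on HDFS; beyond that, the oldest is deleted.
vix.splunk.setup.bundle.reap.limit = 5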

Configure index archiving in the configuration file

In indexes.conf, configure the following stanza:

[splunk_index_archive]
vix.output.buckets.from.indexes
vix.output.buckets.older.than
vix.output.buckets.path
vix.provider 

Where:

  • vix.output.buckets.from.indexes is the exact name of the Splunk index you want to copy into an archive, for example "splunk_index". You can list multiple Splunk indexes separated by commas.
  • vix.output.buckets.older.than is the age, in seconds, at which bucket data in the Splunk index should be archived. For example, if you specify 432000 seconds (5 days), data is copied into the archive when it is five days old. Note that Splunk Enterprise eventually deletes data based on your index retention settings, so make sure this value is low enough that data is archived before the indexer deletes it.
  • vix.output.buckets.path is the directory in HDFS where the archive bucket should be stored. For example: "/user/root/archive/splunk_index_archive". If you are using S3, you should prefix this value with s3n://<s3-bucket>/ and add the additional attributes from the code example below.
  • vix.provider is the virtual index provider for the new archive.
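
Putting these together, a filled-in archive stanza might look like the following. The index name, age, and path come from the examples above; the provider name is hypothetical.

[splunk_index_archive]
# Archive the "splunk_index" index (multiple indexes can be comma-separated).
vix.output.buckets.from.indexes = splunk_index
# Copy buckets once their data is older than 432000 seconds (5 days).
vix.output.buckets.older.than = 432000
# HDFS directory where the archived buckets are stored.
vix.output.buckets.path = /user/root/archive/splunk_index_archive
# Virtual index provider used for this archive.
vix.provider = my_hadoop_provider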

For S3 directories you must prefix vix.output.buckets.path with s3n://<s3-bucket>/ and then add the following additional attributes to the provider stanza:

vix.fs.s3n.awsAccessKeyId = <your aws access key ID>
vix.fs.s3n.awsSecretAccessKey = <your aws secret access key>
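
For example, an S3-backed archive might combine a prefixed path in the archive stanza with the credential attributes in the provider stanza. The bucket name and provider name below are illustrative, and the key values are placeholders.

[splunk_index_archive]
# Prefix the archive path with the S3 bucket.
vix.output.buckets.path = s3n://my-s3-bucket/archive/splunk_index_archive
vix.provider = my_hadoop_provider

[provider:my_hadoop_provider]
vix.fs.s3n.awsAccessKeyId = <your aws access key ID>
vix.fs.s3n.awsSecretAccessKey = <your aws secret access key>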

Limit the bandwidth used for archiving

You can set bandwidth throttling to limit the transfer rate of your archives.

You set throttling per provider; the limit is then applied across all archives assigned to that provider. To configure throttling, add the following attribute to the virtual index provider stanza you want to throttle:

vix.output.buckets.max.network.bandwidth = <bandwidth in bits/second>
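
For instance, to cap archive transfers for a provider at roughly 10 Mb/s (the provider name is hypothetical):

[provider:my_hadoop_provider]
# Limit archive transfer rate to 10,000,000 bits per second (~10 Mb/s).
vix.output.buckets.max.network.bandwidth = 10000000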

For more information about configuring a provider in indexes.conf, see Add or edit a provider in Splunk Web.
