
Archive Splunk indexes to Hadoop on S3

For best performance and to avoid bucket size limits, use the S3A filesystem introduced in Apache Hadoop 2.6.0. Configuration details may differ across Hadoop distributions.

Configure archiving to Amazon S3

To enable Splunk Enterprise to archive Splunk data to S3:

1. Add the following configuration to your providers:

vix.env.HADOOP_HOME: /absolute/path/to/apache/hadoop-2.6.0
vix.fs.s3a.access.key: <AWS access key>
vix.fs.s3a.secret.key: <AWS secret key>
vix.env.HADOOP_TOOLS: $HADOOP_HOME/share/hadoop/tools/lib
vix.splunk.jars: $HADOOP_TOOLS/hadoop-aws-2.6.0.jar,$HADOOP_TOOLS/aws-java-sdk-1.7.4.jar,$HADOOP_TOOLS/jackson-databind-2.2.3.jar,$HADOOP_TOOLS/jackson-core-2.2.3.jar,$HADOOP_TOOLS/jackson-annotations-2.2.3.jar

2. Using this provider configuration, create an archive index with a path prefixed with s3a:// (a combined indexes.conf sketch follows the list below):

s3a://bucket/path/to/archive

In this example:

  • s3a is the filesystem implementation that Hadoop uses to transfer and read files from the supplied path
  • bucket is the name of your S3 bucket
  • /path/to/archive is the directory path within the bucket
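
Putting steps 1 and 2 together, here is a hedged sketch of how the provider and archive index might look in indexes.conf. The stanza names my-s3-provider and main_archive, the source index main, and the one-day age cutoff are all hypothetical; verify attribute names and values against the indexes.conf.spec for your version.

[provider:my-s3-provider]
# Hypothetical provider stanza; paths and jar versions must match your installation.
vix.family = hadoop
vix.env.HADOOP_HOME = /absolute/path/to/apache/hadoop-2.6.0
vix.fs.s3a.access.key = <AWS access key>
vix.fs.s3a.secret.key = <AWS secret key>
vix.env.HADOOP_TOOLS = $HADOOP_HOME/share/hadoop/tools/lib
vix.splunk.jars = $HADOOP_TOOLS/hadoop-aws-2.6.0.jar,$HADOOP_TOOLS/aws-java-sdk-1.7.4.jar,$HADOOP_TOOLS/jackson-databind-2.2.3.jar,$HADOOP_TOOLS/jackson-core-2.2.3.jar,$HADOOP_TOOLS/jackson-annotations-2.2.3.jar

[main_archive]
# Hypothetical archive index: copy buckets from the "main" index to S3
# once they are older than one day (86400 seconds).
vix.provider = my-s3-provider
vix.output.buckets.from.indexes = main
vix.output.buckets.older.than = 86400
vix.output.buckets.path = s3a://bucket/path/to/archive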

Further configuration for unique setups

Depending on the specifics of your deployment, you might need to configure Splunk Enterprise further to search S3 archives.

If you are using a search head exclusively

If you are using only a search head (with no Hadoop cluster) to search your archives, set the provider's vix.mode attribute to stream:

vix.mode = stream

When vix.mode is set to stream, Splunk Enterprise streams all the data that the search matches to the search head and does not spawn MapReduce jobs on Hadoop.
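
In indexes.conf terms, this is a single attribute on the provider stanza. Continuing the hypothetical provider sketched earlier:

[provider:my-s3-provider]
# Stream matching data directly to the search head; no MapReduce jobs are spawned.
vix.mode = stream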

If you have configured a search head with a Hadoop cluster

If the Hadoop version for search head archive indexes is compatible with your Hadoop cluster, no additional configuration is necessary to search your archive indexes. Just go to the Splunk Web search bar and enter:

index=<your-archive-index-name>

The search head will spawn Hadoop MapReduce jobs against your archive when it's appropriate to do so.
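
For example, with the hypothetical archive index sketched earlier, a time-bounded search such as the following runs against the archive:

index=main_archive earliest=-30d@d latest=now | stats count by sourcetype

Narrowing the time range generally reduces how much of the archive the job must scan.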

If your Hadoop cluster version is not compatible with your Hadoop Home version

You can still use Data Roll if your Hadoop cluster is not compatible with your Hadoop client libraries (the libraries that provide the S3A filesystem), for example if you archive with Apache Hadoop 2.6.0 but your Hadoop cluster runs Hadoop 1.2.0. In that case, use the older S3N filesystem to search your archives.

To configure S3N and an older Hadoop cluster to search your archives:

1. Configure a provider for your Hadoop cluster.

2. For every archive index, edit indexes.conf and add a new virtual index with these properties:

[<virtual_index_name_of_your_choosing>]
vix.output.buckets.path = <archive_index_destination_path_with_s3n_instead_of_s3a>
vix.provider = <hadoop_cluster_provider>

3. Make sure that vix.output.buckets.path uses the s3n:// scheme so that Splunk Enterprise searches your archives with the older filesystem.

For example, given an archive index named "main_archive", a destination path of "s3a://my-bucket/archive_root/main_archive", and a provider named "hadoop_cluster", configure a virtual index like this:

[main_archive_search]
vix.output.buckets.path = s3n://my-bucket/archive_root/main_archive
vix.provider = hadoop_cluster

Known issues with S3

When using Hadoop's S3N filesystem, you are limited to uploading files smaller than 5 GB. Although it is rare, Splunk buckets can grow larger than 5 GB; such buckets do not get archived when you use the S3N filesystem.

If you use the S3N filesystem, configure your indexes to roll buckets from hot to warm at a size smaller than 5 GB via the maxDataSize attribute in indexes.conf.
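
For example, a minimal sketch for an index named main (the 4096 MB value is illustrative; maxDataSize takes a size in megabytes, or the special values auto and auto_high_volume):

[main]
# Roll hot buckets to warm at roughly 4 GB, safely under the 5 GB S3N limit.
maxDataSize = 4096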

Data Roll archiving requires, at a minimum, read-after-write consistency. For the US Standard region, S3 only guarantees this when accessed via the Northern Virginia endpoint. See the Amazon AWS S3 FAQ for more details.

Bucket raw data limit

Because of how Hadoop interacts with the S3 filesystem, Splunk Enterprise cannot currently archive buckets with raw data sets larger than 5 GB to S3.

We recommend that you use an S3 filesystem implementation that supports uploads larger than 5 GB, such as S3A. To ensure that all your data is archived, configure your indexes to roll buckets from hot to warm at a size smaller than 5 GB via the maxDataSize attribute in indexes.conf, as in the example above.

Data copy process

When archiving to S3, data is copied twice. S3 does not support file renaming, so the Hadoop FileSystem implementation performs a rename as follows:

  • Download the file
  • Upload it renamed
  • Delete the original file

This process does not create duplicate data in your archive.

Bandwidth throttling limitations

Splunk Enterprise cannot guarantee that bandwidth throttling is respected when archiving to S3. If throttling is configured, Splunk Enterprise still attempts to honor it where possible.
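
To our knowledge, the relevant throttle is the vix.output.buckets.max.network.bandwidth provider setting; treat the attribute name and its unit as assumptions and verify both against the indexes.conf.spec for your version:

[provider:my-s3-provider]
# Assumed setting and unit (bits per second); confirm in indexes.conf.spec.
# A value of 0 disables throttling.
vix.output.buckets.max.network.bandwidth = 10000000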
