Configure Hunk to read Hadoop Archive (HAR) files
To allow Hunk to read files archived as Hadoop Archive (HAR) files in your Hadoop cluster, add the following stanzas to the indexes.conf file:
[provider:<provider_name>]
vix.env.HADOOP_HOME = <path_to_hadoop>
vix.env.JAVA_HOME = <path_to_java>
vix.family = hadoop
vix.fs.default.name = hdfs://<namenode>:<port>
vix.mapred.job.tracker = <jobtracker>:<port>
vix.splunk.home.hdfs = <path_on_hdfs>

[<vix_name>]
vix.input.1.path = har:///<path_to_archive_file>/<archive_file>.har/...
vix.provider = <provider_name>
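For illustration, a filled-in sketch of these stanzas might look like the following. The provider name, virtual index name, hostnames, ports, and paths here are hypothetical placeholders chosen for this example, not defaults; substitute the values for your own environment.

# Example provider stanza (hypothetical values)
[provider:my_hadoop_provider]
vix.env.HADOOP_HOME = /opt/hadoop
vix.env.JAVA_HOME = /usr/lib/jvm/java-8-openjdk
vix.family = hadoop
vix.fs.default.name = hdfs://namenode.example.com:8020
vix.mapred.job.tracker = jobtracker.example.com:8021
vix.splunk.home.hdfs = /user/splunk/hunk

# Example virtual index pointing at a HAR file (hypothetical values)
[har_archive_index]
vix.input.1.path = har:///archives/events.har/...
vix.provider = my_hadoop_provider

The trailing "/..." in vix.input.1.path tells Hunk to match files recursively beneath the archive path.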