Hunk User Manual

Set up a provider and virtual index in the configuration file

Once you have successfully installed and licensed Hunk, you can modify indexes.conf to create a provider and virtual index or use Hunk Web to add virtual indexes and providers.

Configure your permissions

Before you configure a provider and virtual index, confirm that Hunk has the proper permissions and gather the information you will need to set up the provider and indexes:

  • Hunk must have read-only access to the HDFS directory where your virtual index data resides.
  • Hunk must have read-write access to the HDFS directory that Hunk uses as its working directory. (This is usually your splunkMR directory, for example: /user/hue/splunk_mr.) Hunk creates the following subdirectories there:
    • /dispatch (where temporary search results are stored).
    • /packages (where the Hunk .tgz package that gets copied to the data nodes is stored).
    • /bundles (where configuration bundles are stored).
  • Hunk must have read-write access to the DataNode directory where your /tmp directory resides. This is the temp directory that you point to when you configure vix.splunk.home.datanode in your provider settings.

Gather up the following information

You'll need to know the following information about your search head, file system, and Hadoop configuration:

  • The host name and port for the NameNode of the Hadoop cluster.
  • The host name and port for the JobTracker of the Hadoop cluster.
  • Installation directories of Hadoop client libraries and Java.
  • Path to a writable directory on the DataNode/TaskTracker *nix filesystem for which the Hadoop user account has read and write permission.
  • Path to a writable directory in HDFS that can be used exclusively by this Hunk search head.
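As a rough guide, each item above maps to a provider setting you will configure later in indexes.conf. The host names, ports, and paths below are placeholders, not defaults:

# Java installation directory
vix.env.JAVA_HOME = /usr/lib/jvm/java
# Hadoop client libraries
vix.env.HADOOP_HOME = /usr/lib/hadoop
# NameNode host and port
vix.fs.default.name = hdfs://namenode.example.com:8020
# JobTracker host and port
vix.mapred.job.tracker = jobtracker.example.com:8021
# Writable directory on each DataNode/TaskTracker local filesystem
vix.splunk.home.datanode = /opt/splunkmr
# HDFS directory dedicated to this search head
vix.splunk.home.hdfs = /user/hunk/workdir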

Edit indexes.conf

Edit indexes.conf to establish a virtual index. This is where you tell Splunk about your Hadoop cluster and about the data you want to access via virtual indexes.

Create indexes.conf

Create a copy of indexes.conf and place it in your local directory (typically $SPLUNK_HOME/etc/system/local).

Note: The following changes to indexes.conf become effective at search time, no restart is necessary.

Create a provider

Create a provider stanza for each Hadoop cluster. In this stanza, you provide:

  • The path to your Java installation.
  • The path to your Hadoop client libraries.
  • Any other MapReduce settings that you want to use when running searches against this cluster.

1. Create the provider. You may configure multiple indexes for a provider.

[provider:MyHadoopProvider]
vix.family                  = hadoop
vix.env.JAVA_HOME           = /path_to_java_home
vix.env.HADOOP_HOME         = /path_to_hadoop_client_libraries

2. Tell Hunk about the cluster, including the NameNode and JobTracker as well as where to find and where to install your .tgz copy.

vix.fs.default.name = hdfs://<NameNode host>:<port>
vix.mapred.job.tracker = <JobTracker host>:<port>
vix.splunk.home.hdfs = /<the path in HDFS that is dedicated to this search head for temp storage>
vix.splunk.setup.package = /<the path on the search head to the package to install in the data nodes>
vix.splunk.home.datanode = /<the path on the TaskTracker's Linux filesystem on which the above Splunk package should be installed>

3. Optionally configure timeout and replication for bundle and package setup on your TaskTrackers:

vix.splunk.setup.bundle.max.inactive.wait = max time (in seconds) that a downloading bundle file can be inactive.
vix.splunk.setup.bundle.poll.interval = polling interval (in milliseconds) for tasks waiting on TaskTracker bundle install.
vix.splunk.setup.bundle.setup.timelimit = time limit (in milliseconds) for setting up bundle on TaskTracker.
vix.splunk.setup.package.max.inactive.wait = max time (in seconds) that Splunk package download can be inactive.
vix.splunk.setup.package.poll.interval = polling interval (in milliseconds) for tasks waiting on Splunk installation on TaskTracker.
vix.splunk.setup.package.setup.timelimit = time limit (in milliseconds) for setting up Splunk on TaskTracker.
vix.splunk.setup.bundle.replication = set custom replication factor for bundle on hdfs.
vix.splunk.setup.package.replication = set custom replication factor for Splunk package on hdfs.
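For instance, a provider tuned to tolerate slow package downloads might use settings along these lines. The values are illustrative, not recommendations:

vix.splunk.setup.package.max.inactive.wait = 10
vix.splunk.setup.package.poll.interval = 100
vix.splunk.setup.package.setup.timelimit = 20000
vix.splunk.setup.bundle.max.inactive.wait = 5
vix.splunk.setup.bundle.poll.interval = 100
vix.splunk.setup.bundle.setup.timelimit = 20000
vix.splunk.setup.bundle.replication = 3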

Create a virtual index

1. Define one or more virtual indexes for each provider. This is where you specify how the data is organized into directories, which files are part of the index, and hints about the time range of the files' contents.

[myvirtualindex]
vix.provider          = MyHadoopProvider
vix.input.1.path      = /home/myindex/data/${date_date}/${date_hour}/${server}/...
vix.input.1.accept    = \.gz$
vix.input.1.et.regex  = /home/myindex/data/(\d+)/(\d+)/
vix.input.1.et.format = yyyyMMddHH
vix.input.1.et.offset = 0
vix.input.1.lt.regex  = /home/myindex/data/(\d+)/(\d+)/
vix.input.1.lt.format = yyyyMMddHH
vix.input.1.lt.offset = 3600
  • For vix.input.1.path: Provide a fully qualified path to the data that belongs in this index and any fields you want to extract from the path.

For example, with the path above, a file found under /home/myindex/data/20170101/01/server1/ produces the fields date_date=20170101, date_hour=01, and server=server1.

Items enclosed in ${} are extracted as fields and added to each search result from that path. The search ignores directories that do not match the path expression, which significantly improves performance.

  • For vix.input.1.accept, provide a regular expression whitelist of files to match.
  • For vix.input.1.ignore, provide a regular expression blacklist of files to ignore. Note that ignore takes precedence over accept.
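For example, to index gzipped files while skipping files that are still being written under a temporary name (the .tmp naming convention here is an assumption about your data layout):

vix.input.1.accept = \.gz$
vix.input.1.ignore = \.tmp$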

2. Use the regex, format, and offset settings to extract a time range for the data contained in a particular path. The time range is made up of two parts: earliest time (et) and latest time (lt). The following configurations can be used:

  • For vix.input.1.et.regex and vix.input.1.lt.regex, provide a regular expression that matches the portion of the directory path that encodes the date and time, to allow Hunk to interpret time from the path.
    Use capturing groups to extract the parts that make up the timestamp. The values of the capturing groups are concatenated together and interpreted according to the specified format. Extracting a time range from the path significantly speeds up searches over particular time windows by ignoring directories that fall outside the search's time range.
  • For vix.input.1.et.format and vix.input.1.lt.format, provide a date/time format string for how to interpret the data extracted by the above regex. The format string specification can be found in the Java SimpleDateFormat documentation. The following two non-standard formats are also supported: epoch, to interpret the data as an epoch time, and mtime, to use the modification time of the file rather than the data extracted by the regex.
  • For vix.input.1.et.offset and vix.input.1.lt.offset, you can optionally provide an offset (in seconds) to account for time zones and/or safety boundaries.
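As a worked example, using the sample index above: for a file under /home/myindex/data/20170101/01/server1/, the regex /home/myindex/data/(\d+)/(\d+)/ captures 20170101 and 01, which are concatenated to 2017010101 and interpreted with the format yyyyMMddHH as 2017-01-01 01:00. With these offsets:

vix.input.1.et.offset = 0
vix.input.1.lt.offset = 3600

the directory is treated as covering the one-hour window from 01:00 to 02:00, so a search whose time range falls entirely outside that window can skip the directory without reading any files.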

3. Set bundle deletion parameters by describing how many bundles may accrue before Splunk deletes them:

vix.splunk.setup.bundle.reap.limit = 5

The default value is 5, which means that when there are more than five bundles, Hunk will delete the oldest one.

Set provider configuration variables

Hunk also provides preset configuration variables for each provider you create. You can leave the preset variables in place or edit them as needed. If you want to edit them, see Provider Configuration Variables in the reference section of this manual.

Note: If you are configuring Hunk to work with YARN, you must add new settings. See "Required configuration variables for YARN" in this manual.
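For orientation, a YARN provider typically replaces the JobTracker setting with ResourceManager settings along the following lines; treat this as a sketch and consult "Required configuration variables for YARN" for the authoritative list:

vix.mapreduce.framework.name = yarn
vix.yarn.resourcemanager.address = <ResourceManager host>:<port>
vix.yarn.resourcemanager.scheduler.address = <ResourceManager host>:<scheduler port>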

Edit props.conf (optional) to define data processing

Optionally, you can edit props.conf to define how data files are processed. Both index-time and search-time attributes are accepted. The example below shows how Twitter data (JSON objects representing tweets) is processed using index-time and search-time props. It assumes single-line JSON data, with _time as a calculated field (note that index-time timestamping is disabled).

[source::<path to your Twitter data>]
priority         = 100
sourcetype       = twitter-hadoop

[twitter-hadoop]
KV_MODE          = json
EVAL-_time       = strptime(postedTime, "%Y-%m-%dT%H:%M:%S.%lZ")
Last modified on 22 February, 2017

This documentation applies to the following versions of Hunk®(Legacy): 6.1, 6.1.1, 6.1.2, 6.1.3, 6.2, 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.2.5, 6.2.6, 6.2.7, 6.2.8, 6.2.9, 6.2.10, 6.2.11, 6.2.12, 6.2.13, 6.3.0, 6.3.1, 6.3.2, 6.3.3, 6.3.4, 6.3.5, 6.3.6, 6.3.7, 6.3.8, 6.3.9, 6.3.10, 6.3.11, 6.3.12, 6.3.13, 6.4.0, 6.4.1, 6.4.2, 6.4.3, 6.4.4, 6.4.5, 6.4.6, 6.4.7, 6.4.8, 6.4.9, 6.4.10, 6.4.11
