
Configure Hive connectivity

Splunk Analytics for Hadoop reaches End of Life on January 31, 2025.

By default, Hive stores table data either as binary files or as sets of text files delimited with special characters. Splunk Analytics for Hadoop currently supports four Hive (v0.12) file formats: Textfile, RCfile, ORC, and Sequencefile.

Splunk Analytics for Hadoop supports these file formats through its preprocessor framework, which provides a data preprocessor called HiveSplitGenerator. This preprocessor lets Splunk Analytics for Hadoop access and process data stored or used by Hive.

The easiest way to configure Splunk Analytics for Hadoop to connect to Hive tables is to edit indexes.conf to:

  • Provide Splunk Analytics for Hadoop with the metastore URI.
  • Specify that Splunk Analytics for Hadoop use the HiveSplitGenerator to read the Hive data.

If you don't want Splunk Analytics for Hadoop to access your metastore server, you can instead manually configure it to access the raw data files that make up your Hive tables. See "Configure Splunk Analytics for Hadoop to read your Hive tables without connecting to Metastore" in this topic.

Splunk Analytics for Hadoop currently supports the following versions of Hive:

  • 0.10
  • 0.11
  • 0.12
  • 0.13
  • 0.14
  • 1.2
  • 3.1.2

Hive 3.1.2 supports Hadoop 3.x. The earlier Hive versions listed here support only Hadoop 2.x or lower.

Before you begin

To set up Splunk Analytics for Hadoop to read Hive tables, you must have already configured your indexes and providers. If you have not set them up yet, see the topics on configuring providers and virtual indexes in this manual.

Ensure your Hadoop and Hive versions are compatible

When you set up your Hadoop data provider, make sure it uses a compatible version of Hive. Hadoop 2.x or lower requires Hive 2.x or lower. Hadoop 3.x requires Hive 3.x or later. If you configure a Hadoop 3.x cluster with a Hive instance that is version 2.x or lower, you will run into connectivity issues when you try to save a file in a Hive file format.

Configure Hive connectivity with a metastore

To configure Hive connectivity, you provide the vix.hive.metastore.uris setting in your provider stanza.

Splunk Analytics for Hadoop uses the specified metastore server to read table information, including column names, types, data location, and format, which allows it to process the search request.

Here's an example of a configured provider stanza that properly enables Hive connectivity. Note that a table contains one or more files, and that each virtual index can have multiple input paths, one for each table.

[provider:BigBox]
...
vix.splunk.search.splitter = HiveSplitGenerator 
vix.hive.metastore.uris = thrift://metastore.example.com:9083

[orders]
vix.provider = BigBox
vix.input.1.path = /user/hive/warehouse/user-orders/...
vix.input.1.accept = \.txt$
vix.input.1.splitter.hive.dbname = default 
vix.input.1.splitter.hive.tablename = UserOrders

vix.input.2.path = /user/hive/warehouse/reseller-orders/...
vix.input.2.accept = .*
vix.input.2.splitter.hive.dbname = default
vix.input.2.splitter.hive.tablename = ResellerOrders

In the rare case that the split logic of your table's Hadoop InputFormat implementation differs from that of Hadoop's FileInputFormat, the HiveSplitGenerator split logic does not work. Instead, you must implement a custom SplitGenerator and use it in place of the default SplitGenerator. See "Configure Splunk Analytics for Hadoop to use a custom file format" in this topic for more information.

Configure Splunk Analytics for Hadoop to use a custom file format

To use a custom file format, edit your provider stanza and add the path of the .jar file that contains your custom classes to the vix.splunk.jars setting.

Note that if you don't specify an InputFormat class, files are treated as text files and broken into records by newline characters.
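
For example, a provider stanza that loads a custom splitter might look like the following sketch. The .jar path and class name here are hypothetical; substitute the location of your own .jar file and the fully qualified name of your custom SplitGenerator class:

[provider:BigBox]
...
# Hypothetical .jar location and class name; replace with your own.
vix.splunk.jars = /opt/splunk/jars/my-custom-format.jar
vix.splunk.search.splitter = com.example.MyCustomSplitGenerator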

Configure Splunk Analytics for Hadoop to read your Hive tables without connecting to Metastore

If you cannot or do not want to expose your metastore server, you can configure Hive connectivity by specifying additional configuration settings. For Splunk Analytics for Hadoop, the minimum required information is:

  • columnnames
  • columntypes

Other information is required if you specified it when you created the table. For example, if your tables specify a custom InputFormat instead of a Hive file format, you must tell Splunk Analytics for Hadoop about it.

Create a stanza in indexes.conf that provides Splunk Analytics for Hadoop with the list of column names and types for your Hive tables. These column names become the field names you see when running reports in Splunk Analytics for Hadoop:

[your-provider]
vix.splunk.search.splitter = HiveSplitGenerator 

[your-vix]
vix.provider = your-provider
vix.input.1.path = /user/hive/warehouse/employees/...
vix.input.1.splitter.hive.columnnames = name,salary,subordinates,deductions,address
vix.input.1.splitter.hive.columntypes = string:float:array<string>:map<string,float>:struct<street:string,city:string,state:string,zip:int>
vix.input.1.splitter.hive.fileformat = sequencefile
vix.input.2.path = /user/hive/warehouse/employees_rc/...
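
Each input needs its own column names, types, and file format. A hypothetical completion of the second input (assuming the employees_rc table stores the same columns in RCfile format) might look like this:

vix.input.2.splitter.hive.columnnames = name,salary,subordinates,deductions,address
vix.input.2.splitter.hive.columntypes = string:float:array<string>:map<string,float>:struct<street:string,city:string,state:string,zip:int>
vix.input.2.splitter.hive.fileformat = rcfile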

Partitioning table data

When you use the Hive metastore, Splunk Analytics for Hadoop automatically analyzes the tables, preserving partition keys and values and, based on your search criteria, pruning unwanted partitions. This can help speed up searches.

When not using a metastore, you can update your [virtual-index] stanza to tell Splunk Analytics for Hadoop about the partitions by including key values as part of the file path. For example, the following configuration

vix.input.1.path = /apps/hive/warehouse/sdc_orc2/${server}/${date_date}/...

would extract and recognize the "server" and "date_date" partitions in the following path:

/apps/hive/warehouse/sdc_orc2/idxr01/20120101/000859_0

Here is an example of a partitioned path from which Splunk Analytics for Hadoop automatically recognizes the same partitions without any extra configuration:

/apps/hive/warehouse/sdc_orc2/server=idxr01/date_date=20120101/000859_0
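
Putting these pieces together, a metastore-less virtual index stanza that both declares the table schema and extracts partitions from the path might look like the following sketch. The column names and types here are hypothetical; replace them with the actual schema of your table:

[sdc_orc2]
vix.provider = your-provider
vix.input.1.path = /apps/hive/warehouse/sdc_orc2/${server}/${date_date}/...
vix.input.1.splitter.hive.columnnames = order_id,amount,status
vix.input.1.splitter.hive.columntypes = int:float:string
vix.input.1.splitter.hive.fileformat = orc

A search that filters on the server or date_date fields then reads only the matching partition directories.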