System and software requirements
Make sure you have access to at least one Hadoop cluster (with data in it) and the ability to run MapReduce jobs on that data.
Splunk Analytics for Hadoop requires Java 1.8 or higher, and is supported on the following Hadoop distributions and versions:
- Apache Hadoop
  - 0.20
  - 1.0.2
  - 1.0.3
  - 1.0.4
  - 2.4
  - 2.6
  - 2.7
- Cloudera Distribution Including Apache Hadoop
  - 4
  - 4.2
  - 4.3.0
  - 4.4 (HA NN and HA JT)
  - 5.0
  - 5.3
  - 5.3 (HA)
  - 5.4
  - 5.5
  - 5.6
- Hortonworks Data Platform (HDP)
  - 1.3
  - 2.0
  - 2.1
  - 2.2
  - 2.3
  - 2.4
- MapR
  - 2.1
  - 3.0
  - 5.0
- Amazon Elastic MapReduce (EMR)
- IBM InfoSphere BigInsights
  - 5.1
- Pivotal HD
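One way to confirm a node meets these requirements is to check the installed Java and Hadoop versions on each node. The following is a minimal sketch; it assumes `java` and `hadoop` are on the PATH of the node you run it on.

```shell
# Prerequisite check for a Hadoop node: report Java and Hadoop versions.

# major_minor: print the first two dotted fields of a version string,
# e.g. "2.7.3" -> "2.7".
major_minor() {
  echo "$1" | cut -d. -f1,2
}

# Java 1.8 or higher is required. "java -version" writes to stderr.
java_ver=$(java -version 2>&1 | awk -F '"' '/version/ {print $2; exit}')
echo "Java version: ${java_ver:-not found}"

# Hadoop version, e.g. the "2.7.3" in "Hadoop 2.7.3".
hadoop_ver=$(hadoop version 2>/dev/null | awk 'NR==1 {print $2}')
echo "Hadoop version: ${hadoop_ver:-not found} (major.minor: $(major_minor "${hadoop_ver:-0.0}"))"
```

Compare the reported versions against the supported list above before proceeding.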
What you need on your Hadoop nodes
On each Hadoop TaskTracker node, you need a directory on the *nix file system that meets the following requirements:
- One gigabyte of free disk space for a copy of Splunk.
- 5-10 GB of free disk space for temporary storage, which is used by the search processes.
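You can verify that a candidate directory has enough free space with `df`. In this sketch, `SPLUNK_TMP_DIR` is a hypothetical placeholder; substitute the directory you plan to use on each TaskTracker node.

```shell
# Report free space (in GB) on the filesystem holding the candidate
# directory. SPLUNK_TMP_DIR is a placeholder, not a Splunk setting.
SPLUNK_TMP_DIR="${SPLUNK_TMP_DIR:-/tmp}"

# df -P prints POSIX output; field 4 of the data row is available 1K blocks.
free_gb=$(df -P "$SPLUNK_TMP_DIR" | awk 'NR==2 {printf "%d", $4 / 1048576}')
echo "Free space at ${SPLUNK_TMP_DIR}: ${free_gb} GB"

# ~1 GB for the Splunk copy plus 5-10 GB of temporary search storage.
if [ "$free_gb" -lt 11 ]; then
  echo "Warning: less than 11 GB free; searches may run out of space." >&2
fi
```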
What you need on your Hadoop file system
On your Hadoop file system (HDFS or otherwise) you will need:
- A subdirectory under jobtracker.staging.root.dir (usually /user/) with the name of the user account under which Splunk Analytics for Hadoop runs on the search head. For example, if Splunk Analytics for Hadoop is started by user "BigDataUser" and jobtracker.staging.root.dir=/user/, you need a directory /user/BigDataUser that is accessible by user "BigDataUser".
- A subdirectory under that directory that this server can use for intermediate storage, for example /user/BigDataUser/server01/.
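These directories can be created ahead of time with the standard `hdfs dfs` client. The sketch below uses the example user name from above and a hypothetical server identifier "server01"; substitute your own account name, staging root, and identifier.

```shell
# Create the per-user staging directory and a per-server intermediate
# directory on HDFS. All three values below are example placeholders.
HADOOP_USER="BigDataUser"
STAGING_ROOT="/user"                # value of jobtracker.staging.root.dir
SERVER_DIR="${STAGING_ROOT}/${HADOOP_USER}/server01"

if command -v hdfs >/dev/null 2>&1; then
  # -p creates parent directories as needed.
  hdfs dfs -mkdir -p "$SERVER_DIR"
  # Make the tree accessible to the user running Splunk Analytics for Hadoop.
  hdfs dfs -chown -R "$HADOOP_USER" "${STAGING_ROOT}/${HADOOP_USER}"
else
  echo "hdfs client not found; run this on a Hadoop client node" >&2
fi
```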
This documentation applies to the following versions of Splunk® Enterprise: 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.0.5, 7.0.6, 7.0.7, 7.0.8, 7.0.9, 7.0.10, 7.0.11, 7.0.13, 7.1.0, 7.1.1, 7.1.2, 7.1.3, 7.1.4, 7.1.5, 7.1.6, 7.1.7, 7.1.8, 7.1.9, 7.1.10, 7.2.0, 7.2.1, 7.2.2, 7.2.3, 7.2.4, 7.2.5, 7.2.6, 7.2.7, 7.2.8, 7.2.9, 7.2.10, 7.3.0, 7.3.1, 7.3.2, 7.3.3, 8.0.0, 8.0.1