Step 2: Set up your data
1. Upload the tutorial sample data file, Hunkdata.json.gz, and the Hunk installer to the virtual machine you configured in "Set up your virtual machine".
2. SSH to your virtual machine, and move Hunkdata.json.gz and your Splunk download to the HDFS user's home directory.
Note: Make sure that the Hunk user has read and write access to the directory in which you place your data.
The following is an example of how you can do this. Your directory structure may vary depending upon your configuration:
ssh root@<your sandbox ip>
su <hunk user>
cp Hunkdata.json.gz ~hdfs    (this copies the data to the hdfs user's home directory)
3. Put the data into HDFS as the hdfs user. For example:
su - hdfs -c "hadoop fs -mkdir /data"
su - hdfs -c "hadoop fs -put ~/Hunkdata.json.gz /data"
su - hdfs -c "hadoop fs -ls /data"
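If you want to confirm the file landed in HDFS intact, one way is to stream it back out and preview a few events. This is a sketch, assuming the /data directory created above and a shell with gunzip available:

```shell
# Hypothetical verification step (not part of the tutorial's required commands):
# stream the gzipped sample data out of HDFS and show the first few events.
su - hdfs -c "hadoop fs -cat /data/Hunkdata.json.gz" | gunzip | head -5
```

If the command prints JSON events, the data is in place and readable.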
This documentation applies to the following versions of Hunk®(Legacy): 6.0, 6.0.1, 6.0.2, 6.0.3, 6.1, 6.1.1, 6.1.2, 6.1.3, 6.2, 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.2.5, 6.2.6, 6.2.7, 6.2.8, 6.2.9, 6.2.10, 6.2.11, 6.2.12, 6.2.13, 6.3.0, 6.3.1, 6.3.2, 6.3.3, 6.3.4, 6.3.5, 6.3.6, 6.4.0, 6.4.1, 6.4.2, 6.4.3, 6.4.4, 6.4.5, 6.4.6, 6.4.7, 6.4.8, 6.4.9, 6.4.10, 6.4.11