Splunk® Hadoop Connect

Deploy and Use Splunk Hadoop Connect

App dashboard

From the Home page, you can view summary information about your scheduled export jobs, indexed Hadoop Distributed File System (HDFS) data, and the Hadoop clusters to which you have configured connections. This page includes three panels: Scheduled Exports, Indexed HDFS Data by Source, and Explore Cluster.

Scheduled Exports panel

In the Scheduled Exports panel, work with new and existing scheduled export jobs. Scheduled exports are searches that you configure to run on a schedule and send their results to an HDFS directory or mounted file system. For information about how scheduled export jobs work, see "Export to HDFS or a mounted file system".

  • Click More details for information about jobs that ran in the last 24 hours for your scheduled exports. See "Use the Troubleshooting menu" for more information.

Reading scheduled export information

The table provides the following information:

  • Name: The name assigned to the scheduled export job when it is created.
  • Next scheduled run: How long until the next job runs for the scheduled export.
  • Export cursor: How long ago the last job ran for the scheduled export.
  • Frequency: How often a job runs for this scheduled export.
  • Load factor: How much of the available system capacity the last job run used. This number is calculated as:
    (export duration) / (exported time range)
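As a worked example of the formula above, here is a minimal sketch of the load-factor calculation. The function name and values are illustrative, not part of the product:

```python
def load_factor(export_duration_s: float, exported_range_s: float) -> float:
    """Load factor = (export duration) / (exported time range)."""
    return export_duration_s / exported_range_s

# A job that takes 6 minutes to export a 60-minute time range used
# about 10% of available capacity:
print(load_factor(6 * 60, 60 * 60))  # 0.1
```

A load factor approaching 1 means jobs take nearly as long to run as the time range they cover, leaving little headroom before the export falls behind.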

Performing actions on scheduled exports

For each scheduled export listed, click Details to open the Job Details page.

The Job Details page lets you view the following information about the scheduled export:

  • Scheduled export links: Links that take you to the export base directory, where you can explore the exported data. If the export job is in the Running state, you also see links to the search jobs that this export job is executing.
  • Scheduled export configuration: The configuration details for the scheduled export. For information about these values, see "Export to HDFS or a mounted file system".
  • Scheduled export status: Details on the status of the export, including the earliest and latest committed export data points.
  • Scheduled export job status: Status of the last run job.

On the Scheduled Exports panel, you can do the following:

  • Click Explore for a job to drill down and view job output by partition. You can search the partitioned segment or add it to the Splunk platform as an input. To add an input, see "Import from HDFS".
  • Click Run now to tell the system to run an export job as soon as possible.
  • Click Pause to suspend job runs while keeping the scheduled export configuration and status intact for later use.
  • Click Resume to enable a paused job.
  • Click Delete to stop the job and remove the scheduled export configuration. Previously exported data is retained.

Indexed HDFS Data by Source panel

The Indexed HDFS Data by Source panel displays your indexed HDFS data, broken down by source.

Click Manage HDFS Inputs to view and manage your configured inputs.

On this page you can:
  • Click New to add a new input. See "Import from HDFS".
  • Click a resource name to modify it. For information about source configuration fields, see "Import from HDFS".
  • Source type: View the source type for the resource.
  • App: View the app that manages the resource.
  • Status: Enable or disable the source by clicking Enabled or Disabled.
  • Actions: Clone the source information or delete it from the system.

Explore Cluster panel

On the Explore Cluster panel, you can:

  • View your configured HDFS clusters and mounted file systems. Select a cluster or file system and drill down to the file level. Explore directories and files and import them into the Splunk platform. See "Explore HDFS or a mounted file system".
This documentation applies to the following versions of Splunk® Hadoop Connect: 1.1, 1.2, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5
