Splunk® Enterprise

Getting Data In


Splunk Enterprise version 6.x is no longer supported as of October 23, 2019. See the Splunk Software Support Policy for details. For information about upgrading to a supported version, see How to upgrade Splunk Enterprise.

What Splunk Enterprise does with your data (and how to make it do it better)

Splunk Enterprise consumes any sort of data and indexes it, transforming it into useful and searchable knowledge in the form of events. The data pipeline, displayed below, shows the main processes that act on the data during indexing. These processes constitute event processing. After the data has been processed into events, you can associate the events with knowledge objects to further enhance their usefulness.

The data pipeline

Once a chunk of data enters Splunk Enterprise, it moves through the data pipeline, which transforms the data into searchable events. This diagram shows the main steps in the data pipeline:

[Diagram: the main steps in the data pipeline]

For a concise description of the data pipeline, see "How data moves through Splunk Enterprise" in the Distributed Deployment manual.

Splunk Enterprise makes reasonable decisions for most types of data during event processing, so that the resulting events are immediately useful and searchable. However, depending on the data and what sort of knowledge you need to extract from it, you might want to tweak one or more steps of event processing.

Event processing

Event processing occurs in two stages, parsing and indexing. All data that comes into Splunk Enterprise enters through the parsing pipeline as large chunks. During parsing, Splunk Enterprise breaks these chunks into events, which it then hands off to the indexing pipeline, where final processing occurs.

During both parsing and indexing, Splunk Enterprise acts on the data, transforming it in various ways. Most of these processes are configurable, so you have the ability to adapt them to your needs. In the description that follows, each link takes you to a topic that discusses one of these processes, with information on ways you can configure it.

While parsing, Splunk Enterprise performs a number of actions, including:

  • Extracting a set of default fields for each event, including host, source, and sourcetype.
  • Configuring character set encoding.
  • Identifying line termination using linebreaking rules. While many events are short and only take up a line or two, others can be long. You can also modify line termination settings interactively, using the Splunk Enterprise data preview feature.
  • Identifying timestamps or creating them if they don't exist. At the same time that it processes timestamps, Splunk Enterprise identifies event boundaries. You can also modify timestamp settings interactively, using the Splunk Enterprise data preview feature.
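As an illustration, several of these parsing behaviors can be controlled through configuration stanzas. The sketch below shows a hypothetical source type and input; the stanza names, file paths, and regular expressions are examples only, not a definitive configuration:

```ini
# props.conf -- hypothetical source type "myapp_log"; values are illustrative
[myapp_log]
# Character set encoding for incoming data
CHARSET = UTF-8
# Break events where a newline is followed by a date; disable line merging
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
SHOULD_LINEMERGE = false
# Locate and parse the timestamp at the start of each event
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

# inputs.conf -- assign default fields (host, source, sourcetype) at input time
[monitor:///var/log/myapp/app.log]
sourcetype = myapp_log
host = webserver01
```

Settings such as these live on whichever instance performs parsing, typically the indexer or a heavy forwarder.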

In the indexing pipeline, Splunk Enterprise performs additional processing, including:

  • Breaking all events into segments that can then be searched. You can determine the level of segmentation. The segmentation level affects indexing and searching speed, search capability, and efficiency of disk compression.
  • Building the index data structures.
  • Writing the raw data and index files to disk, where post-indexing compression occurs.
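For example, the segmentation level for a source type can be selected in props.conf by referencing a segmenter stanza defined in segmenters.conf (built-in stanzas include inner, outer, none, and full). The source type name below is hypothetical:

```ini
# props.conf -- hypothetical source type; choose a built-in segmenter
[myapp_log]
# "inner" indexes the smaller tokens within major segments,
# trading some search flexibility for a smaller index
SEGMENTATION = inner
```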

The distinction between the parsing and indexing pipelines matters mainly for forwarders. Heavy forwarders can fully parse data locally and then forward the parsed data to receiving indexers, where the final indexing occurs. With universal forwarders, on the other hand, the data gets forwarded after only minimal parsing, and most parsing occurs on the receiving indexer.
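As a sketch, either type of forwarder points at its receiving indexers through outputs.conf; the group name and host names here are hypothetical, and port 9997 is the conventional receiving port:

```ini
# outputs.conf on the forwarder -- group and host names are examples
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Receiving indexers, listening on their configured receiving port
server = indexer1.example.com:9997, indexer2.example.com:9997
```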

  • For more information about events and what happens to them during the indexing process, see Overview of event processing in this manual.
  • For a detailed diagram of the indexing pipelines and an explanation of how indexing works, see "How Indexing Works" in the Community Wiki.

Enhance and refine events

Once the data has been transformed into events, you can make the events even more useful by associating them with knowledge objects, such as event types, field extractions, and saved searches. For information about managing Splunk knowledge, read the Knowledge Manager manual, starting with "What is Splunk knowledge?".
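For instance, one simple kind of knowledge object, an event type, can be defined in eventtypes.conf; the stanza name and search string below are hypothetical examples:

```ini
# eventtypes.conf -- hypothetical event type matching HTTP server errors
[web_error]
search = sourcetype=access_combined status>=500
```

Events matching this search are then tagged with the event type web_error, which you can use in later searches.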

Last modified on 26 October, 2014

This documentation applies to the following versions of Splunk® Enterprise: 6.0, 6.0.1, 6.0.2, 6.0.3, 6.0.4, 6.0.5, 6.0.6, 6.0.7, 6.0.8, 6.0.9, 6.0.10, 6.0.11, 6.0.12, 6.0.13, 6.0.14, 6.0.15, 6.1, 6.1.1, 6.1.2, 6.1.3, 6.1.4, 6.1.5, 6.1.6, 6.1.7, 6.1.8, 6.1.9, 6.1.10, 6.1.11, 6.1.12, 6.1.13, 6.1.14
