Distributed Deployment Manual

 


How data moves through Splunk: the data pipeline


Data in Splunk transitions through several phases as it moves along the data pipeline, from its origin in sources such as log files and network feeds to its transformation into searchable events that encapsulate valuable knowledge. The data pipeline includes these segments:

  • Input
  • Parsing
  • Indexing
  • Search

You can assign each of these segments to a different Splunk instance.
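For example, here is a minimal sketch, assuming a universal forwarder that handles only the input segment and sends the raw data stream to indexers, which then perform parsing and indexing. The host names and port are hypothetical.

    # outputs.conf on a forwarder (hypothetical host names and port):
    # the forwarder handles the input segment only and forwards the raw
    # data stream to the indexers, which perform parsing and indexing.
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = indexer1.example.com:9997, indexer2.example.com:9997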

This diagram outlines the data pipeline:

[Diagram: Datapipeline1.png, showing the input, parsing, indexing, and search segments of the data pipeline]

Splunk instances participate in one or more segments of the data pipeline, as described in "Scale your deployment".

Note: The diagram presents a simplified, functional view of the indexing architecture and does not fully describe Splunk internals. In particular, the parsing pipeline actually consists of three pipelines: parsing, merging, and typing, which together handle the parsing function. This distinction can matter during troubleshooting, but it does not generally affect how you configure or deploy Splunk.

Input

In the input segment, Splunk consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with metadata keys. The keys apply to the entire input source. They include the host, source, and source type of the data. The keys can also include values that are used internally by Splunk, such as the character encoding of the data stream, and values that control later processing of the data, such as the index into which the events should be stored.

During this phase, Splunk does not look at the contents of the data stream, so the keys apply to the entire source, not to individual events. In fact, at this point, Splunk has no notion of individual events at all, only of a stream of data with certain global properties.
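As an illustration, the following inputs.conf stanza is a minimal sketch (the path, source type, index, and host name are hypothetical) of where these source-wide keys typically come from:

    # inputs.conf (hypothetical path and values): the settings in this
    # monitor stanza become the source-wide metadata keys (host, source,
    # source type, and target index) that annotate every block read from
    # this file. The source defaults to the monitored path.
    [monitor:///var/log/myapp/app.log]
    sourcetype = myapp_log
    index = main
    host = webserver01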

Parsing

During the parsing segment, Splunk examines, analyzes, and transforms the data. This is also known as event processing. The parsing phase includes several sub-phases (see the configuration sketch after this list):

  • Breaking the stream of data into individual lines.
  • Identifying, parsing, and setting timestamps.
  • Annotating individual events with metadata copied from the source-wide keys.
  • Transforming event data and metadata according to Splunk regex transform rules.
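The sketch below, a hedged example using a hypothetical source type, timestamp format, and regular expressions, shows how props.conf and transforms.conf settings typically drive these sub-phases: line breaking, timestamp recognition, and a regex transform.

    # props.conf (hypothetical source type, assuming lines that start with
    # a timestamp like "[2013-10-01 12:00:00]"): per-source-type rules for
    # line breaking and timestamp extraction, plus a regex transform.
    [myapp_log]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TIME_PREFIX = ^\[
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 19
    TRANSFORMS-anonymize = mask_account_ids

    # transforms.conf: the transform referenced above, which rewrites the
    # raw event to mask a (hypothetical) account number pattern.
    [mask_account_ids]
    REGEX = (.*acct=)\d+(.*)
    FORMAT = $1XXXXXXX$2
    DEST_KEY = _raw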

Indexing

During indexing, Splunk takes the parsed events and writes them to the search index on disk. It writes both compressed raw data and the corresponding index files.
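A minimal indexes.conf sketch (the index name is hypothetical; the paths follow Splunk's defaults) shows where that compressed raw data and those index files live on disk:

    # indexes.conf (hypothetical index name): each index keeps its
    # compressed raw data and index files in bucket directories under
    # these paths.
    [myapp]
    homePath   = $SPLUNK_DB/myapp/db
    coldPath   = $SPLUNK_DB/myapp/colddb
    thawedPath = $SPLUNK_DB/myapp/thaweddb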

For brevity, parsing and indexing are often referred to together as the indexing process. At a high level, that's fine. But when you need to look more closely at the actual processing of data, it can be important to consider the two segments individually.

Search

Splunk's search function manages all aspects of how the user sees and uses the indexed data, including interactive and scheduled searches, reports and charts, dashboards, and alerts. As part of its search function, Splunk stores user-created knowledge objects, such as saved searches, event types, views, and field extractions.
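For instance, a scheduled saved search is one such knowledge object. The sketch below uses a hypothetical search string, schedule, and index name:

    # savedsearches.conf (hypothetical name, search, and schedule): a
    # scheduled search stored and run by the search segment.
    [Errors in the last hour]
    search = index=myapp sourcetype=myapp_log ERROR | stats count by host
    enableSched = 1
    cron_schedule = 0 * * * *
    dispatch.earliest_time = -1h
    dispatch.latest_time = now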

For more information on the various steps in the pipeline, see "How indexing works".

This documentation applies to the following versions of Splunk: 4.2, 4.2.1, 4.2.2, 4.2.3, 4.2.4, 4.2.5, 4.3, 4.3.1, 4.3.2, 4.3.3, 4.3.4, 4.3.5, 4.3.6, and 4.3.7.

