How Splunk Enterprise handles your data
Splunk Enterprise consumes any sort of data and indexes it, transforming it into searchable knowledge in the form of events. The data pipeline shows the main processes that act on the data during indexing. These processes constitute event processing. After the data is processed into events, you can associate the events with knowledge objects to enhance their usefulness.
The data pipeline
After a chunk of data enters Splunk Enterprise, it moves through the data pipeline, which transforms the data into searchable events. The main steps in the pipeline are input, parsing, indexing, and search.
For a description of the data pipeline, see "How data moves through Splunk Enterprise" in the Distributed Deployment Manual.
Event processing occurs in two stages, parsing and indexing. All data that comes into Splunk Enterprise enters through the parsing pipeline as large chunks. During parsing, Splunk Enterprise breaks these chunks into events. It then hands off the events to the indexing pipeline, where final processing occurs.
During both parsing and indexing, Splunk Enterprise transforms the data. You can configure most of these processes to adapt them to your needs.
While parsing, Splunk Enterprise performs a number of actions (a sample configuration appears after this list), including:
- Extracting a set of default fields for each event, including host, source, and sourcetype.
- Configuring character set encoding.
- Identifying line termination using line breaking rules. You can also modify line termination settings interactively, using the "Set Sourcetype" page in Splunk Web.
- Identifying or creating timestamps. At the same time that it processes timestamps, Splunk Enterprise identifies event boundaries. You can modify timestamp settings interactively, using the "Set Sourcetype" page.
- Masking sensitive event data (such as credit card or Social Security numbers), if you set up Splunk Enterprise to do so at this stage. You can also configure it to apply custom metadata to incoming events.
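To make these parsing-stage controls concrete, here is a minimal props.conf sketch. The source type name, the timestamp format, and the masking pattern are hypothetical placeholders; the attribute names (CHARSET, LINE_BREAKER, TIME_PREFIX, SEDCMD, and so on) are standard props.conf settings.

    # props.conf (hypothetical stanza for illustration)
    [my_custom_sourcetype]
    # Character set encoding of the incoming data
    CHARSET = UTF-8
    # Treat each line as one event, breaking on newlines
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    # Timestamp at the start of each event, e.g. 2015-06-01 12:34:56
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 19
    # Mask anything that looks like a Social Security number
    SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g

Settings like these take effect on whichever instance performs parsing, a point that matters for forwarders, as described later in this topic.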
In the indexing pipeline, Splunk Enterprise performs additional processing, including:
- Breaking all events into segments that can then be searched. You can determine the level of segmentation, which affects indexing and searching speed, search capability, and efficiency of disk compression (see the sketch after this list).
- Building the index data structures.
- Writing the raw data and index files to disk, where post-indexing compression occurs.
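As a sketch of the segmentation control mentioned above, the SEGMENTATION attribute in props.conf selects one of the segmenter stanzas defined in segmenters.conf, such as the built-in inner, outer, or full segmenters. The source type name here is a hypothetical placeholder:

    # props.conf
    [my_custom_sourcetype]
    # Use inner segmentation, which produces smaller segments than
    # full segmentation; this is a trade-off between index size,
    # indexing speed, and search flexibility
    SEGMENTATION = inner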
The distinction between parsing and indexing pipelines matters mainly for forwarders. Heavy forwarders can parse data locally and then forward the parsed data on to receiving indexers, where the final indexing occurs. With universal forwarders, the data gets forwarded after minimal parsing. Most parsing then occurs on the receiving indexer.
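In either case, a forwarder is pointed at its receiving indexers through outputs.conf. This is a minimal sketch; the group name and host:port values are hypothetical (9997 is the conventional receiving port):

    # outputs.conf on the forwarder
    [tcpout]
    defaultGroup = my_indexers

    [tcpout:my_indexers]
    server = indexer1.example.com:9997, indexer2.example.com:9997

On a heavy forwarder, parsing settings such as the props.conf examples above take effect before the data leaves; on a universal forwarder, the same settings must live on the receiving indexer instead.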
For information about events and what happens to them during the indexing process, see "Overview of event processing" in this manual.
Enhance and refine events
After the data has been transformed into events, you can make the events more useful by associating them with knowledge objects, such as event types, field extractions, and reports. For information about managing Splunk knowledge, see the Knowledge Manager Manual, starting with "What is Splunk knowledge?".
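As one small example of a knowledge object, a search-time field extraction can be defined with an EXTRACT setting in props.conf. The stanza name, field name, and pattern here are hypothetical:

    # props.conf: a search-time field extraction
    [my_custom_sourcetype]
    # Pull a three-digit HTTP status into a field named status_code
    EXTRACT-status = status=(?<status_code>\d{3})

Because this is a search-time extraction, it does not change what is written to the index; Splunk Enterprise applies it when events are retrieved.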