Splunk® Enterprise

Distributed Deployment Manual


Components and the data pipeline

Each segment of the data pipeline corresponds to one or more Splunk Enterprise processing components. For example, data input is a pipeline segment. You can use either an indexer or a forwarder to input data.

Most segments of the data pipeline can be handled by multiple component types. The component that you employ for a segment depends on how you structure your deployment.

For example, although you can input data directly into an indexer, you would typically do so only in small deployments consisting of a single instance. In larger deployments, and frequently also in single-instance deployments, you would instead input data through a forwarder. Delegating input tasks to a forwarder can offer greater flexibility for your deployment.
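As a minimal sketch, and assuming a hypothetical log path, source type, and index, the same monitor stanza in inputs.conf defines a data input whether it lives on a universal forwarder or, in a single-instance deployment, directly on the indexer:

    # inputs.conf (on a universal forwarder, or on the indexer itself in a single-instance deployment)
    # Monitor a hypothetical application log and assign it a source type and index.
    [monitor:///var/log/app/app.log]
    sourcetype = app_log
    index = main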

For information on the data pipeline, see How data moves through Splunk Enterprise: The data pipeline.

For information on the processing components, see Scale your deployment with Splunk Enterprise components.

How components support the data pipeline

This table correlates the data pipeline segments with the components that can handle the segment:

Data pipeline segment    Components
Data input               indexer, universal forwarder, heavy forwarder
Parsing                  indexer, heavy forwarder
Indexing                 indexer
Search                   indexer, search head

Parsing can also occur on other components under limited circumstances. For example, a universal forwarder parses structured data, such as CSV files, at input time, as the sketch that follows shows.
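As a sketch of that structured-data case, a props.conf stanza with INDEXED_EXTRACTIONS on the universal forwarder causes the forwarder itself to parse the file before forwarding it. The source type name is hypothetical:

    # props.conf (on the universal forwarder)
    # INDEXED_EXTRACTIONS moves structured-data parsing onto the forwarder.
    # "inventory_csv" is a hypothetical source type for illustration.
    [inventory_csv]
    INDEXED_EXTRACTIONS = csv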

Some typical interactions between components

These are examples of some of the ways that you can distribute and manage Splunk Enterprise functionality.

Forward data to an indexer

In most deployments, forwarders handle data input only, collecting data and sending it on to a Splunk Enterprise indexer. The indexer then performs both parsing and indexing. In some deployments, however, the forwarders also parse the data before sending it to the indexer, which then only indexes.

See How data moves through Splunk Enterprise: The data pipeline for the distinction between parsing and indexing.

Forwarders come in two flavors:

  • Universal forwarders. These maintain a small footprint on their host machine. They perform minimal processing on the incoming data streams before forwarding them on to an indexer. The indexer then parses and indexes the data.
  • Heavy forwarders. These retain much of the functionality of a full Splunk Enterprise instance. They can parse data before forwarding it to the receiving indexer. When a heavy forwarder parses the data, the indexer handles only the indexing segment.

Both types of forwarders tag data with metadata, such as host, source, and source type, before forwarding it on to the indexer.

Forwarders allow you to use resources efficiently when you process large quantities of data or data of disparate types. They also enable a number of interesting deployment topologies by offering capabilities for load balancing, data filtering, and routing.
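For example, a minimal configuration for this forwarder-to-indexer topology might look like the following sketch. The host names are hypothetical, and port 9997 is simply the conventional receiving port:

    # outputs.conf (on the forwarder)
    # Load balance forwarded data across two hypothetical indexers.
    [tcpout:primary_indexers]
    server = indexer1.example.com:9997, indexer2.example.com:9997

    [tcpout]
    defaultGroup = primary_indexers

    # inputs.conf (on each receiving indexer)
    # Listen for data from forwarders on port 9997.
    [splunktcp://9997]
    disabled = 0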

For an extended discussion of forwarders, including configuration and detailed use cases, see About forwarding and receiving in Forwarding Data.

Search across multiple indexers

In distributed search, you separate the search segment from the indexing and parsing segments. Search heads distribute search requests across a set of indexers, known as search peers, and then merge the results and present them to the user. This topology is particularly useful for horizontal scaling. To expand your deployment beyond the departmental level, you will likely employ distributed search.
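As a sketch only, assuming hypothetical indexer host names and the default management port of 8089, a search head can list its search peers in distsearch.conf (peers can also be added through Splunk Web or the CLI):

    # distsearch.conf (on the search head)
    # Register two hypothetical indexers as search peers via their management port.
    [distributedSearch]
    servers = https://indexer1.example.com:8089,https://indexer2.example.com:8089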

For an extended discussion of distributed search, including configuration and detailed use cases, see About distributed search in Distributed Search.

Configurations and the data pipeline

For guidance on where to configure Splunk Enterprise settings, see Configuration parameters and the data pipeline in the Admin Manual. The topic lists configuration settings and the data pipeline segments that they act upon.

If you know which components in your Splunk Enterprise topology handle which segments of the data pipeline, you can use that topic to determine where to configure any particular setting. For example, if you use a search head to handle the search segment, you need to configure all search-related settings on the search head.
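As a hypothetical illustration of that rule: an index-time setting such as TRANSFORMS- acts on the parsing segment, so it belongs on the component that parses (a heavy forwarder or an indexer), while a search-time setting such as EXTRACT- acts on the search segment, so it belongs on the search head. The source type, class names, and regular expression below are invented for the example:

    # props.conf (on the parsing component: heavy forwarder or indexer)
    # TRANSFORMS- is an index-time setting, so it applies during parsing.
    # "drop_debug_events" refers to a hypothetical transforms.conf stanza.
    [web_access]
    TRANSFORMS-null_queue = drop_debug_events

    # props.conf (on the search head)
    # EXTRACT- is a search-time setting, so it applies during search.
    [web_access]
    EXTRACT-status = (?<status>\d{3})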

For more information

In summary, these are the fundamental components of a Splunk Enterprise distributed environment:

  • Forwarders, which input data and send it on to the indexers
  • Indexers, which parse and index the data and service search requests
  • Search heads, which distribute search requests across the indexers and merge the results
