How data moves through the Ingest Processor solution

Data moves through Ingest Processor as follows:

  1. A tool, machine, or piece of software in your network generates data such as event logs or traces.
  2. An agent, such as a Splunk forwarder, receives the data and then sends it to the Ingest Processor. Alternatively, the device or software that generated the data can send it to the Ingest Processor without using an agent.
  3. Ingest Processor filters and transforms data in pipelines based on a partition, and then sends the resulting processed data to a specified destination such as a Splunk index.
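
For example, each pipeline that you apply is expressed as an SPL2 statement that reads from the $source dataset, optionally filters and transforms the data, and writes to a $destination. The following is a minimal sketch rather than a prescribed configuration: the sourcetype value and the index name are illustrative placeholders, and the eval of the index field assumes a Splunk platform destination.

    $pipeline = | from $source
        // Partition: process only events with this illustrative sourcetype
        | where sourcetype == "syslog"
        // Send the processed events to an illustrative Splunk index
        | eval index = "main"
        | into $destination;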

Ingest Processor routes processed data to destinations based on the pipelines that you configure with a partition and an SPL2 statement and then apply. If there are no applicable pipelines, then unprocessed data is either dropped or routed to the default destination specified in the configuration settings for the Ingest Processor. For more information about how data moves through an Ingest Processor, see Partitions.

If you don't specify a default destination, the Ingest Processor drops unprocessed data.

As the Ingest Processor receives and processes data, it generates metrics that indicate the volume of data received, processed, and sent to a destination. These metrics are stored in the _metrics index of the Splunk Cloud Platform deployment that is connected to your tenant. The Ingest Processor service surfaces these metrics in the dashboard, providing a detailed overview of the amount of data moving through the system.
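
You can also inspect these metrics directly from the connected Splunk Cloud Platform deployment by searching the _metrics index. The classic SPL searches below are sketches rather than part of this topic: the first search lists whichever metric names are actually present, and the metric name in the second search is a hypothetical placeholder that you replace with a name returned by the first search.

    | mcatalog values(metric_name) WHERE index=_metrics

    | mstats sum(pipeline.events.processed) WHERE index=_metrics span=5m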

Partitions

Ingest Processor merges received data into an internal dataset before processing and routing that data. A partition is a subset of data that you select for processing in your pipeline. Each pipeline that you apply to the Ingest Processor creates a partition. For information about how to specify a partition when creating a pipeline, see Create pipelines for Ingest Processor.

The partitions that you create and the configuration of your Ingest Processor determine how the Ingest Processor routes the received data and whether any data is dropped:

  • The data that the Ingest Processor receives is considered processed or unprocessed based on whether at least one applied pipeline includes a partition for that data. For example, if your Ingest Processor receives Windows event logs and Linux audit logs, but you applied only a pipeline with a partition for Windows event logs, then the Windows event logs go into that pipeline and are considered processed, while the Linux audit logs are considered unprocessed because the partition for that pipeline does not include them. For an SPL2 sketch of this scenario, see the example after this list.
  • Each pipeline creates a partition of the incoming data based on specified conditions, and only processes data that meets those conditions. Any data that does not meet those conditions is considered to be unprocessed.
  • If you configure your pipeline to filter the processed data in the SPL2 editor, the data that is filtered out gets dropped.
  • If you configure your Ingest Processor to have a default destination, any data that does not meet the partition conditions of your applied pipelines is classified as unprocessed and is routed to that default destination.
  • If you do not set a default destination, then any unprocessed data is dropped.
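
To make these rules concrete, the following SPL2 sketch combines the Windows event log scenario from the first item with the filtering case: the partition selects only Windows event logs, and a filter drops a subset of those events before they reach the destination. The sourcetype value and the filtered string are illustrative assumptions, not required names.

    $pipeline = | from $source
        // Partition: only Windows event logs are processed by this pipeline
        | where sourcetype == "WinEventLog"
        // Filter: drop processed events that contain this illustrative string
        | where NOT match(_raw, "EventCode=4662")
        | into $destination;

In this sketch, Linux audit logs that reach the Ingest Processor are unprocessed by this pipeline: they are routed to the default destination if you set one, and dropped otherwise.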




This documentation applies to the following versions of Splunk Cloud Platform: 9.1.2308, 9.1.2312, 9.2.2403, 9.2.2406 (latest FedRAMP release), 9.3.2408

