

Ingest Processor is currently released as a preview only and is not officially supported. See Splunk General Terms for more information. For any questions on this preview, please reach out to ingestprocessor@splunk.com.

Create pipelines for Ingest Processor

A pipeline is a set of data processing instructions written in the Search Processing Language, version 2 (SPL2). Create pipelines in your Ingest Processor to specify how you want the Ingest Processor to route and process particular subsets of the received data.

To create a valid pipeline, you must complete the following tasks:

  • Specify the partition of the incoming data for your pipeline to process. See Partitions for more information.
  • Specify the destination that the pipeline sends processed data to.
  • Write an SPL2 statement that defines what data to further process, how to process it, and where to send the processed data.
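
For orientation, the following sketch shows the general shape that these three parts take in a pipeline statement. It is an illustration only: the source type value is a placeholder, and the exact clauses that the pipeline builder generates for your partition can differ. The $source and $destination parameters stand for the received data and the destination that you select.

    // Read the received data, keep only the events that match the partition
    // condition, and send the results to the selected destination.
    $pipeline = | from $source
        | where sourcetype == "buttercup:web"   // placeholder partition condition
        | into $destination;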

When you apply a pipeline, the Ingest Processor uses those instructions to process the data that it receives.

Preventing data loss

Each pipeline filters the incoming data by a specified source type, source, or host, and processes only the data that matches those criteria. Any data that doesn't match those criteria is excluded from the pipeline. If the Ingest Processor doesn't have an additional pipeline that accepts the excluded data, that data is either routed to the default destination or dropped.

As a best practice for preventing unwanted data loss, make sure to always have a default destination for your Ingest Processor pipeline. Otherwise, all unprocessed data is dropped. See Partitions to learn about what qualifies as unprocessed data.
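
As a hypothetical illustration, suppose two pipelines are applied, each created separately and partitioned on a different placeholder source type. Events that match neither partition are unprocessed data: they go to the default destination if one is configured, and are dropped otherwise.

    // Pipeline A processes only events with the placeholder source type "cisco:asa".
    $pipeline = | from $source
        | where sourcetype == "cisco:asa"
        | into $destination;

    // Pipeline B processes only events with the placeholder source type "WinEventLog".
    $pipeline = | from $source
        | where sourcetype == "WinEventLog"
        | into $destination;

    // An event with a third source type, such as "linux:syslog", matches neither
    // partition, so it is routed to the default destination or dropped.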

Prerequisites

Before starting to create a pipeline, confirm the following:

  • If you want to partition your data by source type, the source type of the data that you want the pipeline to process is listed on the Source types page of your tenant. If your source type is not listed, then you must add that source type to your tenant and configure event breaking and merging definitions for it. See Add source types for Ingest Processors for more information.
  • The destination that you want the pipeline to send data to is listed on the Destinations page of your tenant. If your destination is not listed, then you must add that destination to your tenant. See Add or manage destinations for more information.

Steps

Complete these steps to create a pipeline that receives data associated with a specific source type, source, or host, optionally processes it, and sends that data to a destination.

  1. Navigate to the Pipelines page and then select Ingest Processor pipeline.
  2. On the Get started page, select Blank pipeline and then Next.
  3. On the Define your pipeline's partition page, do the following:
    1. Select how you want to partition the incoming data that you want to send to your pipeline. You can partition by source type, source, or host.
    2. Enter the conditions for your partition, including the operator and the value. Your pipeline will receive and process the incoming data that meets these conditions.
    3. Select Next to confirm the pipeline partition.
  4. (Optional) On the Add sample data page, enter or upload sample data for generating previews that show how your pipeline processes data.

    The sample data must be in the same format as the actual data that you want to process. See Getting sample data for previewing data transformations for more information.

  5. Select Next to confirm any sample data that you want to use for your pipeline.
  6. On the Select destination dataset page, select the name of the destination that you want to send data to, then do the following:
    1. If you selected a Splunk platform S2S or Splunk platform HEC destination, select Next.
    2. If you selected another type of destination, select Done and skip the next step.
  7. (Optional) If you're sending data to a Splunk platform deployment, you can specify a target index:
    1. In the Index name field, select the name of the index that you want to send your data to.
    2. (Optional) In some cases, incoming data already specifies a target index. If you want your Index name selection to override previous target index settings, then select the Overwrite previously specified target index check box.
    3. Select Done.
    4. Be aware that the destination index in the Splunk platform deployment is determined by a precedence order of configurations.

  8. On the SPL2 editor page, add any desired actions to your SPL2 statement.
    1. (Optional) To process the incoming data before sending it to a destination, add processing commands to the SPL2 statement. You can do this by selecting the plus icon (+) next to Actions and selecting a data processing action, or by typing SPL2 commands and functions directly in the editor. For information and examples of the types of data processing actions that you can define in your pipeline, see the data processing topics in this manual. For an illustrative sketch of a complete statement, see the example after these steps.
    2. Make sure that your pipeline contains one SPL2 statement only. Do not define multiple SPL2 statements in the same pipeline.

  9. (Optional) Select the Preview Pipeline icon to generate a preview that shows what the sample data looks like when it passes through the pipeline.
  10. To save your pipeline, do the following:
    1. Select Save pipeline.
    2. In the Name field, enter a name for your pipeline.
    3. (Optional) In the Description field, enter a description for your pipeline.
    4. Select Save. The pipeline is now listed on the Pipelines page, and you can now apply it, as needed.
  11. To apply this pipeline, do the following:
    1. Navigate to the Pipelines page.
    2. In the row that lists your pipeline, select the Actions icon, and then select Apply/Remove.
    3. Select the pipelines that you want to apply, and then select Save. It can take a few minutes to finish applying your pipeline. During this time, all applied pipelines enter the Pending status.
    4. (Optional) To confirm that the Ingest Processor has finished applying your pipeline, navigate to the Ingest Processor page and check if all affected pipelines have returned to the Healthy status.
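
For orientation only, the following sketch suggests what a complete statement might look like in the SPL2 editor after you finish steps 7 and 8. It assumes that the index override from step 7 appears as an eval on the index field, which can differ from what the pipeline builder actually generates, and the source type, index name, and masking expression are placeholders rather than values from this topic.

    // Hypothetical sketch of a finished pipeline statement.
    $pipeline = | from $source
        | where sourcetype == "buttercup:web"                               // placeholder partition condition
        | eval index = "web_events"                                         // placeholder index override (step 7)
        | eval _raw = replace(_raw, "password=\\S+", "password=******")     // placeholder masking action (step 8)
        | into $destination;

Before applying a pipeline like this, use the preview described in step 9 to confirm that the statement behaves as expected against your own sample data.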

Your applied pipelines can now process and route data as specified in the pipeline configuration.

Last modified on 13 March, 2024

This documentation applies to the following versions of Splunk Cloud Platform: 9.1.2308 (latest FedRAMP release), 9.1.2312

