Use templates to create pipelines for Ingest Processor

To help you get started with creating and using pipelines, the Ingest Processor service includes sample pipelines called templates. Templates are Splunk-built pipelines designed for specific data sources and use cases. For example, the Linux Audit template takes linux_audit logs and extracts common fields. Templates include sample data and preconfigured SPL2 statements, so you can use them as a starting point for building custom pipelines that solve specific use cases, or as a reference for learning how to write SPL2.
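
To give a sense of the shape of a template pipeline, here is a minimal hypothetical sketch of an SPL2 statement that extracts one field from raw events. The regular expression and the user field are illustrative placeholders rather than the contents of any actual template:

  $pipeline = | from $source
      | rex field=_raw /user=(?<user>\S+)/
      | into $destination;

The $source and $destination parameters are bound to the partition and destination that you configure for the pipeline, as described later in this topic.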

To view a list of the available pipeline templates, log in to your tenant, navigate to the Pipelines page, and then select Templates. Available templates include the following:

Data source | Pipeline template name | Description | Edge Processor | Ingest Processor
Cisco ASA syslog | Cisco ASA syslog data: Extract and filter cisco asa syslog data | Take Cisco ASA syslog message data and filter it. This template also automatically removes the header information from messages, which reduces the message size by 10%. This template will not filter messages with a syslog message ID of 430003. | Yes | Yes
Generic template to get started | Generic data: De-identify Personally Identifiable Information | This template de-identifies Personally Identifiable Information (PII) from patient data. | Yes | Yes
Generic template to get started | Generic data: Mask IP addresses from a specific range | This template masks IP addresses based on a specified CIDR range. | Yes | Yes
Generic template to get started | Generic data: Route 'root' user events to special index | This template routes events related to the "root" user to a special index. | Yes | Yes
JSON | JSON data: Generate metrics from log data | Take pre-configured JSON data to show how the logs_to_metrics function can be used to convert logs to metrics. | No | Yes
Palo Alto | Palo Alto Network logs: Reduce log size | Reduce the size of Palo Alto Network logs by removing unnecessary fields. Then, extract recommended event fields. | Yes | Yes
Palo Alto | Palo Alto Networks PAN-OS syslog data: Extract fields and classification of Palo Alto logs | Take Palo Alto Networks syslog message data and set the sourcetypes and indexes based on the message text. This pipeline also automatically removes the header information from messages, which reduces the message size by 10%. | Yes | Yes
Palo Alto | Palo Alto Network traffic logs: Generate metrics from logs | Generate metrics with dimensions from Palo Alto Network traffic logs, and then route the metrics and the original logs to two different destinations. | No | Yes
Kubernetes | Prometheus-formatted Kubernetes logs: Extract fields and generate metrics | Generate metrics with dimensions from Prometheus-formatted Kubernetes logs, and then route the metrics and the original logs to two different destinations. | No | Yes
Syslog | Syslog data: Extract fields and filter for systemd logs | Take syslog data and filter it for systemd events. | Yes | Yes
Syslog | Syslog data: Mask IP addresses from hostname field | Take syslog data and mask IP addresses from the hostname field. | Yes | Yes
Nix | UNIX and Linux bandwidth logs: Reduce log size and convert to TSV format | Reduce the size of 'bandwidth' logs emitted by the Splunk Add-on for Unix and Linux by removing unnecessary data. Then, convert the logs into tab-separated values (TSV) format while preserving compatibility with the Splunk Common Information Model (CIM). | Yes | Yes
Nix | UNIX and Linux cpu logs: Reduce log size and convert to TSV format | Reduce the size of 'cpu' logs emitted by the Splunk Add-on for Unix and Linux by removing unnecessary data. Then, convert the logs into tab-separated values (TSV) format while preserving compatibility with the Splunk Common Information Model (CIM). | Yes | Yes
Nix | UNIX and Linux df logs: Reduce log size and convert to TSV format | Reduce the size of 'df' logs emitted by the Splunk Add-on for Unix and Linux by removing unnecessary data. The original tab-separated values (TSV) format of the logs and compatibility with the Splunk Common Information Model (CIM) are both preserved. | Yes | Yes
Nix | UNIX and Linux hardware logs: Reduce log size and convert to tab-separated key-value pair format | Reduce the size of 'hardware' logs emitted by the Splunk Add-on for Unix and Linux by removing unnecessary data. Then, convert the logs into tab-separated key-value pair format while preserving compatibility with the Splunk Common Information Model (CIM). | Yes | Yes
Nix | UNIX and Linux interfaces logs: Reduce log size and convert to TSV format | Reduce the size of 'interfaces' logs emitted by the Splunk Add-on for Unix and Linux by removing unnecessary data. Then, convert the logs into tab-separated values (TSV) format while preserving compatibility with the Splunk Common Information Model (CIM). | Yes | Yes
Nix | UNIX and Linux iostat logs: Reduce log size and convert to TSV format | Reduce the size of 'iostat' logs emitted by the Splunk Add-on for Unix and Linux by removing unnecessary data. Then, convert the logs into tab-separated values (TSV) format while preserving compatibility with the Splunk Common Information Model (CIM). | Yes | Yes
Nix | UNIX and Linux lastlog logs: Reduce log size and convert to TSV format | Reduce the size of 'lastlog' logs emitted by the Splunk Add-on for Unix and Linux by converting the logs into tab-separated values (TSV) format while preserving compatibility with the Splunk Common Information Model (CIM). | Yes | Yes
Nix | UNIX and Linux lsof logs: Reduce log size and convert to TSV format | Reduce the size of 'lsof' logs emitted by the Splunk Add-on for Unix and Linux by removing unnecessary data. Then, convert the logs into tab-separated values (TSV) format while preserving compatibility with the Splunk Common Information Model (CIM). | Yes | Yes
Nix | UNIX and Linux netstat logs: Reduce log size and convert to TSV format | Reduce the size of 'netstat' logs emitted by the Splunk Add-on for Unix and Linux by removing unnecessary data. Then, convert the logs into tab-separated values (TSV) format while preserving compatibility with the Splunk Common Information Model (CIM). | Yes | Yes
Nix | UNIX and Linux package logs: Reduce log size and convert to TSV format | Reduce the size of 'package' logs emitted by the Splunk Add-on for Unix and Linux by removing unnecessary data. Then, convert the logs into tab-separated values (TSV) format while preserving compatibility with the Splunk Common Information Model (CIM). | Yes | Yes
Nix | UNIX and Linux ps logs: Reduce log size and convert to TSV format | Reduce the size of 'ps' logs emitted by the Splunk Add-on for Unix and Linux by removing unnecessary data. Then, convert the logs into tab-separated values (TSV) format while preserving compatibility with the Splunk Common Information Model (CIM). | Yes | Yes
Nix | UNIX and Linux top logs: Reduce log size and convert to TSV format | Reduce the size of 'top' logs emitted by the Splunk Add-on for Unix and Linux by removing unnecessary data. Then, convert the logs into tab-separated values (TSV) format while preserving compatibility with the Splunk Common Information Model (CIM). | Yes | Yes
Nix | UNIX and Linux vmstat logs: Reduce log size and convert to tab-separated key-value pair format | Reduce the size of 'vmstat' logs emitted by the Splunk Add-on for Unix and Linux by removing unnecessary data. Then, convert the logs into tab-separated key-value pair format while preserving compatibility with the Splunk Common Information Model (CIM). | Yes | Yes
Nix | UNIX and Linux process status logs: Generate metrics from logs | Generate metrics with dimensions from UNIX and Linux process logs, and then route the metrics and original logs to two different destinations. | No | Yes
Windows | Windows event logs: Convert logs from XML to JSON | Convert Windows event logs from XML to JSON, reduce the size of the logs by removing unnecessary data, and extract event fields to ensure compatibility with the Splunk Add-on for Microsoft Windows and the Splunk Common Information Model (CIM). | Yes | Yes
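
As an illustration of the kind of processing these templates perform, a masking pipeline in the spirit of the Generic data: Mask IP addresses from a specific range template might overwrite IP addresses in the event body. The following is a simplified sketch with a placeholder pattern and replacement value; the actual template matches addresses against a specified CIDR range, and its SPL2 differs:

  $pipeline = | from $source
      | eval _raw = replace(_raw, /\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/, "x.x.x.x")
      | into $destination;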

Create a pipeline using a template

To create a pipeline using a template, complete the following steps:

Prerequisites

Before you create a pipeline, make sure that the destination you want the pipeline to send data to is listed on the Destinations page of your tenant. If your destination is not listed on that page, add it to your tenant. See Add or manage destinations for more information.

Steps

  1. Navigate to the Pipelines page, select New pipeline, and then select Ingest Processor pipeline.
  2. On the Get started page, select the template that you want to use from the list of available templates, and then select Next.
  3. On the Define your pipeline's partition page, do the following:
    1. Select how you want to partition the incoming data that you want to send to your pipeline. You can partition by source type, source, or host.
    2. Enter the conditions for your partition, including the operator and the value. Your pipeline receives and processes only the incoming data that meets these conditions. (The sketch after these steps shows how a partition can appear in a pipeline's SPL2.)
    3. Select Next to confirm the pipeline partition.
  4. On the Add sample data page, do the following. If your template already includes sample data, skip this step.
    1. Enter or upload sample data for generating previews that show how your pipeline processes data. The sample data must contain accurate examples of the values that you want to extract into fields. For example, the following sample events represent purchases made at a store at a particular time:
      E9FF471F36A91031FE5B6D6228674089, 72E0B04464AD6513F6A613AABB04E701, Credit Card, 7.7, 2023-01-13 04:41:00, 2023-01-13 04:45:00, -73.997292, 40.720982, 4532038713619608
      A5D125F5550BE7822FC6EE156E37733A, 08DB3F9FCF01530D6F7E70EB88C3AE5B, Credit Card, 14, 2023-01-13 04:37:00, 2023-01-13 04:47:00, -73.966843, 40.756741, 4539385381557252
      1E65B7E2D1297CF3B2CA87888C05FE43, F9ABCCCC4483152C248634ADE2435CF0, Game Card, 16.5, 2023-01-13 04:26:00, 2023-01-13 04:46:00, -73.956451, 40.771442
    2. Select Next to confirm the sample data that you want to use for your pipeline.
  5. On the Select a metrics destination page, select the name of the destination that you want to send metrics to.
  6. (Optional) If you selected Splunk Metrics store as your metrics destination, specify the name of the target metrics index where you want to send your metrics.
  7. On the Select a data destination page, select the name of the destination that you want to send logs to.
  8. (Optional) If you selected a Splunk platform destination, you can configure index routing:
    1. Select one of the following options in the expanded destinations panel:
      Option | Description
      Default | The pipeline does not route events to a specific index. If the event metadata already specifies an index, then the event is sent to that index. Otherwise, the event is sent to the default index of the Splunk Cloud Platform deployment.
      Specify index for events with no index | The pipeline routes events to your specified index only if the event metadata does not already specify an index.
      Specify index for all events | The pipeline routes all events to your specified index.
    2. If you selected Specify index for events with no index or Specify index for all events, then from the Index name drop-down list, select the name of the index that you want to send your data to.
      If your desired index is not available in the drop-down list, then confirm that the index is configured to be available to the tenant and then refresh the connection between the tenant and the Splunk Cloud Platform deployment. For detailed instructions, see Make more indexes available to the tenant.
  9. If you're sending data to a Splunk Cloud Platform deployment, be aware that the destination index is determined by a precedence order of configurations. See How does Ingest Processor know which index to send data to? for more information. (The sketch after these steps shows how index routing can appear in a pipeline's SPL2.)

  10. Select Done to confirm the data destination.
    The pipeline builder displays the pipeline that you've just created based on the selected template and configuration options. You can review the comments in the pipeline editor to learn more about what the pipeline will do to the sample data.
  11. To save your pipeline, do the following:
    1. Select Save pipeline.
    2. In the Name field, enter a name for your pipeline.
    3. (Optional) In the Description field, enter a description for your pipeline.
    4. Select Save. The pipeline is now listed on the Pipelines page, and you can apply it as needed.
  12. To apply this pipeline, do the following:
    1. Navigate to the Pipelines page.
    2. In the row that lists your pipeline, select the Actions icon, and then select Apply/Remove.
    3. Select the pipelines that you want to apply, and then select Save. It can take a few minutes to finish applying your pipeline. During this time, all applied pipelines enter the Pending status.
    4. (Optional) To confirm that the Ingest Processor has finished applying your pipeline, navigate to the Ingest Processor page and check that all affected pipelines have returned to the Healthy status.
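
The configuration choices you make in the builder end up as ordinary SPL2 in the generated pipeline: the partition from step 3 becomes a filter condition, and the Specify index for all events routing option from step 8 corresponds to setting the index field. The following is a minimal hypothetical sketch rather than the output of any specific template; the sourcetype value and the index name are placeholders, and the exact SPL2 that the pipeline builder generates may differ:

  $pipeline = | from $source
      | where sourcetype == "syslog"
      | eval index = "buttercup_logs"
      | into $destination;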

The pipeline that you applied can now process the data it receives based on the processing instructions defined in the template.

Last modified on 14 April, 2025

This documentation applies to the following versions of Splunk Cloud Platform: 9.1.2308, 9.1.2312, 9.2.2403, 9.2.2406, 9.3.2408 (latest FedRAMP release), 9.3.2411

