Splunk® Enterprise

Getting Data In


Use ingest actions to improve the data input process

Ingest actions is a feature for routing, filtering, and masking data while it is streamed to your indexers. Each data transformation is expressed as a rule. You can apply multiple rules to a data stream, and save the combined rules as a ruleset.

The Ingest Actions page in Splunk Web allows you to dynamically preview and build rules, using sample data.

You can configure ingest actions for these deployment topologies:

  • Indexer clusters. Configure and preview the ruleset from the cluster manager or from a connected search head, which proxies to the cluster manager. You then explicitly deploy the ruleset to the cluster peer nodes.
  • Standalone indexers. Configure, preview, and save the ruleset directly on the indexer. The ruleset is effective immediately.
  • Heavy forwarders via deployment server. Configure the ruleset on a deployment server. The deployment server automatically deploys the ruleset to heavy forwarders configured as deployment clients.
  • Standalone heavy forwarders. Configure and save the ruleset directly on the forwarder. The ruleset is effective immediately.
  • Splunk Cloud Platform. Configure and preview the ruleset from your search head. In the case of the Victoria Experience, the ruleset will be deployed automatically to the indexers. In the case of the Classic Experience, you need to explicitly deploy the ruleset.

Requirements

Indexer cluster

  • All nodes on the indexer cluster must be running Splunk Enterprise for Linux.
  • Requires access to Splunk Web on the cluster manager or on a connected search head as the admin role, or as a member of a role with the list_ingest_rulesets and edit_ingest_rulesets capabilities.

Standalone indexer

  • The indexer must be running Splunk Enterprise for Linux.
  • Requires access to Splunk Web as the admin role, or as a member of a role with the list_ingest_rulesets and edit_ingest_rulesets capabilities.
  • The standalone indexer cannot be configured to also function as a deployment server.

Heavy forwarders managed through a deployment server

  • The heavy forwarders and deployment server must each be running Splunk Enterprise for Linux.
  • Requires access to Splunk Web on the deployment server as the admin role, or as a member of a role with the list_ingest_rulesets and edit_ingest_rulesets capabilities.
  • A maximum of ten heavy forwarders is supported.
  • The deployment server must be dedicated to the ingest actions function. It cannot service any other deployment clients or be used for other types of deployment client configurations.
  • The ingest actions function on a deployment server can be used only to configure ingest actions for the deployment clients. You cannot use the function to create rulesets for use on the deployment server instance itself (for example, if the deployment server also functions in some capacity as a standalone indexer).
  • The heavy forwarders must be preconfigured as deployment clients of the deployment server where the ingest actions configuration occurs. For information on configuring deployment clients, see Configure deployment clients.
  • The Ingest Actions page on the deployment server automatically creates the IngestAction_AutoGenerated server class and assigns that class to the forwarders.
  • If you want the heavy forwarders to send data to an S3 destination, you must configure the S3 destination on each of the heavy forwarders individually, either through the Ingest Actions page on each forwarder or through an outputs.conf file on each forwarder. You cannot configure the destination on the deployment server. To configure the destination on the Ingest Actions page, the heavy forwarders require access to Splunk Web as the admin role, or as a member of a role with the list_ingest_rulesets and edit_ingest_rulesets capabilities.

Standalone heavy forwarder

  • The heavy forwarder must be running Splunk Enterprise for Linux.
  • Requires access to Splunk Web as the admin role, or as a member of a role with the list_ingest_rulesets and edit_ingest_rulesets capabilities.

Splunk Cloud Platform

  • Requires access to Splunk Web on the search head as the sc_admin role, or as a member of a role with the list_ingest_rulesets and edit_ingest_rulesets capabilities.

License implications

Ingest-based licenses: Data that is filtered or routed by the ingest actions feature, such that the data does not get added to an index, does not count against your license.

Workload-based licenses: Ingest actions workloads that don't occur at the indexing tier do not count against your license. For example, workloads that occur at the heavy forwarder tier do not count against your license.

Introduction to rules and rulesets

A rule is a specific type of data transformation. A rule can route, filter, or mask data. Descriptions are provided below. By using multiple rules, you can perform complex modifications to an incoming data source before its data is indexed, or skip indexing of some data entirely.

A ruleset is a set of rules applied to a data source. Only one ruleset per source type is supported. Rules in a ruleset are processed in order.

You create rules through the Ingest Actions page:

  • On indexer clusters, you access the Ingest Actions page on the cluster manager or on a connected search head.
  • For groups of heavy forwarders, you access the Ingest Actions page on a deployment server dedicated to the ingest actions function.
  • On Splunk Cloud Platform, you access the Ingest Actions page on a search head.

Once you create a ruleset you must save it. Depending on where you created the ruleset, the ruleset is either immediately effective or requires an additional deployment step.

Once the ruleset has been deployed, each rule in the ruleset is applied to its matching data stream before the data is indexed.

After a ruleset is applied, the data cannot be reverted to its original form. Changing or disabling an existing ruleset affects only new data. If you want to retain the original data while also modifying some of it, use the clone events feature, described in this topic.

Access and edit the Ingest Actions page

The process of accessing the Ingest Actions page varies slightly depending on the deployment topology.

On indexer clusters

For Splunk Enterprise indexer clusters, you can create a ruleset either on the cluster manager or on a connected search head. In the case of a connected search head, the search head proxies the configuration to the cluster manager. When finished, you then explicitly deploy the ruleset configuration to the set of peer nodes.

Perform these steps:

  1. On the cluster manager or connected search head, select Settings > Data > Ingest Actions.
  2. If routing to S3, add an S3 destination through the Destinations tab.
  3. Through the Rulesets tab:
    1. Provide a ruleset name and description.
    2. In the Event Stream, provide a source type for the data preview.
    3. Add a rule. Descriptions are provided below.
    4. Use the data preview to review the impact of the rule on your data source.
    5. Add additional rules as needed.
    6. Save your rules in the ruleset.
  4. Once the ruleset has been saved, either directly on the cluster manager or through the search head, you must deploy the ruleset to the set of peer nodes. See Deploy a ruleset on an indexer cluster.
  5. Use Splunk Search to validate the changes to your data.

If you edit or delete an existing destination, the peer nodes will undergo a rolling restart when the changes are deployed.
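To validate the changes in step 5, you can search recent data for the affected source type. The index and source type names here are placeholders; substitute your own:

    index=main sourcetype=my_sourcetype earliest=-15m
    | head 20

If a mask rule is in effect, matched strings in the returned events should appear in their replaced form; if a filter rule is in effect, matching events should no longer appear.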

On standalone indexers

For Splunk Enterprise indexers, perform these steps to create a ruleset:

  1. On the indexer, select Settings > Data > Ingest Actions.
  2. If routing to S3, add an S3 destination through the Destinations tab.
  3. Through the Rulesets tab:
    1. Provide a ruleset name and description.
    2. In the Event Stream, provide a source type for the data preview.
    3. Add a rule. Descriptions are provided below.
    4. Use the data preview to review the impact of the rule on your data source.
    5. Add additional rules as needed.
    6. Save your rules in the ruleset. The updates are effective immediately on the indexer.
  4. Use Splunk Search to validate the changes to your data.

If you edit or delete an existing destination, you must restart the instance for the changes to take effect.

On heavy forwarders managed through a deployment server

For Splunk Enterprise heavy forwarders managed through a deployment server, perform these steps to create a ruleset:

  1. On the deployment server, select Settings > Data > Ingest Actions.
  2. If routing to S3, add an S3 destination directly on each heavy forwarder, as described in the note below.
  3. Through the Rulesets tab:
    1. Provide a ruleset name and description.
    2. In the Event Stream, provide a source type for the data preview.
    3. Add a rule. Descriptions are provided below.
    4. Use the data preview to review the impact of the rule on your data source. Currently, you can only preview changes by uploading a log file to the deployment server. You cannot use preview with live data streaming to the heavy forwarders.
    5. Add additional rules as needed.
    6. Save your rules in the ruleset. The deployment server saves the ruleset in the splunk_ingest_actions app for the IngestAction_AutoGenerated server class. It then automatically deploys the app to all members of the IngestAction_AutoGenerated server class, first adding all forwarders to that class, if necessary. The ruleset takes effect immediately.
  4. Use Splunk Search to validate the changes to your data.

If you want the heavy forwarders to send data to an S3 destination, you must configure the destination individually on each heavy forwarder prior to creating the ruleset on the deployment server. Select Settings > Data > Ingest Actions on each heavy forwarder and configure the destination. You can alternatively create the destination in outputs.conf on each forwarder.

If you edit or delete an existing destination, you must restart the forwarder for the changes to take effect.

On standalone heavy forwarders

For Splunk Enterprise heavy forwarders, perform these steps to create a ruleset:

  1. On the heavy forwarder, select Settings > Data > Ingest Actions.
  2. If routing to S3, add an S3 destination through the Destinations tab.
  3. Through the Rulesets tab:
    1. Provide a ruleset name and description.
    2. In the Event Stream, provide a source type for the data preview.
    3. Add a rule. Descriptions are provided below.
    4. Use the data preview to review the impact of the rule on your data source.
    5. Add additional rules as needed.
    6. Save your rules in the ruleset. The updates are effective immediately on the heavy forwarder.
  4. Use Splunk Search to validate the changes to your data.

If you edit or delete an existing destination, you must restart the forwarder for the changes to take effect.

On Splunk Cloud Platform

For Splunk Cloud Platform, perform these steps to create a ruleset:

  1. On the search head, select Settings > Data > Ingest Actions.
  2. If routing to S3, add an S3 destination through the Destinations tab.
  3. Through the Rulesets tab:
    1. Provide a ruleset name and description.
    2. In the Event Stream, provide a source type for the data preview.
    3. Add a rule. Descriptions are provided below.
    4. Use the data preview to review the impact of the rule on your data source.
    5. Add additional rules as needed.
    6. Save your rules in the ruleset. In the case of the Victoria Experience, the ruleset deploys immediately. In the case of the Classic Experience, you must explicitly deploy the ruleset with the Deploy button at the top right of the Ingest Actions page.
  4. Use Splunk Search to validate the changes to your data.

Use the Ingest Actions page

Data preview

Data preview is available when you're building a ruleset using the Ingest Actions page. The data preview uses live indexed data, an uploaded sample file, or copied/pasted event logs to help you define rules. It also estimates the changes a rule will have on the data source. The data preview is only a preview of the rule changes, and does not modify any indexed data.

Selecting Sample retrieves events from the indexers. Selecting Apply applies all the created rules to the events in the preview, and provides an estimate of data volume based on the modifications that the rules make. The All Events tab provides a visual indication of the rule matches. The Affected Events tab provides a total count, and displays the full event for every rule match.

Live data preview is not available on deployment servers. You can, however, use data preview on an uploaded sample file.

Mask with regular expression

Use a masking rule to replace strings of text in your logs. A mask rule is typically applied to fields with unique identifiers, or user names, that are captured through logging.

The mask rule requires you to provide:

  • Match Regular Expression: The field accepts a regular expression, or a simple string to match in the events.
  • Replace Expression: The field accepts a string value you want to use to replace any matches.
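As an illustrative example, the following settings mask user names in events such as "user=jsmith". The field name and replacement string are hypothetical; adjust them to your log format:

    Match Regular Expression:  user=\w+
    Replace Expression:        user=####

With these settings, an event containing "user=jsmith" is indexed as "user=####".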

Filter with regular expression

Use a filtering rule to remove entire events from your logs. A filter rule is typically applied to low-value log events, such as DEBUG messages, log headers, and redundant log messages.

This filter rule requires you to provide:

  • Source Field: Use the drop-down to select the field to match against: _raw, host, index, source, or source type.
  • Drop Events Matching Regular Expression: The field accepts a regular expression, or a simple string to match in the events.

When using a filter rule, the Affected Events tab is a preview of events that will be dropped once the ruleset is deployed. If you add another rule after a filter, the new rule applies only to any remaining, unfiltered events.
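For example, to drop DEBUG messages from raw events, you might configure the rule as follows. The pattern is illustrative; adjust it to your log format:

    Source Field:                             _raw
    Drop Events Matching Regular Expression:  \bDEBUG\b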

Filter with eval expression

Using an eval expression is an alternative to using a regular expression for filtering. In most cases, the eval syntax is easier to read and comprehend, while offering the same functionality as a regular expression.

The eval expression rule does not support ingest-time lookups.

Use a filtering rule to remove entire events from your logs. A filter rule is typically applied to low-value log events, such as DEBUG messages, log headers, and redundant log messages.

This filter rule requires you to provide:

  • Drop Events Matching Eval Expression: Events for which the eval expression evaluates to true are dropped.

When using a filter rule, the Affected Events tab is a preview of events that will be dropped once the ruleset is deployed. If you add another rule after a filter, the new rule applies only to any remaining, unfiltered events.
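As an illustrative example, the following eval expression drops events whose raw text contains DEBUG. The pattern is hypothetical; adjust it to your log format:

    Drop Events Matching Eval Expression:  match(_raw, "DEBUG")

The match() eval function returns true when the regular expression matches the field value.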

Set index

Use a set index rule to specify or change the destination index for an event routing to a Splunk destination. You can optionally filter the events that the rule applies to.

If this rule does not apply to a particular Splunk destination event, that event goes to the index otherwise designated for the event, either the default "main" index or an index specified through the available layered configurations in the Splunk configuration system, for example, through settings in inputs.conf or outputs.conf.

You can either specify a string for the destination index name, or you can set the index based on an eval expression, which allows you to conditionally route to different indexes.

The set index rule includes these settings:

  • Condition: Optionally filter the events that the set index rule applies to.
  • Set index as: Set the index to a string value (for example, "my_index") or use an eval expression to determine the index name based on specified conditions.

The set index rule affects only events routed to the default destination "Splunk Index". It does not affect events that route to S3 destinations.
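For example, the following eval expression conditionally routes error events to a separate index. The index names are placeholders:

    Set index as:  if(match(_raw, "ERROR"), "ops_errors", "ops_general")

Events containing ERROR go to the ops_errors index; all other matching events go to ops_general.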

Route to Destination rule

Use a routing rule to select events, and split or duplicate them between one or more destinations.

This routing rule requires you to provide:

  • Condition: Choose a method to match events for routing. Choose the regex or eval condition to select specific events, or none when you want all events sent to a destination. If a condition is set, only events matching the condition are sent to the destination(s).
  • Immediately send to: By default, the destination is "Splunk Index". Any matching events are placed back into the Splunk Enterprise indexing queue for processing and indexing. The destination rule also supports AWS S3 and other S3-compliant destinations. If more than one destination is chosen, a copy of any matching events is sent to all chosen destinations.
  • Clone events and apply more rules: This toggle clones the event stream. The clone, with the rules currently defined in the ruleset applied, is routed to the specified destination. Any additional rules that you subsequently define apply to the original event stream, which you can route to a second destination through a second Route to Destination rule. As with all rules, the ruleset must be saved and deployed before the destination rules take effect.
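As an illustrative example, a routing rule that sends HTTP 5xx events to an S3 destination might use settings like these. The destination name and pattern are hypothetical, and the destination must already be configured on the Destinations tab:

    Condition:            Regular Expression
    Regular Expression:   status=5\d\d
    Immediately send to:  my_s3_destination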

Routing to AWS S3

Use the AWS S3 destination to write any matching events to a bucket on a remote storage volume.

Configure the AWS S3 remote storage settings before using AWS S3 in a "Route to Destination" rule.

You can configure and validate an S3 destination through the Ingest Actions page under the Destinations tab. Select New Destination and fill out the fields, following the examples provided there.

In the case of heavy forwarders managed through a deployment server, the S3 destination must be configured on each heavy forwarder individually, not on the deployment server.

While Destinations on the Ingest Actions page can handle most common S3 configuration needs, for some advanced configurations, such as encryption at rest with KMS, you might need to edit outputs.conf directly, using the rfs stanza. For example, assuming you are using an EC2 instance with a properly configured IAM role:

[rfs:s3]
path = s3://mybucket/myprefix/
remote.s3.endpoint = https://s3.us-west-2.amazonaws.com
remote.s3.signature_version = v4
remote.s3.supports_versioning = false
remote.s3.encryption = sse-kms
remote.s3.kms.key_id = <key ID from AWS>
remote.s3.kms.auth_region = <auth region>

For a complete list of rfs settings, see Remote File System (RFS) Output. The remote filesystem settings and options for AWS S3 are similar to the SmartStore S3 configuration.

To troubleshoot the AWS S3 remote file system, search the _internal index for events from the RfsOutputProcessor and S3Client components. For example:

index="_internal" sourcetype="splunkd" (ERROR OR WARN) RfsOutputProcessor OR S3Client

Note the following:

  • Only a single, globally configured remote storage location is supported.
  • Index-time fields will not be transferred to S3. This can lead to loss of field names from INDEXED_EXTRACTIONS sources such as W3C and CSV.
  • In the case of a Splunk Cloud Platform deployment, buckets must be in the same region as the deployment.
  • In the case of an indexer cluster, the remote storage configuration must be identical across the indexer cluster peers.
  • The remote file system creates buckets similar to index buckets on the remote storage location. The bucket names include the peer GUID and date.
  • Remember to set appropriate lifecycle policies for your S3 buckets and paths. By default, this data is retained indefinitely unless you remove it.
  • For information on S3 authentication requirements, see SmartStore on S3 security strategies in Managing Indexers and Clusters of Indexers. Ingest actions requirements are similar.

Data Preview for Final Destination

The last rule in every ruleset sends any remaining events along the ingestion pipeline to the indexer for indexing. The rule offers an estimate of the data volume that will be indexed.

If you use the "Route to Destination" rule in your ruleset, this rule might be skipped. For example, if a Route to Destination rule includes "Immediately send to: Splunk Index," the data stream is split at the routing rule, and the matching events are sent to be indexed. In that scenario, the Final Destination rule will display a 0Kb indexed data estimate, despite events being sent for indexing from the routing rule.

Deploy a ruleset on an indexer cluster

You can create a ruleset either on the cluster manager or on a connected search head, which proxies the request to the cluster manager. In either case, you must explicitly deploy the ruleset to the peer nodes.

When you save a ruleset, the system places the ruleset in an ingest-actions-specific app on the cluster manager. You will then be prompted to deploy the ruleset to the peer nodes. You can either deploy immediately, in response to the prompt, or later, through the configuration bundle method on the cluster manager.

Note the following:

  • All rulesets are defined in the same app on the cluster manager node. The app path is: $SPLUNK_HOME/etc/manager-apps/splunk_ingest_actions
  • When you deploy the app with your ruleset, any other configuration bundle changes queued on the cluster manager node will also be deployed. This can include other rulesets that are saved, but might be incomplete.

Deploying a ruleset might cause a rolling restart, if there are other configuration changes queued on the cluster manager node that require a restart.

Interaction with TRANSFORMS

The RULESET setting is similar in behavior to the TRANSFORMS setting in props.conf. There are some additional considerations when using RULESET:

  • If a TRANSFORMS stanza and a RULESET stanza apply to the same source type, the TRANSFORMS is applied first.
  • A source type must be associated with just one RULESET configuration.

Create or modify rulesets only through the Ingest Actions page or the REST endpoint /services/data/ingest/rulesets. Do not create or modify rulesets through the underlying .conf files.
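For reference only, a saved ruleset appears in props.conf as a RULESET setting analogous in form to TRANSFORMS, with each rule backed by a stanza in transforms.conf. The source type, setting name, and stanza name below are hypothetical; manage these settings only through the Ingest Actions page or the REST endpoint:

    [my_sourcetype]
    RULESET-drop_debug = drop_debug_rule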

Differences between RULESET and TRANSFORMS in the context of heavy forwarders

The RULESET setting has a key difference in behavior from the TRANSFORMS setting in the context of a heavy forwarder deployment:

  • TRANSFORMS settings are applied only at the initial, heavy forwarder layer of processing, and not again later with downstream heavy forwarders or indexers.
  • RULESET settings can be applied at every layer of processing. For example, a heavy forwarder can apply a ruleset and then stream the data to an indexer with its own ruleset for that data. In that case, both the heavy forwarder's and the indexer's rulesets will be applied to the data in turn. Similarly, if a heavy forwarder streams data to a second heavy forwarder, which then streams the data onward to the indexer, all three processing layers can apply their own rulesets to the data.
Last modified on 22 September, 2022

This documentation applies to the following versions of Splunk® Enterprise: 9.0.1

