
Transform your data with log processing rules

Note

Customers with a Splunk Log Observer entitlement in Splunk Observability Cloud must transition from Log Observer to Log Observer Connect by December 2023. With Log Observer Connect, you can ingest more logs from a wider variety of data sources, enjoy a more advanced logs pipeline, and expand into security logging. See Splunk Log Observer transition to learn how.

Add value to your raw logs by creating log processing rules, also known as processors, to transform your data or a subset of your data as it arrives. To add more control to processors, you can add filters that determine which logs a processor will be applied to.

Only customers with a Splunk Log Observer entitlement in Splunk Observability Cloud can create or manage log processing rules using the Splunk Log Observer pipeline. Those customers must transition to Log Observer Connect.

After the transition to Log Observer Connect

When you transition to Log Observer Connect, log processing rule functionality changes. At transition, you can continue using existing log processing rules. You can turn your existing log processing rules off and on. However, you cannot create new log processing rules or edit existing rules.

Going forward after the transition to Log Observer Connect, you can process data in the Splunk platform using the following methods:

  • Field extractions: See Build field extractions with the field extractor.

  • Ingest actions: See Use ingest actions to improve the data input process.

  • .conf configuration: See Overview of event processing.

  • Edge Processor: See About the Edge Processor solution.

  • Data Stream Processor: See Use the Data Stream Processor.

Prepackaged processing rules

Prepackaged processing rules appear at the beginning of the list of processing rules, and have a lock icon. These prepackaged processing rules always execute before any processing rules you define. You can’t modify or reorder prepackaged processing rules.

One example of a prepackaged processing rule is the Level to severity attribute remapper.

Splunk Observability Cloud includes prepackaged processing rules for Kubernetes and Cassandra.

Observability Cloud provides three types of log processing rules: field extraction processors, field copy processors, and field redaction processors.

Order of execution of logs pipeline rules

Logs pipeline rules execute in the following order:

  1. All log processing rules (field extraction, field copy, and field redaction processors)

  2. All log metricization rules

  3. All infinite logging rules

Because log processing rules execute first, you can create field extraction rules, then use the resulting fields in log metricization rules or infinite logging rules or both. For more information, see Sequence of logs pipeline rules.
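The ordering above can be sketched as a simple loop over the three rule lists. The rule functions and log structure below are illustrative assumptions, not Splunk internals: the point is only that processing rules run first, so later stages can see the fields they produce.

```python
# Illustrative sketch of the pipeline ordering described above.
# The rule callables and the dict-based log record are hypothetical.

def apply_pipeline(log, processing_rules, metricization_rules, infinite_logging_rules):
    # 1. All log processing rules (extraction, copy, redaction) run first,
    #    enriching the log record in place.
    for rule in processing_rules:
        log = rule(log)
    # 2. Metricization rules observe the now-enriched log.
    metrics = [rule(log) for rule in metricization_rules]
    # 3. Infinite logging rules make archival decisions last.
    archived = [rule(log) for rule in infinite_logging_rules]
    return log, metrics, archived

# Example: an extraction rule adds a field that a metricization rule then uses.
extract_status = lambda log: {**log, "status": 200}
count_by_status = lambda log: ("requests_by_status", log.get("status"))

log, metrics, _ = apply_pipeline({"_raw": "GET /metrics 200"},
                                 [extract_status], [count_by_status], [])
print(metrics)  # [('requests_by_status', 200)]
```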

Field extraction processors

Field extraction lets you find an existing field in your incoming logs and create a processor based on the format of the field’s value.

Field extraction helps you filter, aggregate, and visualize your logs using the extracted fields.

Consider the following raw log record:

10.4.93.105 - - [04/Feb/2021:16:57:05 +0000] "GET /metrics HTTP/1.1" 200 73810 "-" "Go-http-client/1.1" 23

If you have not defined any processors in your logs pipeline, you can only do a keyword search on the sample log, which searches the _raw field. The following table shows how you can extract fields to define processing rules:

  • IP address (10.4.93.105): extract as IP

  • Event time (04/Feb/2021:16:57:05 +0000): extract as time

  • HTTP method (GET): extract as method

  • Request path (/metrics): extract as path

Creating regex and event time field extractions allows you to filter and aggregate on the fields IP, time, method, and path. This enables you to create queries such as "Display a visual analysis of the number of requests from {IP} broken down by {method}".

Additionally, the extracted fields begin appearing in the fields summary panel along with their top values and other statistics.
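As an illustration of what such a field extraction does, the sketch below applies a hypothetical regex, resembling one the wizard might generate, to the sample log record above. The pattern itself is an assumption; only the field names (IP, time, method, path) come from the table.

```python
import re

# Hypothetical regex for the sample access-log line above; the named
# groups match the field names from the extraction table.
LOG_PATTERN = re.compile(
    r'(?P<IP>\d{1,3}(?:\.\d{1,3}){3}) - - '
    r'\[(?P<time>[^\]]+)\] '
    r'"(?P<method>[A-Z]+) (?P<path>\S+) [^"]*"'
)

sample = ('10.4.93.105 - - [04/Feb/2021:16:57:05 +0000] '
          '"GET /metrics HTTP/1.1" 200 73810 "-" "Go-http-client/1.1" 23')

fields = LOG_PATTERN.match(sample).groupdict()
print(fields)
# {'IP': '10.4.93.105', 'time': '04/Feb/2021:16:57:05 +0000',
#  'method': 'GET', 'path': '/metrics'}
```

Once fields like these exist, a keyword search on _raw is no longer the only option: you can filter and aggregate on each named field directly.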

There are four types of field extraction processors:

  • Regex processors

  • JSON processors

  • Event time processors

  • KV parser processors

To start creating a field extraction, follow these steps:

  1. From the navigation menu, go to Data Configuration > Logs Pipeline Management. A list of existing processors is displayed with the prepackaged processors displaying first.

  2. Click New Processing Rule.

    Alternatively, you can launch the processor wizard from Log Observer. To do this, click into a log in the Logs table. The Log Details panel appears on the right. Click a field value then select Extract field. This takes you to Define Processor, the second step of the processor wizard. Skip to step 7.

  3. Select Field Extraction as the processor type, then click Continue. This takes you to Select sample, the first step in the processor wizard.

  4. To narrow your search for a log that contains the field you want to extract, you can select a time from the time picker or click Add Filter and add keywords or fields.

  5. Click the log containing the field you want. A list of fields and values appears below the log line.

  6. Click Use as sample next to the field you want to extract, then click Next. This takes you to Define Processor, the second step of the processor wizard.

  7. Select the extraction processor type that you want to use.

  8. From here, follow the steps to create the extraction processor type you selected:

Create a Regex processor

The regular expression workspace lets you extract fields from your data and then create a new processor using regex. Pipeline Management makes suggestions to help you write the appropriate regex for your processor. You can modify the regex within the processor wizard.

To create a regex processor, follow these steps:

  1. Highlight the value of the field you want to extract in your sample and select Extract field from the drop-down menu.

  2. Click into the field name box and enter a name for the field you selected. The default name is Field1. Results display in a table.

  3. Click Edit regex below the field name box if you want to modify the regex that the processor has automatically generated to create this rule based on your field name and value.

  4. Preview your rule in the table to ensure that the correct fields are extracted.

  5. To apply your new rule to only a subset of incoming logs, add filters to the content control bar. The new rule will apply only to logs matching this filter.

  6. In step 3 of the processor wizard entitled Name, Save, and Review, give your new rule a name and description.

  7. Review your configuration choices, then click Save. Your processor defaults to Active and immediately begins processing incoming logs.

  8. To see your new processor, go to Data Configuration > Logs Pipeline Management, expand the Processing Rules section, and find it in the list. You can reorder, edit, or delete all processors except those that are prepackaged (shown with a lock). To disable your processor, click Inactive.

Create a JSON processor

To create a JSON processor, follow these steps:

  1. To apply your new rule to only a subset of incoming logs, click Add Filter and add a keyword or field. Pipeline Management applies the new processor only to log events that match this filter.

  2. Preview your rule to ensure that Pipeline Management is extracting the correct field values.

  3. If you see the correct field values in the results table, click Next. Otherwise, adjust your filter.

  4. Add a name and description for your new rule, then click Save. Your processor defaults to Active and immediately begins processing incoming logs.

  5. To see your new processor, go to Data Configuration > Logs Pipeline Management, expand the Processing Rules section, and find it in the list. You can reorder, edit, or delete all processors except those that are prepackaged (shown with a lock). To disable your processor, click Inactive.
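The essence of a JSON processor can be sketched as follows. This is a minimal illustration of the idea, assuming incoming events whose _raw body is a JSON object whose keys become top-level fields; it is not Splunk's implementation.

```python
import json

# Minimal sketch of JSON field extraction: parse _raw as JSON and
# promote its keys to top-level fields. Non-JSON events pass through.
def extract_json_fields(log):
    try:
        parsed = json.loads(log["_raw"])
    except (json.JSONDecodeError, KeyError, TypeError):
        return log  # leave non-JSON events untouched
    if not isinstance(parsed, dict):
        return log
    return {**log, **parsed}

event = {"_raw": '{"level": "error", "service": "checkout"}'}
print(extract_json_fields(event)["service"])  # checkout
```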

Create an event time processor

To create an event time processor, follow these steps:

  1. Select a time format from the drop-down list. The wizard looks for the selected format within your sample.

  2. From the matches you see, select the time when the sample event occurred, then click Next.

  3. Add filters to the content control bar to define a matching condition, then click Next. Pipeline Management only applies the new processor to log events that match this filter.

  4. Give your new rule a name and description.

  5. Review your configuration choices, then click Save. Your processor defaults to Active and immediately begins processing incoming logs.

  6. To see your new processor, go to Data Configuration > Logs Pipeline Management, expand the Processing Rules section, and find it in the list. You can reorder, edit, or delete all processors except those that are prepackaged (shown with a lock). To disable your processor, click Inactive.
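The time-format matching in step 1 works like pattern-based timestamp parsing. The sketch below assumes a strptime-style format string; the actual format list the wizard offers is Splunk's own, so treat this only as an analogy using the sample log's timestamp.

```python
from datetime import datetime

# Illustrative event-time parse: the chosen format pattern is matched
# against the timestamp found in the sample event.
fmt = "%d/%b/%Y:%H:%M:%S %z"   # matches e.g. 04/Feb/2021:16:57:05 +0000
event_time = datetime.strptime("04/Feb/2021:16:57:05 +0000", fmt)
print(event_time.isoformat())  # 2021-02-04T16:57:05+00:00
```

A format that does not match the sample produces no candidates, which is why the wizard asks you to pick from the matches it finds.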

Create a KV parser processor

A KV parser processor is a rule that parses key-value (KV) pairs. To create a KV parser processor, follow these steps:

  1. To apply your new rule to only a subset of incoming logs, click Add Filter then add a keyword or field. The new rule will apply only to logs matching this filter.

  2. Preview your rule to ensure that Pipeline Management is extracting the correct field values.

  3. If you see the correct field values in the results table, click Next. Otherwise, adjust your filter.

  4. Add a name and description for your new rule, then click Save. Your processor defaults to Active and immediately begins processing incoming logs.

  5. To see your new processor, go to Data Configuration > Logs Pipeline Management, expand the Processing Rules section, and find it in the list. You can reorder, edit, or delete all processors except those that are prepackaged (shown with a lock). To disable your processor, click Inactive.
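What a KV parser does can be sketched in a few lines. This is a simplified illustration assuming space-separated key=value tokens; real KV parsers also handle quoting and alternative delimiters.

```python
# Minimal sketch of key-value parsing: split on whitespace and treat
# each "key=value" token as a field. Quoting is deliberately ignored.
def parse_kv(raw):
    fields = {}
    for token in raw.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

print(parse_kv("user=alice status=200 path=/metrics"))
# {'user': 'alice', 'status': '200', 'path': '/metrics'}
```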

Field copy processors

Field copy processors let you define a new relationship between new or existing fields. For example, you can copy fields to OpenTelemetry mappings to help power your Related Content suggestions.

To create a field copy processor, follow these steps:

  1. From the navigation menu, go to Data Configuration > Logs Pipeline Management.

  2. Click New Processing Rule.

  3. Select Field Copy, then click Continue.

  4. Enter a target field in the first text box. You can choose from available extracted fields in the drop-down list.

  5. In the second text box, choose a field to which you want to map your target field. The drop-down list options suggest OpenTelemetry mappings, which help power your Related Content suggestions.

  6. If you want to create multiple mappings, click + Add another field copying rule and repeat steps 4 and 5; otherwise, click Next.

  7. To apply your new rule to only a subset of incoming logs, add filters to the content control bar. The new rule is applied only to logs matching this filter. If you do not add a filter, the rule is applied to all incoming log events.

  8. Preview your rule to ensure that Pipeline Management is extracting the correct field values, then click Next.

  9. Give your new rule a name and description, then click Save. Your processor defaults to Active and immediately begins processing incoming logs.

  10. To see your new processor, go to Data Configuration > Logs Pipeline Management, expand the Processing Rules section, and find it in the list. You can reorder, edit, or delete all processors except those that are prepackaged (shown with a lock). To disable your processor, click Inactive.
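Conceptually, a field copy duplicates a source value under a new name while keeping the original, as in the sketch below. The field names, including the OpenTelemetry-style attribute, are illustrative assumptions, not an official mapping list.

```python
# Sketch of a field copy: the source value is duplicated under the
# target name, and the original field is preserved.
def copy_field(log, source, target):
    if source in log:
        log = {**log, target: log[source]}
    return log

log = {"pod": "checkout-7d4f"}
log = copy_field(log, "pod", "k8s.pod.name")  # hypothetical OTel-style name
print(log)  # {'pod': 'checkout-7d4f', 'k8s.pod.name': 'checkout-7d4f'}
```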

Field redaction processors

Field redaction lets you mask data, including personally identifiable information.

To create a field redaction processor, follow these steps:

  1. From the navigation menu, go to Data Configuration > Logs Pipeline Management.

  2. Click New Processing Rule.

  3. Select Field Redaction, then click Continue. This takes you to the first step in the processor wizard, Select Sample.

  4. To find a log that contains the field you want to redact, add filters to the content control bar until the Logs table displays a log with the desired field.

  5. Click the log containing the field you want. A list of fields and values appears below the log line.

  6. Click Use as sample next to the field you want to redact, then click Next. This takes you to Define Processor, the second step of the processor wizard.

  7. Select whether you want to redact an entire field value or a partial field value. To redact a partial field value, highlight the portion you want to redact. You can edit the regex here.

  8. Define a matching condition. To apply your new rule to only a subset of incoming logs, add filters to the content control bar. The new rule will apply only to logs matching this filter.

  9. Give your new rule a name and description.

  10. Review your configuration choices, then click Save. Your processor defaults to Active and immediately begins processing incoming logs.

  11. To see your new processor, go to Data Configuration > Logs Pipeline Management, expand the Processing Rules section, and find it in the list. You can reorder, edit, or delete all processors except those that are prepackaged (shown with a lock). To disable your processor, click Inactive.

Note

If the field you redacted also appears in _raw, its value remains visible there. To fully mask the value, redact the field in _raw in addition to redacting the field itself.
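The sketch below illustrates regex-based redaction applied to both an extracted field and _raw, in line with the note above. The SSN-like pattern and mask string are illustrative assumptions.

```python
import re

# Illustrative partial-field redaction: mask SSN-shaped values in both
# the named field and _raw, so the value survives in neither place.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(log, field):
    log = dict(log)
    for key in (field, "_raw"):
        if key in log:
            log[key] = SSN.sub("***-**-****", log[key])
    return log

event = {"ssn": "123-45-6789", "_raw": "user ssn=123-45-6789"}
print(redact(event, "ssn"))
# {'ssn': '***-**-****', '_raw': 'user ssn=***-**-****'}
```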

Log processing rules limits

An organization can create a total of 128 log processing rules. The 128 rule limit includes the combined sum of field extraction processors, field copy processors, and field redaction processors.