Splunk® Data Stream Processor

Use the Data Stream Processor



DSP 1.2.0 is impacted by the CVE-2021-44228 and CVE-2021-45046 security vulnerabilities from Apache Log4j. To fix these vulnerabilities, you must upgrade to DSP 1.2.4. See Upgrade the Splunk Data Stream Processor to 1.2.4 for upgrade instructions.

On October 30, 2022, all 1.2.x versions of the Splunk Data Stream Processor will reach their end of support date. See the Splunk Software Support Policy for details.
This documentation does not apply to the most recent version of Splunk® Data Stream Processor. For documentation on the most recent version, go to the latest release.

Adding and updating fields in the Data Stream Processor

The Data Stream Processor allows you to change incoming records in real time, giving you increased flexibility and control over your streaming data. You can add or update fields in your data before sending it to a destination. Follow these guidelines when adding or updating fields. If the fields that you want to work with are part of a larger structure, you might need to extract those field values as a prerequisite. See Working with nested data or Extracting fields in events data.

You can also remove certain fields from your data before sending it to a destination. See Remove unwanted fields from your data.

Adding fields in the Data Stream Processor

To add a new top-level field to your incoming records, use the Eval function. The Eval function adds a new field or updates an existing field in your records. Use the following examples as a guideline for how to add new top-level fields to your data. The following examples assume that your pipeline is streaming records that use the event schema.

Example 1: Adding a new field containing the time that the record passed through the Eval function

In this example, we'll use Eval to add a new top-level field called mystarttimestamp that contains the time when the record passed through the Eval function.

  1. From the UI, click on Build Pipeline and select the Splunk DSP Firehose source function.
  2. Create a new top-level field called mystarttimestamp containing the time that the record passed through the function.
    1. Add an Eval function to the pipeline.
    2. Enter the following expression in the function field:
      mystarttimestamp=time()
  3. (Optional) Verify that you now have a top-level field mystarttimestamp containing the time that the record passed through the function.
    1. Click Start Preview and select the Eval function.
    2. Log in to SCloud.
      ./scloud login
    3. Send a sample record to your pipeline to verify that your function is working as expected.
      ./scloud ingest post-events <<< "Hello World"

You now have a new top-level field in your data, mystarttimestamp, containing the time that the record passed through the Eval function.
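As an illustration only, the record transformation that this Eval expression performs can be sketched in Python. This is not DSP pipeline code, and the record dictionary is a simplified stand-in for the event schema:

```python
import time

# A simplified stand-in for a DSP event record.
record = {"body": "Hello World", "source": "mysource"}

def eval_add_timestamp(record):
    """Mimic the Eval expression mystarttimestamp=time():
    add a top-level field holding the time the record is processed."""
    out = dict(record)  # work on a copy; the incoming record is left unchanged
    out["mystarttimestamp"] = time.time()
    return out

result = eval_add_timestamp(record)
```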

Example 2: Adding a new field environment containing which environment the record is from

In this example, we'll use Eval to add a new top-level field called environment that checks the source field to determine what environment this record was sent from.

Assume that records with the following source content are streaming through your pipeline.

  • The source field from record 1:
     "source": "playground"
  • The source field from record 2:
    "source": "prod"
  • The source field from record 3:
    "source": "test"

You can add a top-level field to your schema to make it more apparent which environment the incoming records are coming from. In this example, we'll add a new top-level field called environment to the streaming records. This top-level field is filled out according to the contents of the source field: if the source field contains playground or production, then the corresponding value is placed in the environment field. If the source field contains a different value, then the value other is placed in the environment field instead.

  1. From the UI, click on Build Pipeline and select the Splunk DSP Firehose source function.
  2. Create a new top-level field called environment which we'll configure to contain the environment that the record is from.
    1. Add an Eval function.
    2. Enter the following expression in the function field:
      environment=if(like(source, "%playground%"), "playground", if(like(source, "%production%"), "production", "other"))
  3. (Optional) Verify that you now have a top-level field environment containing what environment this record was sent from.
    1. Click Start Preview and select the Eval function.
    2. Log in to SCloud.
      ./scloud login
    3. Send a few sample records to your pipeline to verify that your function is working as expected.
      ./scloud ingest post-events --source playground <<< "Hello" 
      ./scloud ingest post-events --source prod <<< "World" 
      ./scloud ingest post-events --source test <<< "Hello, World!" 

You now have a new top-level field in your data, environment, containing the environment that the record was sent from.
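The conditional logic of this Eval expression can be sketched in Python for clarity. This is an illustration only, not DSP code; the like() function with % wildcards behaves like a substring match here:

```python
def assign_environment(record):
    """Mimic the Eval expression:
    environment=if(like(source, "%playground%"), "playground",
                   if(like(source, "%production%"), "production", "other"))"""
    source = record.get("source", "")
    if "playground" in source:
        env = "playground"
    elif "production" in source:
        env = "production"
    else:
        env = "other"
    out = dict(record)
    out["environment"] = env
    return out
```

Note that a source of prod does not match the pattern %production%, so that record falls through to other. If you want prod to count as production, match on %prod% in the expression instead.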

Updating fields in the Data Stream Processor

There are many different ways that you can manipulate fields in the Data Stream Processor, and the specific functions that you need to include in your pipeline vary depending on the type of data that you are working with. The following examples demonstrate just a few ways that you can update fields in standard, unnested data. If you want to learn how to update fields that contain nested data, see Working with nested data.

To update a field in your data, use the Eval function. The Eval function updates an existing field or adds a new field in your records. Use the following examples as a guideline for how to update fields in your data. The following examples assume that your pipeline is streaming records that use the event schema.

Example 1: Updating the body field

In this example, we'll use the Eval function with the concat function to prepend the string price= to the body field.

  1. From the UI, click on Build Pipeline and select the Splunk DSP Firehose source function.
  2. Prepend price= to the existing body field.
    1. Add an Eval function to the pipeline.
    2. Enter the following expression in the function field:
      body=concat("price=", cast(body, "string"))
  3. (Optional) Verify that your body field now has "price=" prepended to it.
    1. Click Start Preview and select the Eval function.
    2. Log in to SCloud.
      ./scloud login
    3. Send a sample record to your pipeline to verify that your function is working as expected.
      ./scloud ingest post-events <<< '10.5'

The contents of the body field are now prepended with the string price=.
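For illustration only, the string manipulation that this Eval expression performs can be sketched in Python (not DSP code; the record dictionary is a simplified stand-in for the event schema):

```python
def prepend_price(record):
    """Mimic body=concat("price=", cast(body, "string")):
    cast the body to a string, then prepend the literal price=."""
    out = dict(record)
    out["body"] = "price=" + str(out["body"])
    return out

updated = prepend_price({"body": 10.5})
```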

Example 2: Updating the source_type field

In this example, we'll use the Eval function to update the source_type field in your data.

If you do not add a sourcetype to your data and you send your data to Splunk Enterprise, your data is automatically indexed with the default httpevent sourcetype.

Set a sourcetype on your data with the Eval streaming function.

  1. From the Data Pipelines editor, click on the + icon and add the Eval function to your pipeline.
  2. In the Eval function, type the following. This sets your source_type field to buttercup_sales.
    source_type="buttercup_sales"
    
  3. (Optional) Verify that the source_type field was updated.
    1. Click Start Preview and select the Eval function.
    2. Log in to SCloud.
      ./scloud login
    3. Send a sample record to your pipeline to verify that your function is working as expected.
      ./scloud ingest post-events <<< 'Hello World'
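This Eval expression is a plain literal assignment. Sketched in Python for illustration only (not DSP code, with a simplified record layout):

```python
def set_source_type(record):
    """Mimic the Eval expression source_type="buttercup_sales":
    assign a literal string, overwriting any existing source_type value."""
    out = dict(record)
    out["source_type"] = "buttercup_sales"
    return out
```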

Removing fields

For information on how to remove unwanted fields from your data, see Remove unwanted fields from your data.

See also

Functions
Eval
Fields
String manipulation
Related topics
Working with metrics data
Extracting fields in events data
SCloud
data types
Last modified on 11 March, 2022

This documentation applies to the following versions of Splunk® Data Stream Processor: 1.2.0, 1.2.1-patch02, 1.2.1, 1.2.2-patch02, 1.2.4, 1.2.5

