Extract timestamps from event data using an Edge Processor

Event timestamps indicate when each specific event occurred, and allow you to search or visualize events by time in the data destination.

When you use an Edge Processor to process and route data, the way that timestamps are assigned to the events varies depending on the specific combination of data sources and destinations that are involved. In some cases, the data source or destination assigns the event timestamp. In other cases, the Edge Processor assigns the event timestamp, and you might need to configure a pipeline to extract date and time information from the raw data in order to create the event timestamp.

How event timestamps are assigned

The following sections summarize how timestamps are assigned to events for each combination of data sources and destinations that the Edge Processor supports, and provide links to instructions for configuring the timestamp assignment behavior. Refer to the section for the data source that your Edge Processor receives data from:

Heavy forwarder

If your Edge Processor receives the data from a heavy forwarder, then the heavy forwarder assigns event timestamps according to the configuration settings in the props.conf file. This timestamp assignment behavior is not affected by the data destination.

For more information, see props.conf in the Splunk Enterprise Admin Manual and the Configure timestamps chapter in the Splunk Enterprise Getting Data In manual.
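For example, a heavy forwarder might assign timestamps based on a props.conf stanza similar to the following. This is a minimal sketch: the sourcetype name is hypothetical, and the settings must match the actual timestamp layout in your data.

    [my_custom_sourcetype]
    # The timestamp appears at the start of each event
    TIME_PREFIX = ^
    # Matches values such as 2023-01-13 04:41:00
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    # Scan only the first 19 characters of each event for the timestamp
    MAX_TIMESTAMP_LOOKAHEAD = 19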

If these events pass through an Edge Processor pipeline that modifies the _time field, then the modified _time value is used as the event timestamp. As a best practice, to limit the number of configurations that can override each other, do not modify the _time field in Edge Processor pipelines when processing data from a heavy forwarder.

Universal forwarder without INDEXED_EXTRACTIONS

The following list summarizes how event timestamps are assigned when the Edge Processor receives data from a universal forwarder that does not have the INDEXED_EXTRACTIONS property configured in the props.conf file, and then sends the data to a given destination:

  • Splunk platform S2S: The Splunk platform assigns event timestamps according to the configuration settings in the props.conf file. For more information, see props.conf in the Splunk Enterprise Admin Manual and the Configure timestamps chapter in the Splunk Cloud Platform Getting Data In manual.
  • Splunk platform HEC or Amazon S3: By default, the Edge Processor sets a timestamp of "0". You must configure a pipeline to assign event timestamps, or else the Splunk platform uses the current time as the event timestamp. For more information, see Configure a pipeline to assign event timestamps in this topic. The Edge Processor renames the _time field to time when sending events out to the destination.

Universal forwarder with INDEXED_EXTRACTIONS

If your Edge Processor receives the data from a universal forwarder that has the INDEXED_EXTRACTIONS property configured in the props.conf file, then the universal forwarder assigns event timestamps according to the configuration settings in the props.conf file. This timestamp assignment behavior is not affected by the data destination.

For more information, see props.conf in the Splunk Enterprise Admin Manual.
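For example, a universal forwarder that performs indexed extractions on JSON data might assign timestamps based on a props.conf stanza similar to the following. This is a minimal sketch: the sourcetype and field names are hypothetical and must match your own data.

    [my_json_sourcetype]
    INDEXED_EXTRACTIONS = json
    # Read the timestamp from this field in the structured data
    TIMESTAMP_FIELDS = event_time
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z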

If these events pass through an Edge Processor pipeline that modifies the _time field, then the modified _time value is used as the event timestamp. As a best practice, to limit the number of configurations that can override each other, do not modify the _time field in Edge Processor pipelines when processing data from a universal forwarder that has INDEXED_EXTRACTIONS configured.

HTTP client using the services/collector HEC endpoint

The following list summarizes how event timestamps are assigned when the Edge Processor receives data from an HTTP client through the services/collector HTTP Event Collector (HEC) endpoint, and then sends the data to a given destination:

  • Splunk platform S2S: The Splunk platform assigns event timestamps according to the configuration settings in the props.conf file. For more information, see props.conf in the Splunk Enterprise Admin Manual and the Configure timestamps chapter in the Splunk Cloud Platform Getting Data In manual.
  • Splunk platform HEC or Amazon S3: The services/collector HEC endpoint assigns event timestamps based on the time key in the HTTP request body. If the time key is not specified, then the endpoint does not assign timestamps. If the data passes through an Edge Processor pipeline that modifies the _time field, then the modified _time value is used as the event timestamp. If none of these occur, the Splunk platform uses the current time as the event timestamp. For more information, see Using the services/collector endpoint in Edge Processors and Configure a pipeline to assign event timestamps in this topic. The Edge Processor renames the _time field to time when sending events out to the destination.
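For example, an HTTP client can set the time key to a UNIX timestamp in the JSON payload that it sends to the services/collector endpoint. This is a minimal sketch: the host, port, token, source type, and event text are placeholders, and 1673584860 corresponds to 2023-01-13 04:41:00 UTC.

    curl -k "https://<edge_processor_host>:8088/services/collector" \
        -H "Authorization: Splunk <hec_token>" \
        -d '{"time": 1673584860, "sourcetype": "my_sourcetype", "event": "action=login user=alice"}'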

HTTP client using the services/collector/raw HEC endpoint

The following list summarizes how event timestamps are assigned when the Edge Processor receives data from an HTTP client through the services/collector/raw HEC endpoint, and then sends the data to a given destination:

  • Splunk platform S2S: The Splunk platform assigns event timestamps according to the configuration settings in the props.conf file. For more information, see props.conf in the Splunk Enterprise Admin Manual and the Configure timestamps chapter in the Splunk Cloud Platform Getting Data In manual.
  • Splunk platform HEC or Amazon S3: The services/collector/raw HEC endpoint assigns event timestamps based on the time query string parameter. If the time query string parameter is not specified, then the endpoint sets a timestamp of "0". If the data passes through an Edge Processor pipeline that modifies the _time field, then the modified _time value is used as the event timestamp. If none of these occur, the Splunk platform uses the current time as the event timestamp. For more information, see Using the services/collector/raw endpoint in Edge Processors and Configure a pipeline to assign event timestamps in this topic. The Edge Processor renames the _time field to time when sending events out to the destination.
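For example, an HTTP client can supply the timestamp through the time query string parameter when it sends raw event text to the services/collector/raw endpoint. This is a minimal sketch: the host, port, token, and event text are placeholders.

    curl -k "https://<edge_processor_host>:8088/services/collector/raw?time=1673584860" \
        -H "Authorization: Splunk <hec_token>" \
        -d "2023-01-13 04:41:00 action=login user=alice"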

Syslog devices

The following list summarizes how event timestamps are assigned when the Edge Processor receives data from a syslog device and then sends the data to a given destination:

  • Splunk platform S2S: The Splunk platform assigns event timestamps according to the configuration settings in the props.conf file. For more information, see props.conf in the Splunk Enterprise Admin Manual and the Configure timestamps chapter in the Splunk Cloud Platform Getting Data In manual.
  • Splunk platform HEC or Amazon S3: When you configure a port for the Edge Processor to start receiving syslog data, you select an RFC protocol.
      • If your syslog data is compliant with the selected RFC protocol, the Edge Processor assigns event timestamps based on the date and time information found in the data.
      • If your data is not compliant, then the Edge Processor sets the timestamp to "0".
    If the data passes through an Edge Processor pipeline that modifies the _time field, then the modified _time value is used as the event timestamp. If none of these occur, the Splunk platform uses the current time as the event timestamp. For more information, see Configure a pipeline to assign event timestamps in this topic. The Edge Processor renames the _time field to time when sending events out to the destination.
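For example, a message that is compliant with RFC 5424 carries its timestamp in the syslog header, so the Edge Processor can assign the event timestamp directly from it. The host, application, and message text shown here are hypothetical:

    <34>1 2023-01-13T04:41:00.000Z examplehost exampleapp 1234 ID47 - User alice logged in

A message in RFC 3164 format carries a shorter header timestamp that does not include the year or a time zone:

    <34>Jan 13 04:41:00 examplehost exampleapp[1234]: User alice logged in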

Configure a pipeline to assign event timestamps

If your event timestamps are assigned by the Edge Processor instead of the data source or destination, you can configure a pipeline to create these timestamps using date and time information extracted from the raw data.

In order for the timestamps to be valid in the Edge Processor solution and other Splunk software, you must use the strptime() SPL2 function to store them in UNIX time format in a field named _time. For more information about how the _time field works in Splunk software, see _time in the Splunk Enterprise Knowledge Manager Manual and About searching with time in the Splunk Cloud Platform Search Manual.
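For example, if a field named Timestamp_ISO8601 contains values such as 2023-01-13 04:41:00, the following expression stores the equivalent UNIX time in the _time field. The field name and timestamp format here are assumptions that must match your own data; the steps that follow show how the pipeline builder generates this for you.

    | eval _time = strptime(Timestamp_ISO8601, "%Y-%m-%d %H:%M:%S")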

  1. On the Pipelines page, select New pipeline. Follow the on-screen instructions to define a partition, enter sample data, and select a data destination.
    After you complete the on-screen instructions, the pipeline builder displays the SPL2 statement for your pipeline.
  2. To generate a preview of how your pipeline processes data based on the sample data that you provided, select the Preview Pipeline icon.
  3. To extract date and time information from the raw data and store it in an event field, do the following:
    1. Select the plus icon from the Actions section, then select Extract fields from _raw.
    2. In the Regular expression field, specify one or more named capture groups using Regular Expression 2 (RE2) syntax.
      The name of the capture group determines the name of the extracted field, and the matched values determine the values of the extracted field. You can select named capture groups from the Insert from library list or enter named capture groups directly in the field.
      For example, if your raw data contains date and time information formatted like 2023-01-13 04:41:00, you can extract that information into an event field named Timestamp_ISO8601 by selecting Timestamp_ISO8601 from the Insert from library list. The resulting regular expression looks like this:
      (?P<Timestamp_ISO8601>(?:\d\d){1,2}-(?:0?[1-9]|1[0-2])-(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])[T ](?:2[0123]|[01]?[0-9]):?(?:[0-5][0-9])(?::?(?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?))?(?:Z|[+-](?:2[0123]|[01]?[0-9])(?::?(?:[0-5][0-9])))?)
      

      For more information about the supported regular expression syntax, see Regular expression syntax for Edge Processor pipelines.

    3. (Optional) By default, the regular expression matches are case sensitive. To make the matches case insensitive, uncheck the Match case check box.
    4. Use the Events preview pane to validate the results of your regular expression. The events in this pane are based on the last time that you generated a pipeline preview, and the pane highlights the values that match your regular expression for extraction.
    5. When you are satisfied with the events highlighted in the Events preview pane, select Apply to perform the timestamp extraction.

    A rex command is added to the pipeline, and the new field appears in the Fields list.

  4. To convert the extracted timestamp to UNIX time and store it in a field named _time, do the following:
    1. In the preview results panel, hover over the header of the field that you extracted during the previous step, and then select the Options for "<field_name>" icon. Then, select Convert _time from <field_name>.
    2. In the Source timestamp format field, specify the current format of the timestamp using SPL2 time variables. See Using time variables in the SPL2 Search Manual for more information.
      For example, if your extracted timestamps are formatted like 2023-01-13 04:41:00, then set Source timestamp format to %Y-%m-%d %H:%M:%S.
    3. Select Apply.
    An eval command containing the strptime() function is added to the pipeline, and a new field named _time is added to your events. This field stores the timestamp in UNIX time format, but continues to display the timestamp in a human-readable format. This ensures an accurate timestamp value even if you access the data from a different time zone. For a complete example of the resulting SPL2 pipeline statement, see the sample after these steps.

  5. (Optional) After creating the _time field, you can remove the field that you extracted during step 3. To do this, add the following fields command to your SPL2 pipeline statement. Make sure to place this command after your field extraction and conversion expressions.
    | fields - <field_name>
    

    For example, if the name of the field that you extracted during step 3 is Timestamp_ISO8601, then add the following command to your pipeline:

    | fields - Timestamp_ISO8601
    
  6. Save your pipeline, and then apply it to your Edge Processors as needed. For more information, see Apply pipelines to Edge Processors.
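After you complete these steps, the SPL2 statement for your pipeline might look similar to the following. This is a minimal sketch that uses a simplified regular expression, the hypothetical Timestamp_ISO8601 field from the earlier examples, and generic $source and $destination names; the statement that the pipeline builder generates for you can differ.

    $pipeline = | from $source
        | rex field=_raw /(?P<Timestamp_ISO8601>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})/
        | eval _time = strptime(Timestamp_ISO8601, "%Y-%m-%d %H:%M:%S")
        | fields - Timestamp_ISO8601
        | into $destination;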