Merge Events
This topic describes how to use the merge_events function in the Splunk Data Stream Processor.
Description
Parses data received from a universal forwarder into a stream of complete events for a Splunk index. Because the universal forwarder does not parse incoming data, you must use this function whenever your pipeline receives data from a universal forwarder. You must use merge_events in conjunction with key_by.
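Conceptually, the function groups incoming records by a key, joins their raw content, and cuts the combined text into complete events wherever the delimiter matches. The following Python sketch illustrates that idea only; the function name, record shape, and grouping logic here are hypothetical stand-ins, not the Data Stream Processor's implementation.

```python
import re
from itertools import groupby

def merge_events(records, key_fn, delimiter, content_field="body", output_field="body"):
    """Toy illustration: group records by key, concatenate their content,
    then split the combined text into events wherever the delimiter matches."""
    out = []
    for key, group in groupby(sorted(records, key=key_fn), key=key_fn):
        combined = "".join(r[content_field] for r in group)
        for event in re.split(delimiter, combined):
            if event:  # drop empty fragments left by trailing delimiters
                out.append({"key": key, output_field: event})
    return out
```

Note how an event split across two raw records ("li" + "ne2") is reassembled once the records are grouped and concatenated, which is why grouping on a key (key_by) must happen first.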
Function Input/Output Schema
- Function Input
GroupedStream<record<K>, record<V>>
- This function accepts records with schema V, grouped on schema K.
- Function Output
collection<record<R>>
- This function outputs a collection of records with a schema R.
Syntax
The required fields are in bold font.
- merge_events
- content=expression<string>
- delimiter=<regular-expression>
- [output=<string>]
- [max-event-size=<long>]
Required arguments
- content
- Expects: expression<string>
- Description: An expression that returns the field whose contents should be broken into individual events.
- UI Example: cast(get("body"), "string");
- delimiter
- Expects: delimiter=<regular-expression>
- Description: The Java regular expression used to break the content into separate events.
- UI Example: /(\n)[0-9]{2}-[0-9]{2}-[0-9]{4}/
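To see what a delimiter like /(\n)[0-9]{2}-[0-9]{2}-[0-9]{4}/ does, consider a payload where each event begins with an MM-DD-YYYY timestamp. The sketch below approximates the split in Python using a lookahead so the timestamp stays at the start of each new event; the sample payload is hypothetical, and this is an approximation of the behavior, not the engine's exact matching semantics.

```python
import re

# Hypothetical multiline payload: each event starts with an MM-DD-YYYY timestamp,
# and continuation lines belong to the preceding event.
raw = "01-15-2020 first event\ncontinuation line\n01-16-2020 second event"

# Approximate the /(\n)[0-9]{2}-[0-9]{2}-[0-9]{4}/ delimiter with a lookahead,
# so the newline is consumed but the timestamp is kept with its event.
events = re.split(r"\n(?=[0-9]{2}-[0-9]{2}-[0-9]{4})", raw)
# events[0] == "01-15-2020 first event\ncontinuation line"
# events[1] == "01-16-2020 second event"
```

The continuation line does not match the delimiter, so it remains attached to the first event rather than starting a new one.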
Optional arguments
- output
- Expects: string
- Description: The name of the output field in the new event.
- Default: body
- UI Example: body
- max-event-size
- Expects: long
- Description: The maximum size, in bytes, of a single merged event. The size cannot exceed 1MB.
- Default: 1000000 (1MB)
- UI Example: 1000000
SPL2 examples
1. This example splits the contents of the body field on newline delimiters and outputs each resulting event in the newfield field.
...| merge_events content=cast(body, "string"), delimiter=/\n/, output=newfield, max-event-size=1000000 |...
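The effect of this pipeline step can be mimicked in plain Python. This is a rough sketch under stated assumptions: the input record, the "newfield" output name, and the size check are illustrative, and the real function operates on grouped streams rather than a single record.

```python
import re

# Hypothetical input record whose body holds several newline-delimited events.
record = {"body": b"event one\nevent two\nevent three"}

# Cast the raw bytes to a string (cf. cast(body, "string") in the SPL2 example).
text = record["body"].decode("utf-8")

MAX_EVENT_SIZE = 1_000_000  # bytes, cf. max-event-size=1000000

# Split on the newline delimiter and emit one record per non-empty event,
# keeping only events within the size limit, under the output field "newfield".
events = [
    {"newfield": e}
    for e in re.split(r"\n", text)
    if e and len(e.encode("utf-8")) <= MAX_EVENT_SIZE
]
```

Each element of events corresponds to one output record of the merge_events step.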
This documentation applies to the following versions of Splunk® Data Stream Processor: 1.1.0