All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator that has been announced end-of-life. In DSP 1.4.0, we have replaced Gravity with an alternative component. Therefore, after July 1, 2023 we will no longer provide support for versions of DSP prior to DSP 1.4.0. We advise all customers to upgrade to DSP 1.4.0 to continue receiving full product support from Splunk.
Key_by
This topic describes how to use the Key_by function in the Splunk Data Stream Processor.
Description
Groups a stream of records by one or more fields and returns a grouped stream. Because Key_by outputs a GroupedStream, this function must be used in conjunction with Merge Events; see example 3 in the SPL2 examples below for a sketch of that pairing. This function does not show metrics in the UI.
Syntax
- key_by
- keys=<fields>
Function Input/Output Schema
- Function Input
collection<record<R>>
- This function takes in collections of records with schema R.
- Function Output
GroupedStream<record<K>, record<V>>
- This function outputs records with schema V, grouped on schema K.
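For example, assuming that the incoming records have a schema R that includes source and host fields (field names chosen only for illustration), grouping on keys=[source, host] would produce, in the notation above:
GroupedStream<record<source, host>, record<R>>
That is, the key schema K contains only the grouping fields, and the value schema V is the original record schema R.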
Required arguments
- keys
- Syntax: <fields>
- Description: The names of the fields to group records by.
- Example in Canvas View: source, host
SPL2 examples
The following examples of common use cases assume that you are working in the SPL View, where you can write the function by using the syntax shown in each use case.
1. Group records by source
...| key_by keys=source |...
2. Group records by source and host
...| key_by keys=[source, host] |...
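3. Group records by source and host, then merge each group
Because Key_by must be paired with Merge Events, the following sketch shows the two functions together. It assumes that merge_events is the SPL2 name of the Merge Events function, and <merge-events-arguments> is a placeholder for that function's configuration, which is documented in the Merge Events topic rather than here.
...| key_by keys=[source, host] | merge_events <merge-events-arguments> |...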
This documentation applies to the following versions of Splunk® Data Stream Processor: 1.1.0, 1.2.0, 1.2.1-patch02, 1.2.1, 1.2.2-patch02, 1.2.4, 1.2.5, 1.3.0, 1.3.1, 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5