Ingest Processor is currently released as a preview only and is not officially supported. See Splunk General Terms for more information. For any questions on this preview, please reach out to ingestprocessor@splunk.com.
Generate logs into metrics using Ingest Processor
You can create a pipeline that generates metrics from your log data. Converting logs into metrics surfaces information from your data in a more visible way and lets you configure further data processing based on those metrics. You can then send the converted subset of your data to supported destinations, including Splunk Cloud Platform indexers and Splunk Observability Cloud.
Configuring a pipeline to generate metrics from logs involves the following:
- Specifying the partition of the incoming data that the pipeline receives and a destination that the pipeline sends data to. See Partitions for more information.
- Defining the metrics generation fields by including a thru command in the SPL2 statement of the pipeline. See thru command overview in the SPL2 Search Reference for more information.
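The thru command tees a copy of the data into a side branch where metrics are generated, while the original events continue down the main pipeline. The following is a minimal sketch, assuming a hypothetical log field named metric_value and placeholder destination names; the exact argument values depend on your configuration:

```
$pipeline = | from $source
    | thru [
        | logs_to_metrics name="response_time" metrictype="gauge" value=metric_value time=_time
        | into $metrics_destination
    ]
    | into $destination;
```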
Prerequisites
Before creating a pipeline, confirm the following:
- The destination that you want the pipeline to send data to is listed on the Destinations page of your tenant. If your destination is not listed, you must add that destination to your tenant. To set up Splunk Observability Cloud as a destination, perform the following steps:
- Create a Splunk Observability Cloud token.
- Create a Splunk Observability Cloud connection dataset.
- Create a Splunk Observability Cloud context dataset.
- Review Metrics in Splunk Observability Cloud for more information on how Splunk Observability Cloud processes and displays metrics.
Steps
Perform the following steps to create a pipeline that converts logs to metrics:
- From the home page on Splunk Cloud Platform, navigate to the Pipelines page and select New pipeline, then Ingest Processor pipeline.
- On the Get started page, select Blank pipeline, then Next.
- On the Define your pipeline's partition page, define the subset of incoming data that you want this pipeline to process:
- Select how you want to partition the incoming data that you want to send to your pipeline. You can partition by source type, source, and host.
- Enter the conditions for your partition, including the operator and the value. Your pipeline receives and processes the incoming data that meets these conditions.
- Select Next to confirm the pipeline partition.
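Behind the scenes, a partition typically translates into a filtering condition in the pipeline's SPL2 statement. As a sketch, assuming a partition by source type with a hypothetical sourcetype value, the resulting pipeline might begin like this:

```
$pipeline = | from $source
    | where sourcetype == "buttercup:sales"
    | into $destination;
```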
- On the Add sample data page, do the following:
- Enter or upload sample data to generate a preview of how your pipeline processes data. The sample data must contain accurate examples of the values that you want to convert into metrics.
- Select Next to confirm the sample data.
- On the Select a metrics destination page, select the name of the destination that you want to send your metrics to.
- On the Select destination dataset page, select the name of the destination that you want to send logs to, then do the following:
- If you selected a Splunk platform S2S or Splunk platform HEC destination, select Next.
- If you selected another type of destination, select Done and skip the next step.
- (Optional) If you're sending data to a Splunk platform deployment, you can specify a target index:
- In the Index name field, select the name of the index that you want to send your data to.
- (Optional) In some cases, incoming data already specifies a target index. If you want your Index name selection to override previous target index settings, then select the Overwrite previously specified target index check box.
- Select Done.
- (Optional) To generate a preview of how your pipeline processes data based on the sample data that you provided, select the Preview Pipeline icon (). Use the preview results to validate your pipeline configuration.
- On the SPL2 editor page, add processing commands to your SPL2 statement as needed. You can do this by selecting the plus icon () next to Actions and selecting a data processing action, or by typing SPL2 commands and functions directly in the editor.
- Select the plus icon () next to Actions, then select Create metricization rule.
- Complete the following fields:
- Fill in a name for your metric in the Metric name field.
- Choose the type of your metric in Metric Type.
- Select the field that contains the value of your metric in Field.
- Select the field that contains the timestamp of your metric in Time field.
- Select what field(s) you want your metrics to be grouped by in Field dimensions.
- In the Metrics preview panel, select the Rollup function that corresponds to your metric type. Each metric type has a default rollup in Splunk Observability Cloud, and your selection in the Ingest Processor must match that default. See Metric types in the Splunk Observability Cloud documentation for more information on metric types and their corresponding default rollups.
Each of these fields corresponds to an argument in your SPL2 statement, as described in the following table.

| SPL2 argument | Description |
| --- | --- |
| name | The name of the metric. |
| metrictype | Determines how the metric is interpreted and displayed in Splunk Observability Cloud. Choose from Gauge, Counter, and Cumulative Counter. count and sum aggregations should always use the counter metric type; average, min, and max aggregations should always use the gauge metric type. See Metric types in the Splunk Observability Cloud documentation for more information. |
| value | The value of the metric. |
| time | The Unix time, in epoch seconds, of the metric. |
| dimensions | Zero or more dimensions associated with the metric. If the metric has no dimensions, this argument can be omitted. |
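Mapped to SPL2, each metricization rule becomes one logs_to_metrics call inside a thru block: name is the Metric name, metrictype is the Metric Type, value is the Field, time is the Time field, and dimensions holds the Field dimensions. The following is a sketch with hypothetical field and metric names:

```
| thru [
    | logs_to_metrics name="request_latency" metrictype="gauge" value=latency_ms time=_time dimensions={"host": host}
    | into $metrics_destination
]
```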
- Select Apply to confirm your metrics definitions.
- Repeat steps 10-12 to generate multiple metrics in your pipeline if desired. A pipeline that generates multiple metrics looks like the following example:
$pipeline = | from $source
    | thru [
        | logs_to_metrics name="mymetric" metrictype="metric_type" value=metric_value time=_time dimensions={"foo": bar}
        | into $metrics_destination
    ]
    | thru [
        | logs_to_metrics name="mymetric2" metrictype="metric_type2" value=metric_value2 time=_time dimensions={"foo": bar}
        | into $metrics_destination
    ]
    | into $destination;
- To save your pipeline, do the following:
- Select Save pipeline.
- In the Name field, enter a name for your pipeline.
- (Optional) In the Description field, enter a description for your pipeline.
- Select Save. The pipeline is now listed on the Pipelines page, and you can apply it as needed.
- To apply this pipeline, do the following:
- Navigate to the Pipelines page.
- In the row that lists your pipeline, select the Actions icon (), and then select Apply. It can take a few minutes to finish applying your pipeline. During this time, all applied pipelines enter the Pending status.
- (Optional) To confirm that Ingest Processor has finished applying your pipeline, navigate to the Ingest Processor page.
If you're sending data to a Splunk platform deployment, be aware that the destination index is determined by a precedence order of configurations.
This documentation applies to the following versions of Splunk Cloud Platform™: 9.1.2308 (latest FedRAMP release), 9.1.2312