Generate metrics from logs using Ingest Processor
You can create a pipeline that generates metrics from the logs in your data. Generating metrics from logs lets you transform information from your data into a form that is easier to chart and monitor, and configure further data processing based on those metrics. You can then send the converted subset of your data to supported destinations, including Splunk Cloud indexers and Splunk Observability Cloud.
Configuring a pipeline to generate metrics from logs involves the following:
- Specifying the partition of the incoming data that the pipeline receives and a destination that the pipeline sends data to. See Partitions for more information.
- Defining the metrics generation fields by including a thru command in the SPL2 statement of the pipeline. See thru command overview in the SPL2 Search Reference for more information.
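For example, a minimal logs-to-metrics pipeline combines these two elements: the partition determines which events arrive through $source, and a thru command branches a copy of those events into the logs_to_metrics function. The following sketch assumes a hypothetical numeric field named response_time in the incoming events:

$pipeline = | from $source
    | thru [
        // Build a gauge metric from the hypothetical response_time field
        | logs_to_metrics name="response_time" metrictype="gauge" value=response_time time=_time
        | into $metrics_destination
    ]
    | into $destination;

The branched copy becomes a metric and is sent to $metrics_destination, while the original log events continue down the main pipeline to $destination.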
Reference
To help you get started on creating and using pipelines, the Ingest Processor solution includes sample pipelines called templates. Templates are Splunk-built pipelines that are designed to work with specific data sources and use cases, such as generating metrics from logs. Templates include sample data and preconfigured SPL2 statements, so you can use them as a starting point to build custom pipelines to solve specific use cases or as a reference to learn how to write SPL2 to build pipelines.
To view a list of the available pipeline templates, log in to your tenant, navigate to the Pipelines page, and then select Templates. This list of template pipelines includes sample metrics generation pipelines for Palo Alto Networks traffic logs, Prometheus formatted Kubernetes logs, and UNIX and Linux process status logs.
See Use templates to create pipelines for Ingest Processor for instructions on how to build a pipeline from a template.
Prerequisites
Before creating a pipeline, confirm the following:
- The destination that you want the pipeline to send data to is listed on the Destinations page of your tenant. If your destination is not listed, then you must add that destination to your tenant. For more information, see the Add or manage destinations topic in this manual.
- To set up a metrics destination, do the following as applicable:
- See Send metrics data from Ingest Processor to a Splunk platform metrics index to set up the Splunk metrics store as a metrics destination.
- If you're setting up Splunk Observability Cloud as a destination, review Metrics in Splunk Observability Cloud for more information on how Splunk Observability Cloud processes and displays metrics.
Steps
Perform the following steps to create a pipeline that converts logs to metrics:
- From the home page on Splunk Cloud Platform, navigate to the Pipelines page and select New pipeline, then Ingest Processor pipeline.
- On the Get started page, select Blank pipeline, then Next.
- On the Define your pipeline's partition page, define the subset of incoming data that you want this pipeline to process:
- Select how you want to partition the incoming data that is sent to your pipeline. You can partition by source type, source, and host.
- Enter the conditions for your partition, including the operator and the value. Your pipeline will receive and process the incoming data that meets these conditions.
- Select Next to confirm the pipeline partition.
- On the Add sample data page, do the following:
- Enter or upload sample data to generate a preview of how your pipeline processes data. The sample data must contain accurate examples of the values that you want to convert into metrics.
- Select Next to confirm the sample data.
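For example, if you plan to generate a gauge metric from a response time value, your sample data might include events like the following. This event is purely illustrative; the field names are hypothetical and must match whatever your real data contains:

{"host": "web-01", "service": "checkout", "response_time": 0.245, "status": 200}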
- On the Select a metrics destination page, select the name of the destination that you want to send metrics to.
- (Optional) If you selected Splunk Metrics store as your metrics destination, specify the name of the target metrics index where you want to send your metrics.
- On the Select a data destination page, select the name of the destination that you want to send logs to.
- (Optional) If you selected a Splunk platform destination, you can configure index routing:
- Select one of the following options in the expanded destination panel:
Default: The pipeline does not route events to a specific index. If the event metadata already specifies an index, then the event is sent to that index. Otherwise, the event is sent to the default index of the Splunk Cloud Platform deployment.
Specify index for events with no index: The pipeline routes events to your specified index only if the event metadata does not already specify an index.
Specify index for all events: The pipeline routes all events to your specified index.
- If you selected Specify index for events with no index or Specify index for all events, then from the Index name drop-down list, select the name of the index that you want to send your data to.
If your desired index is not available in the drop-down list, then confirm that the index is configured to be available to the tenant, and then refresh the connection between the tenant and the Splunk Cloud Platform deployment. For detailed instructions, see Make more indexes available to the tenant.
- Select Done to confirm the data destination.
After you complete the on-screen instructions, the pipeline builder displays the SPL2 statement for your pipeline.
- (Optional) To generate a preview of how your pipeline processes data based on the sample data that you provided, select the Preview Pipeline icon (). Use the preview results to validate your pipeline configuration.
- On the SPL2 editor page, add processing commands to your SPL2 statement as needed. You can do this by selecting the plus icon () next to Actions and selecting a data processing action, or by typing SPL2 commands and functions directly in the editor.
- Select the plus icon () next to Actions, then select Create metricization rule.
- Complete the following fields:
- Fill in a name for your metric in the Metric name field.
- Choose the type of your metric in Metric Type.
- Select the field that contains the value of your metric in Field.
- Select the field that contains the timestamp of your metric in Time field.
- Select the field or fields that you want your metrics to be grouped by in Field dimensions.
- In the Metrics preview panel, select the Rollup function that corresponds to your metric type. Each metric type has a default rollup in Splunk Observability Cloud, and your selection in the Ingest Processor must match that default. See Metric types in the Splunk Observability Cloud documentation for more information on metric types and their corresponding default rollups.
Each of these fields is an argument in your SPL2 statement, as described in the following list:
- name: The name of the metric.
- metrictype: Determines how the metric is interpreted and displayed in Splunk Observability Cloud. Choose from Gauge, Counter, and Cumulative Counter. count and sum aggregations should always use the counter metric type, and average, min, and max aggregations should always use the gauge metric type. See Metric types in the Splunk Observability Cloud documentation for more information.
- value: The value of the metric.
- time: The Unix time, in epoch seconds, of the metric.
- dimensions: Zero or more dimensions associated with the metric. If the metric has no dimensions, you can omit this argument.
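As a sketch of how these arguments fit together, the following logs_to_metrics call builds one gauge metric from a hypothetical response_time field and groups it by a hypothetical service field:

| logs_to_metrics
    name="response_time"                // metric name shown in the destination
    metrictype="gauge"                  // gauge, so average/min/max rollups apply
    value=response_time                 // field that holds the numeric metric value
    time=_time                          // epoch-seconds timestamp of the metric
    dimensions={"service": service}     // dimension to group the metric by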
- Select Apply to confirm your metrics definitions.
- Repeat steps 11-13 to generate multiple metrics in your pipeline if desired. A pipeline that generates multiple metrics looks like the following example:
$pipeline = | from $source
    | thru [
        | logs_to_metrics name="mymetric" metrictype="metric_type" value=metric_value time=_time dimensions={"foo": bar}
        | into $metrics_destination
    ]
    | thru [
        | logs_to_metrics name="mymetric2" metrictype="metric_type2" value=metric_value2 time=_time dimensions={"foo": bar}
        | into $metrics_destination
    ]
    | into $destination;
- To save your pipeline, do the following:
- Select Save pipeline.
- In the Name field, enter a name for your pipeline.
- (Optional) In the Description field, enter a description for your pipeline.
- Select Save. The pipeline is now listed on the Pipelines page, and you can apply it as needed.
- To apply this pipeline, do the following:
- Navigate to the Pipelines page.
- In the row that lists your pipeline, select the Actions icon (), and then select Apply. It can take a few minutes to finish applying your pipeline. During this time, all applied pipelines enter the Pending status.
- (Optional) To confirm that Ingest Processor has finished applying your pipeline, navigate to the Ingest Processor page.
If you're sending data to a Splunk Cloud Platform deployment, be aware that the destination index is determined by a precedence order of configurations. See How does Ingest Processor know which index to send data to? for more information.
Send metrics to multiple destinations
You can send your metrics data to multiple destinations from within the same pipeline.
Clone and send metrics to multiple destinations
The Clone and route data action adds an empty thru command to your SPL2 statement, which you can use to create a second destination for your data.
To use the Clone and route data action for this purpose, you must manually move the empty child thru block inside the parent thru command. You can then add child actions to this thru command in your SPL2 statement.
- Create a metric. The SPL2 will look like the following example:
$pipeline = | from $source
    | thru [
        | logs_to_metrics name="a_metric" metrictype="gauge" value=foo time=_time
        | into $metrics_destination
    ]
    | into $destination;
- Navigate to the Actions section of the pipeline builder menu, and select Clone and route data. An empty thru block, with a new $destination2 parameter, is added to the SPL2 statement. For example:
$pipeline = | from $source
    | thru [
        | logs_to_metrics name="a_metric" metrictype="gauge" value=foo time=_time
        | into $metrics_destination
    ]
    | thru [
        | into $destination2
    ]
    | into $destination;
- Use the SPL2 editor to move the empty thru block, with the new $destination2 parameter, inside the parent thru block for your logs_to_metrics SPL2 function. This clones a_metric and sends it to $destination2 in addition to $metrics_destination. For example:
$pipeline = | from $source
    | thru [
        | logs_to_metrics name="a_metric" metrictype="gauge" value=foo time=_time
        | into $metrics_destination
        | thru [
            | into $destination2
        ]
    ]
    | into $destination;
Use a different destination for each metric
To create multiple metric destinations, perform the following steps.
- Create a metric by navigating to the Actions section of the pipeline builder menu and selecting Create metricization rule.
- In the Create metrics from logs menu, perform the following tasks:
- Enter a name for your metric in the Metric name field.
- In the Field field, select the field that contains the value of your metric.
- Select Apply.
For example:
$pipeline = | from $source
    | thru [
        | logs_to_metrics name="a_metric" metrictype="gauge" value=foo time=_time
        | into $metrics_destination
    ]
    | into $destination;
- Create another metric, using the Create metricization rule action. The SPL2 will now look like the following example. Note that a_metric and b_metric use the same $metrics_destination:

$pipeline = | from $source
    | thru [
        | logs_to_metrics name="a_metric" metrictype="gauge" value=foo time=_time
        | into $metrics_destination
    ]
    | thru [
        | logs_to_metrics name="b_metric" metrictype="gauge" value=foo time=_time
        | into $metrics_destination
    ]
    | into $destination;
- Navigate to your SPL2 statement, and change the destination parameter for the newly created metric. For example, rename the second $metrics_destination to $metrics_destination_1:

$pipeline = | from $source
    | thru [
        | logs_to_metrics name="a_metric" metrictype="gauge" value=foo time=_time
        | into $metrics_destination
    ]
    | thru [
        | logs_to_metrics name="b_metric" metrictype="gauge" value=foo time=_time
        | into $metrics_destination_1
    ]
    | into $destination;
You can use this UI action to create multiple metrics. Each time you perform the action, it adds a new thru block with a new metric destination parameter that is set to $metrics_destination. You must manually rename this destination parameter (for example, to $metrics_destination_1) in order to route different metrics to different destinations. You cannot use the Data Management UI to add child actions to this thru block.
For more information on the thru SPL2 command, see the Process a copy of data using Ingest Processor topic in the Route data using pipelines chapter of this manual, and the thru command overview in the SPL2 Search Reference manual.
For information about other ways to route data, see the Routing data in the same Ingest Processor pipeline to different actions and destinations topic in the Route data using pipelines chapter of this manual.
This documentation applies to the following versions of Splunk Cloud Platform™: 9.1.2308, 9.1.2312, 9.2.2403, 9.2.2406 (latest FedRAMP release), 9.3.2408