Splunk® Data Stream Processor

Function Reference

On April 3, 2023, Splunk Data Stream Processor reached its end of sale, and will reach its end of life on February 28, 2025. If you are an existing DSP customer, please reach out to your account team for more information.

All DSP releases prior to DSP 1.4.0 use Gravity, a Kubernetes orchestrator that has reached its end of life. We have replaced Gravity with an alternative component in DSP 1.4.0. Therefore, after July 1, 2023, we will no longer provide support for versions of DSP prior to DSP 1.4.0. We advise all of our customers to upgrade to DSP 1.4.0 to continue receiving full product support from Splunk.
This documentation does not apply to the most recent version of Splunk® Data Stream Processor. For documentation on the most recent version, go to the latest release.

Get data from Google Cloud Monitoring

Use the Google Cloud Monitoring source function to get data from Google Cloud Monitoring.

Prerequisites

Before you can use this function, you must create a connection. See Create a DSP connection to Google Cloud Monitoring in the Connect to Data Sources and Destinations with DSP manual. When configuring this source function, set the connection_id argument to the ID of that connection.

Function output schema

This function outputs data pipeline metric events using the metrics schema.

For each metric in the body field, the function includes a resource_type dimension in addition to the ones that are part of the original payload. This resource_type dimension shows the name of the resource type that the metric belongs to.

The following is an example of a typical record from the read_from_gcp_monitoring_metrics function:

{
    "timestamp": 1582151880000,
    "nanos": 0,
    "id": "2823738566644596",
    "host": "my-test-instance",
    "source": "serviceruntime.googleapis.com/api",
    "source_type": "gcp:monitoring:metrics",
    "kind": "metric",
    "body": {
        "name": "request_count",
        "unit": "",
        "type": "",
        "value": 2,
        "dimensions": {
            "end_time": "1582151880",
            "location": "global",
            "method": "compute.targetVpnGateways.list",
            "resource_type": "consumed_api",
            "service": "compute",
            "start_time": "1582151820",
            "version": "v1"
        }
    },
    "attributes": {
        "default_unit": "1",
        "default_type": "INT64",
        "default_dimensions": {
            "project_id": "my-google-cloud-project"
        }
    }
}
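
You can promote nested values from the body field to top-level fields in later pipeline functions. The following is a minimal sketch, assuming the record layout shown above: it casts body to a map with the ucast function and then extracts the metric name and the resource_type dimension with map_get. The connection ID is a placeholder.

| from read_from_gcp_monitoring_metrics("my-connection-id")
| eval body_map = ucast(body, "map<string, any>", null)
| eval metric_name = ucast(map_get(body_map, "name"), "string", null)
| eval resource_type = ucast(map_get(ucast(map_get(body_map, "dimensions"), "map<string, string>", null), "resource_type"), "string", null)
|... ;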

Required arguments

connection_id
Syntax: string
Description: The ID of your Google Cloud Monitoring connection.
Example in Canvas View: my-gcp-monitor-connection

Optional arguments

initial_position
Syntax: LATEST | TRIM_HORIZON
Description: The position in the data stream where you want to start reading data. Defaults to LATEST.
  • LATEST: Start reading data from the latest position on the data stream.
  • TRIM_HORIZON: Start reading data from the very beginning of the data stream.
Example in Canvas View: LATEST

SPL2 example

When working in the SPL View, you can write the function by providing the arguments in this exact order.

| from read_from_gcp_monitoring_metrics("my-connection-id", "TRIM_HORIZON") |... ;

Alternatively, you can use named arguments to declare arguments in any order. The following SPL2 example uses named arguments to specify the initial_position argument before the connection_id argument.

| from read_from_gcp_monitoring_metrics(initial_position: "TRIM_HORIZON", connection_id: "my-connection-id") |... ;

If you want to use a mix of unnamed and named arguments in your functions, you must list all unnamed arguments in the correct order before providing the named arguments.
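
For example, the following sketch passes connection_id as an unnamed first argument, followed by the named initial_position argument:

| from read_from_gcp_monitoring_metrics("my-connection-id", initial_position: "TRIM_HORIZON") |... ;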

Limitations of the Google Cloud Monitoring source function

The Google Cloud Monitoring source function uses scheduled data collection jobs to ingest data. See Limitations of scheduled data collection jobs for information about limitations that apply to all scheduled data collection jobs.
